Channel: Symantec Connect - Endpoint Management - Articles

How to add the SMP site code as an ITA report filter?


If your ITA Server is configured to use its Multi-CMDB functionality, you may wish to run ITA reports that display data for a single SMP or a selection of SMPs instead of all of them.

In order to do this, please follow these steps:

 

The “Add Remove Programs Search” report is used in this HOWTO.

Before.jpg

 

  1. In Reporting Services > ITA, highlight the report and then choose Download from the arrow selection.

ReportServer1.jpg

ReportServer2.jpg

Save the RDL file and then upload it, giving it a new name.

Upload.jpg

Upload2.jpg

 

  2. Highlight the copied report, and choose “Edit in Report Builder” from the arrow selection.

Edit.jpg

Right-click the Datasets folder and choose “Show Hidden Datasets”.

Hidden.jpg

  3. Right-click the dFilter dataset and go to its Query.

Query.jpg

Copy the query, paste it into Notepad, and then press Cancel.

Replace all [Filter] instances with [Computer].

Replace all [Filter - Name] instances with [Computer - Server].

Before:

WITH
  MEMBER [Measures].[ParameterCaption] AS [Filter].[Filter - Name].CURRENTMEMBER.MEMBER_CAPTION
  MEMBER [Measures].[ParameterValue] AS [Filter].[Filter - Name].CURRENTMEMBER.UNIQUENAME
  MEMBER [Measures].[ParameterLevel] AS [Filter].[Filter - Name].CURRENTMEMBER.LEVEL.ORDINAL
SELECT {[Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel]} ON COLUMNS,
  [Filter].[Filter - Name].ALLMEMBERS ON ROWS
FROM [Installed Software]

After:

WITH
  MEMBER [Measures].[ParameterCaption] AS [Computer].[Computer - Server].CURRENTMEMBER.MEMBER_CAPTION
  MEMBER [Measures].[ParameterValue] AS [Computer].[Computer - Server].CURRENTMEMBER.UNIQUENAME
  MEMBER [Measures].[ParameterLevel] AS [Computer].[Computer - Server].CURRENTMEMBER.LEVEL.ORDINAL
SELECT {[Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel]} ON COLUMNS,
  [Computer].[Computer - Server].ALLMEMBERS ON ROWS
FROM [Installed Software]

 

  4. Right-click the Datasets folder and choose “Add Dataset”.

         Name = dServer

         Select the “Use a dataset embedded in my report.” option.

         Data source = IT Analytics

Dataset.jpg

                   Open Query Designer.

                   Press the “Design Mode” icon to change to a text field.

DesignMode_.jpg

                   Copy query from Notepad and paste into the text field.

Press the Browse button and select the Installed Software cube.

Browse.jpg

                   Expand the Computer dimension to see its fields.

                   Press the Exclamation icon to run the query, and check its output.

Test.jpg

                   If correct, press OK and then select Fields.

         Change the Computer_Server field name to ParameterCaptionIndented.

Before:

FieldBefore.jpg

After:

FieldAfter.jpg

         Press OK.

 

  5. Right-click the Parameters folder and choose “Add Parameter”.

         Name = pServer

         Prompt = Server

         Select the “Allow multiple values” option.

         Make it visible.

Param1.jpg

         Available Values = Get values from a query

                   Dataset = dServer

                   Value = ParameterValue

                   Label = ParameterCaption

                   OK

Param2.jpg

 

  6. Right-click the Add Remove Programs dataset and choose Properties.

         Open Query Designer & press the Query Parameters icon.

Icon.jpg

                   <Enter Parameter> = pServer

                   Dimension = Computer

                   Hierarchy = Computer - Server

                   Check the “Multiple Values” box.

                   Default = All

                   Press OK.

QP.jpg

You may have to change some of the default values so that they match data found in your database. Where a letter is used, the query looks for it anywhere within the value, not just as the first character; start by changing the default value of the pComputerName parameter.

Copy the @pFilter line and paste it below itself.

                   Replace pFilter with pServer within the copied line.

                   Add an additional closing bracket ‘)’ to the end of the query.

pFilter.jpg

                   Execute query to check that it works.

                   Press OK twice if it does.
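For illustration only, here is a hedged sketch of what the edited filter section of the query often looks like (the STRTOSET calls and the nesting are assumptions based on how the query designer typically emits parameters; your generated text may differ):

FROM ( SELECT ( STRTOSET(@pServer, CONSTRAINED) ) ON COLUMNS
    FROM ( SELECT ( STRTOSET(@pFilter, CONSTRAINED) ) ON COLUMNS
        FROM [Installed Software]))

The extra closing bracket at the end of the query compensates for the additional nested sub-select introduced by the copied line.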

 

  7. Right-click the @pServer parameter within the Parameters folder and choose Properties.

         Check that its parameter name is pServer, and change it if not.

         Press OK.

 

  8. Only do this if you do not want the report to run right away. Right-click the pFilter parameter within the Parameters folder, choose Properties and select Default Values.

         Default Values = No default value

NoDefaults.jpg

Press OK.

SAVE!

 

  9. In the Symantec Management Console, go to Reports > All Reports.

         Right-click a custom ITA reports folder and choose “New IT Analytics Report”.

                   Report type = Report

                   Folder = IT Analytics

                   Name = the name of the newly created report in Reporting Services.

                   Parameter area = Initially visible

ITAReport.jpg

         Press the “Add Report” button.

Refresh the Reports tree. The Server parameter now allows you to choose which SMP site code to display data for.

After.jpg


Make Your Entry-Point Defenses Complete with Security Patching!


The Unrelenting Damage of Cyber Attacks

Damage from cyber attacks shows no sign of stopping: infections from browsing websites, malware planted when opening email attachments, and more.

As you well know, the consequences create a range of business risks, such as leaks of valuable customer information and damage to corporate image.

1-biz-impact_0.png

 

Reports of damage from cyber attacks keep increasing; this is no longer somebody else's problem.

The websites that employees and staff in your organization browse, and the attachments in the emails they receive, may well be booby-trapped.

 

What Countermeasures Are Needed?

Defending against such attacks requires multiple layers, and entry-point measures (keeping the OS, antivirus software, and third-party software up to date) are a key element.

As the web browser and plug-in vulnerability data below shows, attacks exploiting vulnerabilities in third-party software are on the rise.

In fact, in many cases the damage could have been prevented had the latest security updates been applied to this software.

 

2-sort.png

 

Challenges in Implementing Countermeasures

Why, then, do the latest security updates for Adobe Reader, Flash Player, Java Runtime Environment, and the like go unapplied?

There are certainly cases where business constraints make it impossible to apply the latest updates.

Beyond those, the following reasons are common:

  • Patching is left to the PC owner, who skips security updates because of the hassle
  • There is no visibility into which machines are unpatched
  • Preparing installers and command lines for every vendor and platform places a heavy burden on administrators
  • Security updates are pushed with distribution software, but installation fails and the results are unknown

Detecting vulnerabilities and applying updates is operationally demanding, and many customers have not managed to take adequate measures.

5-ITAdmin-resize-outline.png

 

 

The Right Solution

To avoid these business risks and ease the burden on administrators, Symantec offers the Symantec Patch Management Solution.

In addition to the Adobe Flash Player and JRE mentioned above, it broadly supports more than 50 applications, including the major browsers (IE, Firefox, Chrome) and document software (Microsoft Word, PDF, OpenOffice, LibreOffice), solving the administrator's challenges.

  • The management server centrally manages the latest definition data (vendor information, severity, installers, install command lines, etc.)
  • The management agent builds a database of the patch status of third-party products
  • Security updates selected by the administrator are downloaded automatically from each vendor's site and applied automatically to unpatched machines

It can also apply security updates published by Microsoft, achieving centralized management without going through a Microsoft WSUS (Windows Server Update Services) server.

4-flow-image.png

 

Summary (Make Your Defenses Complete with Symantec Patch Management Solution!)

In this way, Symantec Patch Management Solution automates the application of security updates for the third-party software that attackers increasingly target, as well as for Microsoft software, strengthening entry-point defenses by resolving security risks quickly.

If you do suffer damage from a cyber attack, the costs are immeasurable: responding to customers, rebuilding trust in your brand, and restoring business operations.

Taking sufficient measures in advance reduces business risk and lightens the operational load.

Please consider Symantec Patch Management Solution.

(Product homepage:)

http://www.symantec.com/ja/jp/patch-management-solution

 

Multicasting white paper


Ever wondered how multicasting works? This document is an in-depth guide to how multicasting is implemented in the SMP.

Contents

Executive Summary
Introduction to Multicasting
Benefits of Multicasting
Comparing Transmission Methods
Unicast
Broadcast
Multicast
Conserving Bandwidth
Reduction of Hardware Requirements
Improved Design
Using Altiris Multicasting
Configuring Multicasting
General Tab
Downloads Tab
Blockouts Tab
Global Symantec Management Agent Settings
SWD Package Options
Description of the Multicast Process
Agent start-up process for multicast
Package Notification and Download process
Agent registry entries

 

Please note that this document is provided without warranty or support and is for information only, to support configuration of the technology. This functionality changes as new versions of the software are released, and all statements in this document should be independently verified against the software version implemented.

How to add custom data to existing processes like Incident Management, Change Management etc


Step 1. Create an Integration Library with a User-Defined Type with DB Mapping (ORM).

Custom Data 1.jpg  

Step 2. Add a Process Data Class to the IMExtendedDemoORM.

Custom Data 2.jpgCustom Data 3.jpg

 

Step 3. Add properties as needed. For this example I have added Text, Number, Date, Logical, and Choice List properties. Click Next once you have all the required properties.

Custom Data 4.jpgCustom Data 5.jpg

Add an index for the data if needed, create the DataType, and hit Finish. Once it completes, it will generate the required code.

Custom Data 6.jpgCustom Data 7.jpg

Step 4. Now click the “Included Libraries” button and add Symantec.ServiceDesk.IM.Automation.dll to bind this custom data to the Incident Management Automation library.

Custom Data 8.jpgCustom Data 9.jpg

Custom Data 10.jpg

Step 5. Click Add in the Generator and select Automation Library Generator. When it asks you to choose a Service ID, select “Use Existing Automation Library” and choose Incident Management. Once done, hit Compile and then Close.

Custom Data 11.jpg Custom Data 12.jpg

Custom Data 13.jpgCustom Data 14.jpg

 

Custom Data 15.jpg

Step 6. Now log in to the Process Manager Portal with an admin account, preferably the native admin account (admin@symantec.com), and go to Admin > Portal > Plugin Upload. Select Automation Library as the Plugin Type and upload the IMExtendedDemo.dll file from “C:\Program Files\Symantec\Workflow\Shared\customlib”. Once uploaded, restart IIS so the changes appear in Process Automation.

 

Custom Data 16.jpg
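To restart IIS from an elevated command prompt you can use the standard iisreset utility, which stops and restarts all IIS services so the uploaded library is picked up:

iisreset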

 

Step 7. Refresh IE (or close and re-open it), log in with the admin account, and go to Admin > Process Automation; you will see the IMExtendedDemoDataType next to Incident Management.

Custom Data 17.jpg

You can see the new DataType has been added to the Conditions and Actions of Rulesets, Email templates, and Reports.

Custom Data 18.jpg

Custom Data 19.jpg

Custom Data 20.jpg

Custom Data 21.jpg

How to Protect PXE Boot Menu for Redeployment (Managed) Computer in Deployment 7.5.


How to password-protect the PXE boot menu for Redeployment (Managed Computers) in Deployment 7.5.

  1. Create a PXE Linux preboot environment named “LinuxPXEWinPE”.
  2. Create a PXE Windows PE preboot environment named “WinPE_PXE”.
  3. Create a “C:\WinpeISO” folder on the Notification Server.
  4. Run bootwiz.exe from C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Bootwiz\{374E1C49-4F58-4F5C-8D51-07A30F0D44AD}\cache\bootwiz
  5. Click Create New Configuration.
  6. Enter the name “WinPEwithPassword”.
  7. Select Windows PE as the pre-boot operating system for this configuration.
  8. OEM Extension: DS
  9. Boot Media Type: ISO
  10. Boot Task Type: Automation
  11. In the Path field, enter C:\WinpeISO\WinPEwithPassword.iso
  12. Click Next and create the WinPE ISO file.
  13. Download syslinux from https://www.kernel.org/pub/linux/utils/boot/syslinux/syslinux-6.01.zip
  14. Unzip syslinux to the folder C:\WinpeISO\syslinux61
  15. Go to the folder C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images and copy the folder LinuxPXEWinPE to LinuxPXEWinPE.backup
  16. Copy the file pxelinux.0 from C:\WinpeISO\syslinux61\bios\core to the folder C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC
  17. Copy the file menu.c32 from C:\WinpeISO\syslinux61\bios\com32\menu to C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC
  18. Copy the file ldlinux.c32 from C:\WinpeISO\syslinux61\bios\com32\elflink\ldlinux to C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC
  19. Copy the file memdisk from C:\WinpeISO\syslinux61\bios\memdisk to C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC
  20. Copy the file libutil.c32 from C:\WinpeISO\syslinux61\bios\com32\libutil to C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC
  21. In the folder C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC\pxelinux.cfg, create a file named “default” with the following content:

default menu.c32
prompt 0

menu title PXE Special Boot Menu
menu INCLUDE pxelinux.cfg/graphics.conf
MENU AUTOBOOT Starting Local System in # seconds

label WinPE
    MENU LABEL Boot Windows PE from network
    MENU PASSWD passw0rd
    kernel memdisk
    append iso raw
    initrd WinPEwithPassword.iso
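If you prefer not to keep the password in plain text, syslinux also accepts hashed passwords here (a hedged example with a placeholder hash; such hashes can be generated with utilities like sha1pass from the syslinux distribution):

    MENU PASSWD $4$salt$hashedvalue$

Check the documentation of your syslinux version for the exact supported hash formats.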

 

22. In the folder C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC\pxelinux.cfg, create a file named graphics.conf with the following content:

menu color tabmsg 37;40      #80ffffff #00000000

menu color hotsel 30;47      #40000000 #20ffffff

menu color sel 30;47      #40000000 #20ffffff

menu color scrollbar 30;47      #40000000 #20ffffff

MENU MASTER PASSWD yourpassword

MENU WIDTH 80

MENU MARGIN 22

MENU PASSWORDMARGIN 26

MENU ROWS 6

MENU TABMSGROW 15

MENU CMDLINEROW 15

MENU ENDROW 24

MENU PASSWORDROW 12

MENU TIMEOUTROW 13

MENU VSHIFT 6

MENU PASSPROMPT Enter Password:

NOESCAPE 1

ALLOWOPTIONS 0

 

23. Copy the WinPEwithPassword.iso file to the folder C:\Program Files\Altiris\Altiris Agent\Agents\Deployment\SBS\Images\LinuxPXEWinPE\x86PC.

24. Go to the Notification Server console and configure the NBS General Settings for Redeployment (Managed Computers): select Respond to Managed computers, choose “LinuxPXEWinPE” as the PXE boot image, and select Continue immediately (Do not prompt).

Now the password "passw0rd" must be entered to boot a managed computer into Windows PE for redeployment.

Imaging Windows Embedded Standard and Windows 7 Embedded through Altiris 7.1


Why this Document?

Altiris 7.x does not support imaging for “Windows Embedded Standard and Windows 7 Embedded”

(http://www.symantec.com/business/support/index?page=content&id=HOWTO9965)

1.jpg

2.jpg

 

 

What is the technical challenge for imaging through Altiris 7.x?

            Altiris 7.x doesn’t recognize the OS type, so when deploying we do not get the drop-down for selecting the license.

 

What is the solution?

            Please read through ….

I work in an environment with more than 8,000 embedded-OS machines, where imaging was previously done with DS 6.9, pen drives, or other ad-hoc methods. We have since introduced a centralized Altiris 7.1 server with many site/package servers, yet surprisingly Altiris 7.1 image deployment is not supported for embedded OSes. My environment is now being upgraded from Windows XP Embedded to Windows Embedded Standard or Windows 7 Embedded, so to overcome the challenge I did some R&D and found a way out. Below are the steps.

For image capturing.

Image capture runs fine just as for other OSes. The OSes mentioned above have Sysprep available in the system build itself, so follow the same steps:

  1. Prepare for Image capture
  2. Create Image (rdeploy)
  3. Reboot To [Production]

Once the image is created, note its GUID and storage location:

  • Settings – Deployment and migration – Disk images

Select the image you captured in the steps above and note down the package location:

3.jpg

 

For deploying the image.

Here comes the tricky part…

  1. The first task will be to reboot to PXE or Automation.
  2. Task: Run batch script 1 (map a drive and run rdeploy to deploy the image):

*****************************************

net use y: "\\Package server\Deployment\Task Handler"

y:\rdeploy\rdeploy.exe -rescan -md -f "y:\image\<GUID>\<Image name>.img"

*****************************************

Note:

You noted down the package location in the step above; you can get the ’Package server’ and ’GUID\Image name’ values from it. I have included an example to help.

Package location:

3.jpg

In the above screenshot, ’Package server’ will be ‘Test-img.domain.com’

and ’GUID\Image name’ will be ‘a5af6141-5969-xxx-xxxx-xxxxxxxx\test.img’.
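Putting those example values together, batch script 1 would read (adjust to your own package location):

*****************************************

net use y: "\\Test-img.domain.com\Deployment\Task Handler"

y:\rdeploy\rdeploy.exe -rescan -md -f "y:\image\a5af6141-5969-xxx-xxxx-xxxxxxxx\test.img"

*****************************************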

  3. Task: Run batch script 2 (fix the boot configuration (BCD) so the machine boots from the C: drive):

*****************************************

bcdedit /store c:\boot\bcd /set {bootmgr} device boot

bcdedit /store c:\boot\bcd /set {default} device partition=c:

bcdedit /store c:\boot\bcd /set {default} osdevice partition=c:

*****************************************

  4. Reboot to [Production]

 

Sample Job screen shot:

4.jpg

Windows PE 4.0, PXE and Ghost


After performing the steps in this walkthrough you will have a working PXE and TFTP setup delivering a WinPE 4.0 package with the latest Ghost executables and two optional default WinPE 2.0 Ghost packages.

This walkthrough assumes that Ghost Solution Suite 2.5 is already installed on a 2008 server (other OSes are an option; however, the screenshots and file paths are from a 2008 server installation). These instructions also assume that your Ghost server, DHCP server, and client are on the same network segment.

Because WinPE, ADKsetup.exe, TFTPD32.exe, and the 3Com PXE server are not Symantec products, Symantec technical support is not responsible for their performance and cannot assist with troubleshooting or supporting these technologies. The appropriate manufacturer of each of these third-party products will need to be contacted for help troubleshooting its respective technology.

Downloading and installing required tools.

Windows ADK

  1. Download and run the ADKsetup.exe from the following link.   This is for the Windows Assessment and Deployment Kit (ADK) for Windows 8 http://www.microsoft.com/en-in/download/details.aspx?id=30652
  2. Choose the “Install the Assessment and Deployment kit to this computer” radio button.  Maintain the default locations if the batch files included are to be used.
You will only need to select the “Deployment Tools” and “Windows Preinstallation Environment (Windows PE)” features.
    2-5-2014 9-52-35 AM.jpg
  4. This download may take a long time but you can move forward with setting up this server while the download finishes.

PE batch files.

  1. Create the following folder structure.   Make sure that capitalization is correctly observed.
    • c:\WinPE
    • c:\WinPE\Ghost
    • c:\WinPE_x64
    • c:\WinPE_x64\mount
    • c:\Drivers
    • c:\TFTPBOOT
Download the Winpe1.txt and Winpe2.txt files from the following KB to the c:\WinPE folder, renaming them as noted to .bat files: http://www.symantec.com/docs/HOWTO93846

 

 

3Com install

Run the Setup.exe from the following folder of the Ghost install disc: “\Extras\3Com Boot Services\Install\”
  2. Continue the install wizard. You will see a window entitled 'Setup Type.' Specify Server rather than Administrator or Custom.
  3. Accept default values for the rest of the installation.
  4. When done with the wizard, click on Finish, and exit all windows.
  5. Run the PXE Server.
  6. Click Start and All Programs.
  7. Select 3Com Boot Services and click PXE Server. You will be prompted to create a DHCP proxy. If you are installing 3Com Boot Services on a computer that does not run a DHCP Server, answer 'Yes.'

 

NOTE:  If the PXE Boot Services are installed on the DHCP Server, you must use the option that adds the "Option 60 PXEClient Tag String" to DHCP packets.

If the PXE Boot Services are installed on a different computer, you must enable the ProxyDHCP function in the PXE Server software. The first time you run the PXE Server, you will be prompted to enable ProxyDHCP. Symantec recommends that the PXE, Ghost, and DHCP servers be installed on the same subnet. These instructions assume that this is true.

 

Download and install TFTPD32

  1. The TFTPD32 Standard Edition Installer is what is used for this demonstration.   The service version could be used and then set to run as a service rather than needing to be logged in and running.   Download it from http://tftpd32.jounin.net/tftpd32_download.html
  2. Once downloaded start the install and accept the defaults.

Create a TCP/IP Network Boot Image with the Ghost Boot Wizard.

At least one PE 2.0 package has to be built with the Ghost Boot Wizard, even if you will be replacing it with a manually created PE 4.0 package. Custom PE 4.0 and stock PE 2.0 Ghost packages can exist on the same server.

In this demonstration we create 1 item and it will be the default option when PXE booting. 

  1. Click Start > Programs > Symantec Ghost > Symantec Ghost Boot Wizard.
  2. Select Windows PE [Default]
  3. Click Next.
Select the TCP/IP Network Boot Image option and click Next five times.
  5. Browse to the c:\TFTPBOOT
  6. Name the package  PE4GHOST
  7. Click Next twice.
  8. Click Start Again
  9. Select Windows PE [Default]
  10. Click Next.
Select the TCP/IP Network Ghost Boot Image option and click Next five times.
  12. Browse to the c:\TFTPBOOT
  13. Name the package  PE4GHOST
  14. Click Next and then Finish.
In Windows Explorer, copy bootmgr.exe from the C:\TFTPBOOT\boot folder to the C:\TFTPBOOT folder.

BCD File

PXE boot packages are tracked in the BCD file located in the C:\TFTPBOOT\boot folder.

The default behavior for the boot menu is that if only one PXE package has been created with the Ghost Boot Wizard, no menu is presented and the only item listed is booted. If a second boot package is created with the Ghost Boot Wizard, a menu is displayed and the first package created is the default option with a 30-second countdown.

Microsoft provides bcdedit.exe for modifying this file. The command-line tool can be confusing and tedious to use. Several third-party tools allow editing the BCD file visually in a GUI and may be worth researching if you need to modify the BCD boot menu to change the default selection or the boot-menu timeout.
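For example (a sketch using the store path from this walkthrough; the entry identifier is whatever the enumeration reports for your package), the timeout and default entry can be changed directly with bcdedit:

bcdedit /store C:\TFTPBOOT\boot\bcd /enum all

bcdedit /store C:\TFTPBOOT\boot\bcd /timeout 10

bcdedit /store C:\TFTPBOOT\boot\bcd /default {entry-GUID-from-enum}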

Building the PE 4.0 boot package with Ghost.

The following directories and files should have already been created and downloaded as noted in http://www.symantec.com/docs/HOWTO93846

c:\WinPE
c:\WinPE\Ghost\  (the updated Ghost executables from HOWTO93843)
c:\WinPE_x64
c:\WinPE_x64\mount
c:\Drivers
c:\WinPE\Winpe1.bat

c:\WinPE\Winpe2.bat

If you don’t already have the updated Ghost executable Ghost64.exe (12.0.0.4570) downloaded, contact support and mention that you need the Ghost executables noted in HOWTO93846, then place them in the c:\WinPE\Ghost\ folder.

Download any new Win8_x64 NIC drivers and/or storage drivers (if necessary) into the C:\Drivers folder. Make sure that the drivers have been extracted and are not still in the .zip or .exe format in which they were downloaded.

 

  1. Copy the winpe.wim from the amd64 folder that is located under the ADK installer directory C:\Program Files (x86)\Windows Kits\8.0\Assessment and Deployment Kit\Windows Preinstallation Environment\amd64\en-us, to the C:\WinPE folder.
  2. Launch a command window as administrator by navigating to the Start Menu> All Programs> Accessories>   and Right clicking on the Command Prompt and selecting Run as administrator
  3. In the command prompt type cd\ press Enter
  4. Type cd "Program Files (x86)\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools"  press Enter
  5. Type DandISetEnv.bat  press Enter
  6. Type cd\ press Enter
  7. Type cd WinPE press Enter
  8. Type Winpe1.bat Press Enter
Look for the Press Any Key prompt and review the lines above it for errors. If no errors are seen, press any key. Do this for each prompt you receive.
  10. To add the drivers from the c:\Drivers folder type Dism /Image:c:\winpe_x64\mount /Add-Driver /Driver:c:\drivers /Recurse and press Enter
  11. Leave the command window open for now; open Windows Explorer and navigate to c:\winpe_x64\mount\windows\system32
  12. Open the startnet.cmd file with Notepad. You will see that it only contains the word wpeinit at the top. On the line below wpeinit, add the following script lines as they appear below (each command on its own line, so just copy and paste them):

wpeutil disablefirewall
X:
cd\
cd ghost
start ghost64.exe

  13. Exit Notepad, saving your changes, close the Windows Explorer window, and return to the command window from step 11.
  14. Type cd\ Press Enter
  15. Type cd WinPE Press Enter
  16. Type Winpe2.bat Press Enter
  17. Look for the Press Any Key prompt and review the lines above it for errors. If no errors are seen, press any key. Do this for each prompt you receive.

In the c:\WinPe_x64 folder you should now have a winpe_x64.iso that can be burned to DVD and booted to test its viability. If, when booted, the local system's hard drive is visible and the network is also working, this package is viable and can be transferred to the PXE server.

Replacing .wim file.

  1. In Windows Explorer, navigate to the C:\WinPe_x64\ISO\sources folder and locate the boot.wim file.
  2. Right-click the boot.wim file and rename it to PE4GHOST.wim.
  3. Copy this file to the C:\TFTPBOOT\boot folder, replacing the existing PE4GHOST.wim.
  4. The Ghost PE 2.0 menu item will be replaced with this new PE 4.0 package.

Configuring the PXE server

Specify client computers in the BootPTab Editor.

  1. Click Start > All Programs > 3Com Boot Services > BootPTab Editor.
  2. Click Edit and then click Add Host to open the Edit Host window.
  3. Provide the information as follows:
    1. Name: This is the computer name of the client computer, or host computer. If you don't want to specify a particular computer name, click the box to "Use node for name"
    2. Node: Specify the MAC address of the client computer that should use the PXE menu. If you want to specify multiple computers based on MAC addresses, you can use a question mark as a wildcard character. For example, if one computer's MAC address is 123456889988 and another is 123456852963, notice that the first 6 digits are the same. Using the MAC address of 123456?????? in the Node field would allow both machines to connect to use the PXE package. This will work as long as there are no other BootPTab host entries that exactly match the MAC address of the computers. You can also use all wildcards (???????????) to allow all of your clients to receive the PXE package.
    3. IP #: Do not use this box.
    4. Image: Type the information as follows exactly as follows: /boot/pxeboot.n12
  4. Click OK. The entry should be reflected on the Hosts tab.
  5. Click on File, and Save. Do NOT change the file name of the BOOTPTAB. It must be named 'BOOTPTAB' (without any extension) and be located in C:\TFTPBOOT.
  6. Close the BootPTab Editor.
  7. Launch the 3com PXE server and keep it running whenever clients need to PXE boot.

Configuring TFTPD32

Open TFTPD32 from the desktop shortcut. (It must be launched before it works; if you want it to run as a service, consider using the service edition of TFTPD32.)
  2. Configure the Current Directory to point to the C:\TFTPBOOT and select the correct Network card from the Service Interface see below.
    2-5-2014 9-54-35 AM.jpg
  3. Open the Settings option
The only check box that needs to be checked on the Global tab is TFTP Server.
    2-5-2014 9-55-59 AM.jpg
  5. Click OK and restart if requested.
Because this walkthrough didn’t use the TFTPD32 service installer, we have to launch the TFTPD32 application and leave it running in order for clients to PXE boot.

Windows Operating System Key Inventory


The following article will guide you through the steps of creating a Windows Operating System Key Inventory, importing operating system key data, creating SQL tables, and creating compliance reports. It will enable you to discover and report on the compliance of your Windows operating system keys. This article applies to Windows 7, Windows 8, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2.

 


aila2: Version 1 Full Package with Installation and Execution Scripts


Introduction:

The aila2 collection of tools comes with quite a few articles and downloads (see references [1-6]). But how can we benefit from this tool without having to read the full documentation? In short, how can we deploy the tool as simply as possible?

This is what this download page aims to answer.

Package content:

The package aila2-version1-full.zip contains the following files:

  • aila2.exe [2]
  • aila2.html [5]
  • aila2.js [3][5]
  • aila2-filter.exe
  • aila2-runner.exe
  • aila2-siteconfig.exe
  • index.html [5]
  • install.bat
  • quickview.html [3]
  • run.bat
  • style.css [3][5]
  • web.config
  • web2.config
  • web3.config
  • web4.config

Installation:

The package contains two batch files to install and run the toolkit. Here are the variables that you should change to match your environment:

  • aila2 web-directory: C:\inetpub\wwwroot\aila2
  • IIS log files directory: c:\windows\system32\LogFiles\W3SVC1

To install the tool on the server you need to unpack the attached zip file in a local folder.

Installing the web-UI:

Double-click install.bat once you have modified the file to match your environment (or as-is, if the defaults already apply).

The web-application is now available. Check on your browser that the page can load properly:

http://localhost/aila2

If you encounter an error, you will most likely need to use an alternative web.config file. This is because certain directives are necessary for the web-application to run, and IIS does not accept the same directive being set twice (so inherited configuration cannot be overwritten locally).

The default web.config defines index.html as the default document (so you can navigate to the aila2 folder without specifying the filename to be loaded) and maps files with the .json extension to the 'application/json' MIME type.

Web2.config, web3.config, and web4.config each provide an alternative configuration (default document only, application/json only, and neither, respectively).

Here is the content of the install.bat file:

@echo off

SET aila2="C:\inetpub\wwwroot\aila2"

echo Create the aila2 folder in the web-root...
IF NOT EXIST %aila2% mkdir %aila2%

echo Copy the web-ui files (if they do not exist yet)...
IF NOT EXIST %aila2%\style.css  copy style.css %aila2%
IF NOT EXIST %aila2%\quickview.html copy quickview.html %aila2%
IF NOT EXIST %aila2%\aila2.html copy aila2.html %aila2%
IF NOT EXIST %aila2%\aila2.js copy aila2.js %aila2%
IF NOT EXIST %aila2%\index.html copy index.html %aila2%
IF NOT EXIST %aila2%\web.config copy web2.config %aila2%\web.config

Generating IIS log data:

Once the web-UI is configured, you need to parse the IIS log files in order to generate the result files used by the UI.

To do this, make sure that the run.bat file points to your web-application folder and IIS log directory. Once the prerequisites are satisfied, you can launch run.bat from the unpacking directory.

Once the initial execution has completed, you can schedule run.bat to run daily so that your calendar view is updated automatically. Note that the scheduled task must launch run.bat from the directory where the aila2 executables are located (see the example after the note below).

Note! If you have a lot of IIS log files you may want to clean up the folder to keep only the last 60 or 90 days, depending on how far back you want to go (but note that the default calendar view limits the display to 60 result files maximum).
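For example (the folder is an assumption; use the directory where you unpacked the aila2 executables), a daily task can be created with schtasks, forcing the working directory before calling run.bat:

schtasks /create /tn "aila2 daily run" /sc daily /st 06:00 /tr "cmd /c cd /d C:\aila2 && run.bat"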

Here is the content of run.bat:

@echo off

SET aila2=C:\inetpub\wwwroot\aila2


SET input=c:\inetpub\logs\LogFiles\W3SVC1

if EXIST %input% goto run

SET input=c:\windows\system32\LogFiles\W3SVC1

:run

aila2-runner -i %input% -o %aila2%

echo Running siteconfig now...
aila2-siteconfig -i %aila2% > %aila2%\siteconfig.json

Conclusion:

With this download you can now set up a single server very quickly and review how much work is being done on your SMP server via its IIS log files.

References:

[1] aila2-filter: A tool to filter IIS log files by time-taken or uri-stem fields (aila2-filter.exe)
[2] aila2: A C# program to analyze Altiris IIS log files (aila2.exe)
[3] aila2-web: How to use Quickview.html to draw charts from IIS log files
[4] aila2-runner: A simple tool to analyze all log files in a folder (aila2-runner.exe)
[5] aila2-web: Introducing the Calendar View and siteconfig json file
[6] aila2-siteconfig: A tool to generate a site configuration file (aila2-siteconfig.exe)

 

Displaying the Organizational View Structure via SQL


When you go to "Manage > Organizational Views and Groups" and select either an Org View or an Org Group, if one or more Org Groups contain resources, the Organizational Group column will display the sub-group hierarchy as a single string value, separating each node with a backslash:

Default.jpg

 

The following query allows you to target either an Org View or an Org Group so that you can see this same representation by directly interrogating the database:

 

declare @parentGuid uniqueidentifier;
set @parentGuid = N'Guid of Org View or Org Group goes here';
declare @folders TABLE (  ScopeCollectionGuid uniqueidentifier
                        , ParentScopeCollectionGuid uniqueidentifier
                        , BaseGuid uniqueidentifier)
declare @OrganizationalGroup TABLE (ScopeCollectionGuid uniqueidentifier
                    , OrganizationalGroup nvarchar(1000))

INSERT @folders
SELECT DISTINCT ScopeCollectionGuid = fbf.[FolderGuid]
, ParentScopeCollectionGuid = f.[ParentFolderGuid]
, BaseGuid = ISNULL(ipf.BaseGuid, fbf.FolderGuid )
FROM FolderBaseFolder fbf
INNER JOIN ItemFolder f ON fbf.FolderGuid = f.ItemGuid
LEFT OUTER JOIN ItemPresentation ipf ON ipf.Guid = fbf.FolderGuid
WHERE fbf.ParentFolderGuid = @parentGuid
AND f.[IsFolder] = 1;

WITH Hierarchy AS (
select f.ScopeCollectionGuid, f.ParentScopeCollectionGuid, f.BaseGuid, CAST(s.String as NVARCHAR(1000)) [String]
from @folders f
INNER JOIN String s ON s.BaseGuid=f.BaseGuid AND s.StringRef='item.name' AND s.Culture=''
where f.BaseGuid=@parentGuid
UNION ALL
select f.ScopeCollectionGuid, f.ParentScopeCollectionGuid, f.BaseGuid, CAST(h.String+'\'+s.String as NVARCHAR(1000)) [String]
from @folders f
INNER JOIN String s ON s.BaseGuid=f.BaseGuid AND s.StringRef='item.name' AND s.Culture=''
inner join Hierarchy h ON f.ParentScopeCollectionGuid = h.ScopeCollectionGuid
)
INSERT @OrganizationalGroup
SELECT h.ScopeCollectionGuid, h.String
FROM Hierarchy h
;

/* Example: using the @OrganizationalGroup table variable in a query */

SELECT vi.Guid, vi.Name, og.OrganizationalGroup
from vItem vi
inner join ScopeMembership sm
on sm.ResourceGuid = vi.Guid
inner join @OrganizationalGroup og
on og.ScopeCollectionGuid = sm.ScopeCollectionGuid

ORDER BY vi.Name ASC;

 

 

SQL.jpg

 

 

AllResources.jpg

SQL2.jpg

SMP 7.5 New Package Delivery Features: File Hashing and Snapshot Signing!



Introduction

The long-awaited package hashing feature made a rather quiet entrance in Symantec Management Platform 7.5. Today we shine a light on this feature and show how it (and a few others we'll discuss along the way) greatly strengthens the security of Symantec Management Platform package delivery.


Snapshot creation

Package snapshots are created on the Management Platform once a package is defined and saved, and during the delta and full package refresh scheduled tasks. A snapshot can also be refreshed using the "Update Distribution Point" context-menu option on the package.

The snapshots are stored under the "C:\ProgramData\Symantec\SMP\Snapshots" directory. This directory contains the snapshot xml file (saved as <packageid>.xml) and the snapshot xml signature file (saved as <packageid>.sig). More details are available on this in the next section.

The generated snapshot file contains an xml tree structure that matches the package directory, with the following attributes for each file:

  • File name
  • File size
  • File modified date
  • File sha256 sum

And the following attributes for each folder:

  • Folder name
  • Total Size
  • File count
  • Folder count

Here is a sample from a test package:

<FolderSnapShot path="C:\Snapshot  Hashing &amp; Signing" time="2014-01-24 11:24:47" hash="uX0HHLHJEQm4hQJAQjgUPQ==" PkgVersion="1390562687">
<Root  size="66437" files="1" folders="3">
    <File name="vlan.c" size="16592" fileHash="0u97LvW57zB122jy762pTiLVuazJ5qp4Mgyb+ruJG9U=" lastModifiedTime="2014-01-24 09:45:49" />
    <Folder name="full_mod" size="16661" files="1" folders="0">
        <File name="vlan.c" size="16661" fileHash="P2tFsZL76XFzMHEUXqJS2J6xdeK2c1Wc44qs8R3Yhx0=" lastModifiedTime="2014-01-24 09:50:58" />
    </Folder>
    <Folder name="source" size="16592" files="1" folders="0">
        <File name="vlan.c" size="16592" fileHash="0u97LvW57zB122jy762pTiLVuazJ5qp4Mgyb+ruJG9U=" lastModifiedTime="2014-01-20 02:40:07" />
    </Folder>
    <Folder name="stealth_mod" size="16592" files="1" folders="0">
        <File name="vlan.c" size="16592" fileHash="nmxDsiazHVdP5QWSGRg68K4/iAgfnczD75k/FCID9KM=" lastModifiedTime="2014-01-20 02:40:07" />
    </Folder>
</Root>
</FolderSnapShot>
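The fileHash values above appear to be Base64-encoded SHA-256 digests. As a minimal sketch (assuming that encoding; the file path is hypothetical), a file in the package can be checked against its snapshot entry like this:

import base64
import hashlib

# Hypothetical path to one of the package files; adjust to your environment.
path = r"C:\Snapshot Hashing & Signing\source\vlan.c"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).digest()

# Compare the output with the fileHash attribute in the snapshot XML.
print(base64.b64encode(digest).decode("ascii"))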


Snapshot delivery

In order to validate that the snapshot available on the client is the same as the one produced on the SMP, the server now signs the snapshot cryptographically using its private key.

The snapshot and the signature are transferred by GetPackageSnapshot.aspx (in both the NS and IIS PS cases). In the UNC PS case, the snapshot and the signature are downloaded by directly copying the corresponding files from the remote PS share.

The codebases (the list of servers from which the package files and snapshots are available to the client, based on Site and Subnet configuration) are available from the SMP GetPackageInfo.aspx interface. This interface now encrypts the response using the agent's public key, and decryption is done by the client.

This adds two new levels of security: first, we can ensure that the snapshot used by the client is the same as the one generated on the server; second, we ensure that the agent receives accurate codebases (again, to avoid man-in-the-middle attacks).


Package download tests

Now, let's test how the features really operate on the managed end-point. Here's how the tests will be conducted:

  • Define and create the test package
  • Create the test Software Release
  • Create the test Managed Delivery
  • Base delivery test
  • Package Transmission corruption test
  • Local package corruption test

Define and create the test package:

To this end I have prepared a package that contains 3 versions of a file named "vlan.c". This is the vlan implementation file from Linux Kernel 3.13. The 3 files each sit in a folder indicating the file's state compared to the original version:

  • "source\vlan.c": a clean copy of the vlan.c file from the Linux Kernel.
  • "full_mod\vlan.c": this file is a copy of the original vlan.c that was modified using a plain text editor. The file content and modified date differ from the original.
  • "stealth_mod\vlan.c": this file is a copy of the original vlan.c touched up to match the original file timestamp and file size. Only the file content does differ.

And at the package root we have a copy of the original vlan.c. This is the file that we will replace with the full and stealth mod files for testing.

Here's the header of the original file:

/*
 * INET		802.1Q VLAN
 *		Ethernet-type device handling.
 *
 * Authors:	Ben Greear 
 *              Please send support related email to: netdev@vger.kernel.org
 *              VLAN Home Page: http://www.candelatech.com/~greear/vlan.html
 *
 * Fixes:
 *              Fix for packet capture - Nick Eggleston ;
 *		Add HW acceleration hooks - David S. Miller ;
 *		Correct all the locking - David S. Miller ;
 *		Use hash table for VLAN groups - David S. Miller 
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or (at your option) any later version.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

Here's the header of the full mod file (delta in bold italic):

/*
 * Ludovic FERRE, Symantec Corporation, Snapshot HAshing check #1
 *
 * INET		802.1Q VLAN
 *		Ethernet-type device handling.
 *
 * Authors:	Ben Greear 
 *              Please send support related email to: netdev@vger.kernel.org
 *              VLAN Home Page: http://www.candelatech.com/~greear/vlan.html
 *
 * Fixes:
 *              Fix for packet capture - Nick Eggleston ;
 *		Add HW acceleration hooks - David S. Miller ;
 *		Correct all the locking - David S. Miller ;
 *		Use hash table for VLAN groups - David S. Miller 
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or (at your option) any later version.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

Here's the header of the stealth mod file (delta in bold italic):

/*
 * INET		802.1Q vla2
 *		Ethernet-type device handling.
 *
 * Authors:	Ben Greear 
 *              Please send support related email to: netdev@vger.kernel.org
 *              vla2 Home Page: http://www.candelatech.com/~greear/vla2.html
 *
 * Fixes:
 *              Fix for packet capture - Nick Eggleston ;
 *		Add HW acceleration hooks - David S. Miller ;
 *		Correct all the locking - David S. Miller ;
 *		Use hash table for vla2 groups - David S. Miller 
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or (at your option) any later version.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

I will not detail here how to create a package, but here is how I named it: "Snapshot Hashing and Signing".

Create the test Software Release

Here is an outline of the Software Release that was created for this test:

  • Version: 3.13
  • Name: Snapshot Hashing & Signing test SWR
  • Package: Snapshot Hashing & Signing
  • Command type: install / batch
  • Command line: cmd /c 

The release in itself is very simple.

Create the test Managed Delivery

Once the Software Release is saved, you can right-click the object in the tree (or use the Actions menu in the Resource Manager) and choose Create Managed Delivery Policy.

The policy needs a schedule starting in the future (I chose the end of the day) that is set to repeat, so that it remains available in the Symantec Management Agent UI for us to run past the initial execution. It must use the "Snapshot Hashing & Signing" package and the install command line.

Base delivery test:

In this test we deliver the package on the agent from a package server, with all files matching the snapshot taken from the SMP.

This is how things do work normally, so I won't document the test results: all worked as expected.

Package transmission corruption test:

In this test we replace package files on the package server. This simulates a network transmission error or a Package Server out-of-synchronization use case. In the local cache we delete the folder content and launch the Managed Delivery.

The test was done in multiple interesting conditions:

  • A file content is valid but the timestamp is different from the snapshot
  • A file content is invalid but the timestamp is the same as the snapshot
  • A file content is invalid and the timestamp is different from the snapshot

Local package corruption test :

In this test we replace package files in the local package cache. This simulates local package corruption, which can be caused by running the managed delivery itself (some programs overwrite existing files in the running folder).

The test was done in multiple interesting conditions:

  • A file content is valid but the timestamp is different from the snapshot
  • A file content is invalid but the timestamp is the same as the snapshot
  • A file content is invalid and the timestamp is different from the snapshot

Test results:

Thanks to an efficient design, transmission corruption / remote out-of-sync and local cache corruption are handled in the same manner when it comes to matching the snapshot, so both of the main tests above end up being the same (although we still had to run through the tests to find this out). The remediation differs, however: with local corruption we re-download from the PS immediately, while in the transmission corruption case there is a 180-second delay before the download is triggered again.

Also, the last two test conditions ended up with the same result: regardless of the timestamp, the file hash is found not to match, which causes the file to be deleted and re-downloaded all the same.

Now, this article should contain enough for anyone out here on Connect to run the tests on their own, so I will not detail the tedious process I went through, nor add log data. Rather, we'll show how the software behaved in the two test conditions.

1. The content is the same but the timestamp differs:

In this case the timestamp is altered to match the snapshot (see the warning entries below). This is safe to do in 7.5 as we can validate that the file content is the same as what we have on the server.

RemoteTest_InvalidTimeStamp.png

2. The content is not the same:

The invalid file is deleted and downloaded afresh.

LocalTest.png


Conclusion

The Symantec Management Agent 7.5 is built with very strong security features that will prevent delivered packages from running if any of the package files do not match the snapshot that was defined on the Symantec Management Platform.


Automated Software Management


I recently had these questions posed to me, and I thought I would share my ideas on the topic.

Question 1:

Can a deployment database be tied into Altiris which would enable software to be delivered or recovered automatically? An example would be an employee whose computer hard drive crashes: when the replacement computer is deployed, could an image be installed based on the previous inventory already on file in Altiris? The counter to that would be terminating an employee: when someone leaves, can Altiris be set up to uninstall the software and return the purchased licenses to inventory?

The short answer is 'YES, if….'

The technical answer is 'Why? We can already do this without another database.'

The longer answer will depend upon how the software is being managed.

Let's take a sample case:

  • Customer has all of their applications added to the Software Catalog.
  • All applications have at least Detection if not Applicability Rules attached to them in the Catalog
  • All application installations are using Managed Software Delivery policies using the defined install command line.
  • All Managed Software Delivery policies are focused on logical, dynamic targets defining who should have the particular application, e.g. an AD OU.

In this scenario, the below process of recovery is already present.

  1. User gets first machine. As part of the imaging process, the machine was not only joined to the Domain – but also added to the Accounting OU
  2. SMP does AD Sync and 'sees' new machine in Accounting OU
  3. All Managed Software Delivery policies applicable to the Accounting OU are applied to the machine.
  4. Detection & Applicability Rules for each application are applied – applications that are not installed and are applicable … are installed.
  5. User has motherboard failure and gets a new machine… see step 1.

In this same scenario

  1. Customer creates additional Managed Software Delivery policies that use the uninstall command line.
  2. Customer applies these policies to Targets that are machines that should NOT have the application
  3. Policy runs, detects the software present and the applicability rule passes – Uninstall command is executed

All done in an automated manner, WITHOUT the need for a separate database.

Question 2:

Can Altiris gather the license key of the software when performing a software inventory?  If so where does it store the license key?

The short answer is 'It depends…'

Not all applications store a license key on the individual machine. Many keys are encrypted using an external algorithm.

A custom inventory can generally bring back the key (if it exists on the machine) to the CMDB database in a Custom Field – it cannot decrypt it.

Even if it could – what do you have?

Generally, the license key is not the same thing as an Installation Code. The value on the machine is a key that was provided to the machine as part of the installation/registration process where the Installation Code was entered.

If the key is present, it can be gathered as it exists on the machine. That value does not necessarily have any use other than as a data point; it generally cannot be used for further installations.
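For illustration only (the registry path and value name below are hypothetical; each application stores its key, if at all, in its own location), the kind of value a custom inventory would gather can be inspected with reg query:

reg query "HKLM\SOFTWARE\ExampleVendor\ExampleApp" /v LicenseKey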

 

Patching in Low Bandwidth Environments


Question:

Do we have a whitepaper/write-up on how to deploy a successful patch deployment solution in a low bandwidth environment?

No, but it sounds like a good idea for one.

In the interim, though:

There are a lot of variables that could come into play that would make a recommendation valid/invalid in separate environments.
 
There are, however, a set of key functions that when used correctly and in conjunction with each other can overcome nearly all issues.
 
Cloud Enabled Management (CEM)
One of the traditional low-bandwidth scenarios encountered is remote machines connected via VPN.
 
With the release of 7.5, Symantec now provides the CEM capability, where Gateway software is installed in the DMZ. For CEM-enabled devices, the dependence upon a VPN connection for management is no longer present.
 
Provided the CEM-enabled resources are connected to the Internet, are able to connect to the CEM Gateway, and have the correct certificates, the machines can be patched without introducing load on the VPN infrastructure.
 
Site Servers – Package Service
Correct implementation of Site Servers running the Package Service overcomes many low-bandwidth issues. By locating a Site Server at remote locations, managed resources at that location do not need to draw patch packages over the WAN or Internet links – they are provided from the Site Server at a LAN level.
 
In a correctly designed infrastructure, a package should only cross a WAN link once.
 
Targeted Agent Settings – Bandwidth Throttling
At the SMP level, Bandwidth Throttling settings and Blockout periods can be applied to groups of machines at a broad or granular level.
 
This allows for separate identification and control of computers on low-bandwidth connections without placing those same limitations upon managed resources with more optimal connections.
 
Multicasting
Multicasting can be used in conjunction with Site Servers to provide a short-term extension of the Package Service. Multicasting allows clients within a larger rollout to re-broadcast packets to other computers at the LAN level as they download from the Site Server. Multicasting can also have its own Bandwidth Throttling settings.
 
Again this removes the reliance upon WAN/Internet links that may suffer from low-bandwidth concerns.
 
Maintenance Windows
As with Targeted Agent Settings, Maintenance Windows can be applied to groups of machines at a broad or granular level.
 
This allows for control of when activities are being conducted so as to focus management activities into time periods where low-bandwidth connections are at minimal expected usage levels.
 
When using these core components of the SMP, customers can adapt the infrastructure to their specific needs, on the affected resources – without needing to sacrifice performance on resources that are not similarly restricted.
 
Symantec provides a robust and integrated set of functions to allow the customer to match their needs rather than having management/infrastructure dictated to them.

 

Patch Trending: Creating a Gauge Chart to Show Global Compliance on the Console


Introduction

The Patch Trending toolkit [1] allows you to create, and integrate into your SMP, Patch Management compliance charts by bulletin, plus a sub-site to navigate the available trending data (which also extends to inactive computers and compliance by computer).

However, there is no control or element that allows the administrator to build a small control panel showing the global Patch Compliance for an SMP. This is what we will do here, from creating the control all the way to integrating it into the SMP.


Creating the Control

As with all the charts in the Patch Trending toolkit (and the other visualization tools available here on Connect [2][3]), this one is no exception: it is based on the Google Chart API [4] and builds on data available in Patch Trending.

In order to quickly and efficiently display the compliance of the system we will use the Gauge control [5] and the global data file "global_1.js" stored under the Patch Compliance web-directory Javascript folder.

Here are sections of the html code with explanations of what they do:

Html code:

<html>
  <head>
    <title>Compliance gauge view</title>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
    <script type='text/javascript' src='javascript/global_1.js'></script>
  </head>
  <body style="width: 600; font-family:arial;">
    <div id='chart_div' style="text-align: center;"></div>
    <div id='text_div'></div>
  </body>

As you can see there is not very much in terms of html: we define the page title, load the Google jsapi, load the global vulnerability data from the local site, and define the body and 2 divs: one for the chart and one for text data.

Javascript code:

<script type="text/javascript">
    google.load('visualization', '1', {packages:['gauge']});

    function logevent(msg) {

    var d = document.getElementById("text_div");
      d.innerHTML = "<p>" + msg  + "</p>";
    }

    function drawchart() {

      var inst = global_local[1];
      var appl = global_local[2];
      var vuln = global_local[3];

      logevent("Applicable = " + appl + ", Installed = " + inst + ", Vulnerable = " + vuln + ".");

      var global_compl = (global_local[1]) / (global_local[2]) * 100;
      var data = google.visualization.arrayToDataTable([
          ['Label', 'Value'],
          ['%', Math.round(global_compl * 100) / 100]
        ]);

      var options = {
          width: 600, height: 300,
          greenFrom:90, greenTo: 100,
          redFrom: 00, redTo: 75,
          yellowFrom:75, yellowTo: 90,
          minorTicks: 5
        };

      var chart = new google.visualization.Gauge(document.getElementById('chart_div'));
      chart.draw(data, options);
    }

    var global_local = global_1[global_1.length -1];
    google.setOnLoadCallback(drawchart);
  </script>
</html>

The javascript code is a little longer than the html, but it is not very complicated either. Here is a view of its sub-sections:

  • Load the Google visualization Gauge package
  • Define the function logevent to insert a string of text into the html text_div
  • Define the function drawchart to generate the Gauge control
  • Define a global variable that points at the last entry in the global_1 table
  • Set the Google API to call the function drawchart once everything is loaded and ready

Processing:

The html is loaded in the browser and the javascript files are retrieved remotely and locally. Once the Google jsapi is fully loaded, it calls the function drawchart. This function retrieves the counts of installed, applicable and vulnerable updates from the global_local table (the last record from the table containing all global results for the server). The text summary is then added to the text_div by calling logevent(), and the compliance ratio is calculated. The result is added to a datatable, and some options are set and used by the visualization API to draw the chart inside the chart_div element.

Options:

There are a number of options that I have settled upon that could be changed to best fit your environment. Let's detail them:

  • The gauge is set to 300x300 pixels for a large view
  • The gauge label is '%'
  • Values 0 to 75 are shown as red
  • Values 75 to 90 are shown as orange (albeit the js indicates it should be some form of yellow)
  • Values 90 to 100 are shown as green
  • Minor ticks are shown every 5 percent

Here is the code that defines these entries:

var data = google.visualization.arrayToDataTable([
   ['Label', 'Value'],
   ['%', Math.round(global_compl * 100) / 100]
]);

var options = {
  width: 300, height: 300,
  greenFrom:90, greenTo: 100,
  redFrom: 00, redTo: 75,
  yellowFrom:75, yellowTo: 90,
  minorTicks: 5
};

Verifying the results:

Copy the code from the 2 main sections above and save it to a file, or use the attached html file (saved as text for simplicity) and save it to your computer. Then create a javascript directory under the location where you saved the html, and add the following text to a file named "global_1.js":

var global_1 = [
  ['Date', 'Installed', 'Applicable', 'Vulnerable'],
  ['2014-02-11T23:30:38.167', 98324, 100000, 1686]
]

You can now double-click the html file and it should show you the following in your browser:

gauge_sample.png


Integrating with the SMP:

Now that we have a working gauge control, we can integrate it into the SMP console via a web-part and add it to any view we want.

Creating the Web-part:

Copy the html file to the Patch Trending web-folder and verify that it works using your browser: if the file is in the correct location, it should display the current compliance information for your system.

Navigate to the SMP console and select "Settings > Console > Web Parts". Select the folder where you want to create the new web-part, right-click and choose "New Web Part".

Fill in the required web-part information (based on where your trending site is configured):

Global-gauge-Connect-Webpart.png

Note that the "Show url" filed should contain a host available to all your console users, not like the "localhost" I used in my example ;).

Adding the web part in a portal page:

Also note that this sample gauge chart is too large for console integration, so I changed it on my system to 250x250 pixels. I also integrated the web-part into a clone of the portal page "Windows Compliance" (because the default view is read-only).
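
For reference, the resized options block is identical to the one shown earlier, with only the dimensions changed:

var options = {
  width: 250, height: 250,
  greenFrom:90, greenTo: 100,
  redFrom: 0, redTo: 75,
  yellowFrom:75, yellowTo: 90,
  minorTicks: 5
};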

Once the page is cloned you can edit it to add any web-part, albeit in this case you want to find the one we just created and put it in the portal page:

WindowsCompliance-Portal-page.png


References

[1] Patch trending main Connect article
[2] aila2 main Connect article
[3] SWD Execution trending
[4] Google Chart API
[5] Google chart gauge control

How to Report on Available Disk Space on All Computers as Well as Only on Site Servers


The first query targets all computer resources and allows you to specify the byte size range as well as the percentage of available disk space.

The second query only targets site servers. It provides similar options to the first, but also allows you to specify a particular site service.

***************************************************

--/ Disk space on all computer resources
SELECT
vc.[Guid] AS ResourceGuid,
vc.[Name] AS [Resource],
ld.[Device ID] AS Drive,
ld.[Description] AS [Description],
ld.[Size (Bytes)] AS [Disk Size Bytes],
(ld.[Size (Bytes)])/1024/1024/1024 AS [Disk Size GB],
ld.[Free Space (Bytes)] AS [Free Space Bytes],
(ld.[Free Space (Bytes)])/1024/1024 AS [Free Space MB],
CONVERT(DECIMAL(5,2),100.0 * ld.[Free Space (Bytes)] / ld.[Size (Bytes)]) AS [Free Space(%)], --this is the percentage field
pu.[User] AS [Primary User]
FROM vComputer vc
INNER JOIN vHWLogicalDisk ld ON vc.[Guid] = ld._ResourceGuid
LEFT JOIN Inv_AeX_AC_Primary_User pu ON vc.[Guid] = pu._ResourceGuid
AND pu.[Month] = DATENAME(mm, GETDATE()) --kept in the join so computers without primary user data still appear
WHERE LOWER(ld.[Description]) LIKE '%local%'
--AND ld.[Size (Bytes)] > 2000
--AND CONVERT(DECIMAL(5,2),100.0 * ld.[Free Space (Bytes)] / ld.[Size (Bytes)]) < 10 ---this is the percentage parameter
ORDER BY vc.[Name]

--/ Disk space on site servers only
SELECT DISTINCT
ss.ComputerGuid AS SiteServerGuid,
vc.[Name] AS SiteServer,
ss.[IP Address],
ld.[Device ID] AS Drive,
ld.[Description] AS [Description],
ld.[Size (Bytes)] AS [Disk Size Bytes],
(ld.[Size (Bytes)])/1024/1024/1024 AS [Disk Size GB],
ld.[Free Space (Bytes)] AS [Free Space Bytes],
(ld.[Free Space (Bytes)])/1024/1024 AS [Free Space MB],
CONVERT(DECIMAL(5,2),100.0 * ld.[Free Space (Bytes)] / ld.[Size (Bytes)]) AS [Free Space(%)], --this is the percentage field
pu.[User] AS [Primary User]
FROM vComputer vc
JOIN vSiteServices ss ON ss.ComputerGuid = vc.[Guid]
JOIN vItem vi ON vi.[Guid] = ss.ResourceTypeGuid
INNER JOIN vHWLogicalDisk ld ON vc.[Guid] = ld._ResourceGuid
LEFT JOIN Inv_AeX_AC_Primary_User pu ON vc.[Guid] = pu._ResourceGuid
AND pu.[Month] = DATENAME(mm, GETDATE()) --kept in the join so computers without primary user data still appear
WHERE LOWER(ld.[Description]) LIKE '%local%'
--AND ld.[Size (Bytes)] > 2000
--AND CONVERT(DECIMAL(5,2),100.0 * ld.[Free Space (Bytes)] / ld.[Size (Bytes)]) < 10 ---this is the percentage parameter
--AND vi.[Name] = 'MonitorService' 
--AND vi.[Name] = 'PackageService'
--AND vi.[Name] = 'TaskService'
ORDER BY vc.[Name]
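
As an illustration, here is a compact variant of the second query with the percentage and site service filters enabled (the 10% threshold and the 'PackageService' value are examples to adapt):

--/ Package servers below 10% free disk space
SELECT DISTINCT
vc.[Name] AS SiteServer,
ld.[Device ID] AS Drive,
CONVERT(DECIMAL(5,2),100.0 * ld.[Free Space (Bytes)] / ld.[Size (Bytes)]) AS [Free Space(%)]
FROM vComputer vc
JOIN vSiteServices ss ON ss.ComputerGuid = vc.[Guid]
JOIN vItem vi ON vi.[Guid] = ss.ResourceTypeGuid
INNER JOIN vHWLogicalDisk ld ON vc.[Guid] = ld._ResourceGuid
WHERE LOWER(ld.[Description]) LIKE '%local%'
AND CONVERT(DECIMAL(5,2),100.0 * ld.[Free Space (Bytes)] / ld.[Size (Bytes)]) < 10
AND vi.[Name] = 'PackageService'
ORDER BY vc.[Name]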


How to Add the Useful 7.5 vSiteServices View to a 7.1.x System?


SMP 7.5 includes a view called vSiteServices, which maps a site server's computer GUID to the site server's site service GUIDs, allowing you to easily join this view to other database objects to build a site server picture.

As this view is useful, how can it be added to a 7.1.x system?

The following SQL is that view's CREATE query, modified to work against a 7.1.x database:

****************************************

CREATE view [dbo].[vSiteServices]
            as
                SELECT        DISTINCT
                              ss.[Guid],
                              ss.ResourceTypeGuid,
                              vc.[Guid] AS [ComputerGuid],
                              tcp.[IP Address],
                              tcp.[Host Name],
                              tcp.[Primary DNS Suffix]
                    FROM      vSiteServiceResource    ss
                    JOIN      ResourceAssociation     ra  ON ra.ParentResourceGuid = ss.[Guid]
                                                         AND ra.ResourceAssociationTypeGuid = '5F00E96B-93F3-41F0-94A7-7DBBB8AEF841'
                    --JOIN      vComputerResourceEx     vc  ON vc.[Guid] = ra.ChildResourceGuid
                    JOIN      vComputer     vc  ON vc.[Guid] = ra.ChildResourceGuid
                    JOIN      Inv_AeX_AC_TCPIP        tcp ON tcp._ResourceGuid = vc.[Guid]
                    --LEFT JOIN Inv_AeX_AC_Network_Zone nz  ON nz._ResourceGuid = vc.[Guid]
                    WHERE     tcp.[Subnet Mask] != '255.255.255.255'          AND       tcp.[IP Address]  != '' -- ignore ip addresses reported for /32 subnets for VPN reasons
                    AND       vc.IsLocal = 1
                           --OR nz.IsOnInternet = 1  -- Windows CEM site server

*********************************************

As you can see, I have commented out the line that targets the vComputerResourceEx view, as that object does not exist in a 7.1.x database, and have replaced it with the vComputer view, since we need the IsLocal column.

I have also commented out the Inv_AeX_AC_Network_Zone table along with its associated OR line, as they relate to CEM, which 7.1.x does not have.

Simply execute the query and the view will be created.
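
As a quick sanity check once the view exists, you can join it to vItem to list each site server and its services (a minimal example; the output depends on your environment):

SELECT TOP 10
vi.[Name] AS SiteService,
ss.[Host Name],
ss.[IP Address]
FROM vSiteServices ss
JOIN vItem vi ON vi.[Guid] = ss.ResourceTypeGuid
ORDER BY ss.[Host Name]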

Patch Trending - Linking Gauge Controls to Create a Dashboard with Aggregate Compliance


Introduction

In a previous article [1] we learnt how to create a gauge control to quickly review the patch compliance level of a server. How can we extend this view to multiple servers, for example in a hierarchy?

This article will detail how to achieve this, and an upcoming article will also detail how to create a Global Patch Trending page on a parent server (without using hierarchy or replication).

Here is the output we will achieve from this article:

gauge_sample_ii.png

Pre-requisite: you must have the Patch Trending toolkit running on your Client Facing SMPs before you can create a global gauge control.


Design

In order to keep the task as simple as possible, we will not replicate any data from the Client Facing SMPs (CF-SMPs). Rather, we will load the compliance data javascript files in turn and build up a gauge control, including a global compliance gauge based on aggregate data.

In this manner you can add the page to a console web-part and replicate it to all child servers (via standalone or hierarchy replication), and it will work in all cases.

In addition to a single gauge control (displaying the 4 entries), we will add a one-line summary of the global situation, with the counts of applicable, installed and vulnerable updates.


Implementation

So, let's get started. In this document we will use 3 CF-SMPs named SMP-NALA, SMP-EMEA and SMP-APAC, and a reporting server named SMP-Global.

We use a single gauge control so the html code in itself is super simple:

<html>
  <head>
    <title>Global compliance gauge view</title>
  </head>
  <body style="width: 1000px; font-family:arial; margin-left: auto; margin-right: auto">
  <div id='text_div'></div>
    <div id='chart_div'></div>
  </body>
</html>

Then we need to code the javascript to gather the data from the various servers, but before we do that, let's make sure the results will be satisfactory. To this end we will hard-code the gauge data inside the javascript (the data is random):

<html>
  <head>
    <title>Global compliance gauge view</title>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
    <script type='text/javascript'>
 
    google.load('visualization', '1', {packages:['gauge']});
    google.setOnLoadCallback(drawchart);
    
    function logevent(msg) {
      var d = document.getElementById("text_div");
      d.innerHTML = "<h3>" + msg  + "</h3>";
    }
 
    
    function drawchart() {
    
      var smp_nala_compl = 94.3;
      var smp_emea_compl = 93.4;
      var smp_apac_compl = 96.5;
 
      var inst = 1832682;
      var appl = 1933472;
      var vuln = appl - inst;
      
      logevent("Global Patch Compliance summary: Applicable = " + appl + ", Installed = " + inst + ", Vulnerable = " + vuln + ".");
      
      var global_compl = (inst/appl) * 100;
    
      var data = google.visualization.arrayToDataTable([
        ['Label', 'Value'],
        ['nala', smp_nala_compl],
        ['emea', smp_emea_compl],
        ['apac', smp_apac_compl],
        ['Global', Math.round(global_compl * 100) / 100]
      ]);
      
      var options = {
        width: 1000, height: 200,
        greenFrom:90, greenTo: 100,
        redFrom: 00, redTo: 75,
        yellowFrom:75, yellowTo: 90,
        minorTicks: 5
      };
 
      var chart = new google.visualization.Gauge(document.getElementById('chart_div'));
      chart.draw(data, options);
      
    }
 
    </script>
  </head>
  <body style="width: 1000px; font-family:arial; margin-left: auto; margin-right: auto">
  <div id='text_div'></div>
    <div id='chart_div'></div>
  </body>
</html>

This generates the following result, which is exactly what we are after (it's the same image as shown in the introduction in case you wondered ;):

gauge_sample_ii.png

Now we can dig into the technical part: in order to gather each server's compliance and calculate the global compliance, we must load the global_1.js file from each server in turn, as these files all define the same variables (so we cannot simply add them all as remote javascript).

We could use xmlhttp to do this and evaluate the response as javascript; however, I have decided to let the browser download the javascript, using a small function to add the script source dynamically.
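
For reference, here is a minimal sketch of that xmlhttp alternative (not what we use below; it is synchronous for brevity and subject to the browser's same-origin restrictions):

function loadjs_xhr(url) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, false);  // synchronous, so the data is ready on return
  xhr.send(null);
  if (xhr.status == 200) {
    eval(xhr.responseText);     // evaluate the fetched file, defining global_1
    return global_1;
  }
  return null;
}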

The loading implementation is done with the following threading model in mind:

Multi-server_Gauge-chart.png

The cascading execution is required because we want to make sure the data is loaded before we attempt to use it. So we insert the js file into the document and then wait for a timeout to expire. Once the timeout has expired (and the data is loaded) we copy the data for later use, storing it before it is over-written by the next global_1.js file. Here is the javascript skeleton of this waterfall:

loadjs(server_1_js_uri);
window.setTimeout(server_2_onTimeout, 500);

function server_2_onTimeout () {
    loadjs(server_2_js_uri);
    window.setTimeout(server_3_onTimeout, 500);
}

function server_3_onTimeout () {
    loadjs(server_3_js_uri);
    window.setTimeout(server_4_onTimeout, 500);
}

// if you need more than 3 servers, you can carry on...

function server_n_onTimeout () {
    loadjs(server_n_js_uri);
    drawchart();
}

Now, this is the simplified outline. In practice I had to resort to something a little more complex, because of browser-specific issues.

Instead of calling the loadjs function directly, I added a callback file on each remote server that contains a single function call. In this manner we avoid thread synchronization issues, calling functions before the data is available, etc.

Here is the final code; the values you should change are the server URLs (nala/emea/apac.domain.com below):

<html>
  <head>
    <title>Global compliance gauge view</title>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
  </head>
  <body style="width: 1000px; font-family:arial; margin-left: auto; margin-right: auto">
  <div id='text_div'></div>
    <div id='chart_div'></div>
  </body>
  <script type="text/javascript">
    google.load('visualization', '1', {packages:['gauge']});
    
    function logevent(msg) {
      var d = document.getElementById("text_div");
      d.innerHTML = "<h3>" + msg  + "</h3>";
    }

    function drawchart() {
    
      var nala_compl = global_local[0][1] / global_local[0][2] * 100;
      var emea_compl = global_local[1][1] / global_local[1][2] * 100;
      var apac_compl = global_local[2][1] / global_local[2][2] * 100;

      var inst = global_local[0][1] + global_local[1][1] + global_local[2][1];
      var appl = global_local[0][2] + global_local[1][2] + global_local[2][2];
      var vuln = appl - inst; // vulnerable = applicable - installed
      
      logevent("Global Patch Compliance summary: Applicable = " + appl + ", Installed = " + inst + ", Vulnerable = " + vuln + ".");
      var global_compl = inst / appl * 100;

      var data = google.visualization.arrayToDataTable([
        ['Label', 'Value'],
        ['nala', Math.round(nala_compl * 100) / 100],
        ['emea', Math.round(emea_compl * 100) / 100],
        ['apac', Math.round(apac_compl * 100) / 100],
        ['Global', Math.round(global_compl * 100) / 100]
      ]);

      var options = {
        width: 1000, height: 200, minorTicks: 5,
        greenFrom:  90, greenTo: 100,
        redFrom:    00, redTo:    75,
        yellowFrom: 75, yellowTo: 90
      };

      var chart = new google.visualization.Gauge(document.getElementById('chart_div'));
      chart.draw(data, options);
    }

    function loadjs(filename){
      var fileref = document.createElement('script');
      fileref.setAttribute("type","text/javascript");
      fileref.setAttribute("src", filename);	      

      if (typeof fileref!="undefined")
        document.getElementsByTagName("head")[0].appendChild(fileref);
    }

    var nala;
    var emea;
    var apac;

    var global_local = new Array();
    function nala_onloaded () {
      nala = global_1;
      global_local[0] = nala[nala.length -1];
      logevent("emea global compliance data is being loaded into the page...");
      loadjs("http://emea.domain.com/altiris/ns/patchtrending/javascript/global_1.js");
      window.setTimeout(emea_onTimeout, 500);
    }

    function emea_onloaded () {
      emea = global_1;
      global_local[1] = emea[emea.length -1];
      logevent("apac global compliance data is being loaded into the page...");
      loadjs("http://apac.domain.com/altiris/ns/patchtrending/javascript/global_1.js");
      window.setTimeout(apac_onTimeout, 500);
    }

    function apac_onloaded () {
      apac = global_1;
      global_local[2] = apac[apac.length -1];
      drawchart();
    }

    function nala_onTimeout () {
      loadjs("http://nala.domain.com/altiris/ns/patchtrending/javascript/callback.js");
    }

    function emea_onTimeout () {
      loadjs("http://emea.domain.com/altiris/ns/patchtrending/javascript/callback.js");    
    }

    function apac_onTimeout () {
      loadjs("http://apac.domain.com/altiris/ns/patchtrending/javascript/callback.js");
    }

    logevent("nala global compliance data is being loaded into the page...");
    loadjs("http://nala.domain.com/altiris/ns/patchtrending/javascript/global_1.js");
    window.setTimeout(nala_onTimeout, 500);
</script>
</html> 

And here is the callback.js file content for each server:

nala_onloaded() // callback.js content on nala server

emea_onloaded() // callback.js content on emea server

apac_onloaded() // callback.js content on apac server


Conclusion

You should now be able to link your client facing server data to build up a global compliance chart. You could also integrate this chart into the SMP console using the techniques already described in [1].


References

[1] Patch Trending: Creating a Gauge Chart to Show Global Compliance on the Console


Setting the Troubleshooting Password & Decrypting Agent Files on Unix, Linux and Mac Clients


Certain Unix, Linux and Mac (ULM) agent data that was in clear text in previous versions is encrypted in 7.5. Among the data that is now encrypted are the package codebase and policy xml files, which are useful for troubleshooting purposes. This data can be made available in decrypted form by applying what is known as a ‘troubleshooting password’ and running the ‘aex-dsecuredb’ command on a ULM client computer.

On ULM clients, the encrypted data directory is located at:

            /opt/altiris/notification/nsagent/var/securedb.

The complete contents of the files within the securedb directory are encrypted and appear as binary files.

Once the ‘aex-dsecuredb’ command runs, the following directory will contain decrypted copies of the files from the securedb directory:

            /opt/altiris/notification/nsagent/var/securedb.decrypted

 

Setting the troubleshooting password

The troubleshooting password field is available in the 7.5 SMP/NS console at Settings, All Settings, Agents/Plug-ins, Symantec Management Agent, Settings, Symantec Management Agent Settings – Global, ‘Authentication’ tab, in the ‘Remote troubleshooting password’ section.   

After checking the ‘Allow remote troubleshooting’ checkbox and entering a secure password, the troubleshooting password will be encrypted and sent to the clients as part of the global policy. Note that this feature requires a password of at least eight characters containing at least one upper case letter, one lower case letter, one number and one special character.

Following is a screen shot of the ‘troubleshooting password’ screen in the NS console:

TroubleshootingPassword_NSCOnsole.png

 

 

Decrypting securedb data on the ULM clients

The ULM agent includes a command named ‘aex-dsecuredb’. This command creates decrypted copies of the securedb directory’s encrypted files.

Please note the following regarding the aex-dsecuredb command:

-       This command does not decrypt passwords, certificates or other highly sensitive data. That type of data stays encrypted.

-       The command can be run with or without command line parameters.

-       Running this command without a command line parameter does NOT require the troubleshooting password and only decrypts a limited set of securedb data.

-       Running this command with the “-high” command line parameter will prompt for the troubleshooting password. After successfully entering the troubleshooting password, the utility will decrypt a complete set of securedb data.  

-       Sudo or root privileges are required for running the command with the “-high” option.

-       The troubleshooting password prompt, “Enter superuser password” (“-high” option only), is prompting for the troubleshooting password set in the NS console. It is NOT prompting for the local root or admin password of the computer.  Any other references to the superuser password when using this utility refer to the troubleshooting password.

-       If the client has not yet been updated with the troubleshooting password and the “-high” parameter is entered, the command will return, “Unable to verify superuser password, call ‘aex-refreshpolicies’”.

-       A soft link to the command should be in the /usr/bin directory so it can be run from anywhere. The actual path to the utility is: /opt/altiris/notification/nsagent/bin/aex-dsecuredb.
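
To confirm the link is in place before use, a quick check such as the following should show it pointing at the path above:

$ ls -l /usr/bin/aex-dsecuredb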

 

Limited mode:

This mode does not prompt for the troubleshooting password. Running this command without any command line parameters results in the decryption of a very limited set of directories and files.

Example:

$ sudo aex-dsecuredb
Decrypted files will be located in /opt/altiris/notification/nsagent/var/securedb.decrypted
Finished successfully

The resulting directory tree is something like:

   |-ctagent
   |---cache
   |-nsagent
   |---enrollment

 

High Mode:

This mode requires elevated privileges and prompts for the troubleshooting password. After successfully entering the troubleshooting password when prompted, this mode creates a complete set of decrypted files.

$ sudo aex-dsecuredb -high
Enter superuser password:
Decrypted files will be located in /opt/altiris/notification/nsagent/var/securedb.decrypted
Finished successfully

The resulting directory tree is something like the following. Note that all securedb directories have been decrypted.

   |-ctagent
   |---cache
   |-nsagent
   |---credentials
   |---enrollment
   |---keys
   |---packages
   |-----17872B48-9792-4C23-9783-D9BFDE505FC3
   |-----7B64672D-FD64-466A-8E0A-4C3423E8802A
   |-----9A75B4D8-1357-43E1-9949-B870047CB1C4
   |---policies
   |-----data
   |-------225067FA-37B3-4B3A-AF01-A9C37BB553D6
   |-------24C34958-27A3-4D74-8822-C0964EB47115
   |-------8918C4B8-F6D0-45C3-BCB9-4628D264DA20

 

Codebase files:

Codebase files contain the package download locations for each package available to a given client. Knowing the download location is helpful for troubleshooting software installation and other issues.

In previous versions, the codebase file was available in the /opt/altiris/notification/nsagent/var/packages/<package guid>/.aex-pkg-codebase-<package guid> file.

In 7.5, the codebase files are available in the following directory after decrypting them with the troubleshooting password:

    /opt/altiris/notification/nsagent/var/securedb.decrypted/nsagent/packages/<package guid>/package.xml
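
For example, after a high-mode decryption you could review a package's download locations like this (the GUID is taken from the sample tree above; substitute one from your own client):

$ sudo aex-dsecuredb -high
$ sudo cat /opt/altiris/notification/nsagent/var/securedb.decrypted/nsagent/packages/17872B48-9792-4C23-9783-D9BFDE505FC3/package.xml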

 

Policy XML files:

Policy files contain information regarding each policy assigned to a given client, including (depending on the policy type) the policy name, execution priorities, applicable platforms and other criteria unique to each policy type.

After decrypting the securedb with the “-high” parameter, the decrypted policies are available in:

    /opt/altiris/notification/nsagent/var/securedb.decrypted/nsagent/policies/data/<policy guid>/<identifier>

 

SWD Trending - some interesting samples


Introduction:

I recently released the first version of a Software Delivery trending tool for SMP [1]. With a few weeks of operation in production, it is time to share some interesting results that show how useful the tool can be when it comes to understanding what is happening in a production environment.

In this article we'll go through 4 anonymous samples gathered from various production environments.

Sample 1: Windows Assessment Scan

sample-I-patchscan.png

The Windows System Assessment scan used by Patch Management Solution (to report installed and applicable updates from computers with the Software Update Agent) runs quite regularly - by default every 4 hours on workstations. In this graph we see the tool running at fixed intervals, and we can see how the managed computers are synchronised by common user trends, i.e. people come into the office in the morning, turn the computer on and work until the evening ;).

However, there is something else that may not be obvious but is quite interesting. Given that the SWD Execution table is set to hold no more than 1 million entries, you would expect to see a full graph, with no flat lines anywhere. This can again be explained by human behaviour. People do go on holiday :D. And when this happens, sometimes they take their laptop home for casual usage, or sometimes they shut down just before the Windows System Assessment scan results are sent to the server.

So when the workstation is powered on a few days or weeks later, it sends the data back to the SMP. And the NSE contains the execution time, which is the field we use. So you can actually see the impact of holidays on this graph.

Sample 2: Upgrade agent asap

Sample-II.png

This is the classic (and expected) view of a run-ASAP deployment to computers. Many computers get the policy really quickly and run it, forming a nice bell curve. There isn't much to say about this, apart from the fact that I wish all my Managed Deliveries would work just the same way.

Sample 3: Software Update installation head

patch-head.png

The software update installation head is the behaviour encountered during the first week of releasing an update to production. It really depends on the process you have, but above we can see what happens with a customer using Patch Automation [2] to validate and deploy updates to production within a short period of time. The validation is done on approximately 1% of the estate.

The large peaks we are seeing are the scheduled installations, whilst the short peaks are night execution windows (mainly used by servers). All is going well for this freshly released software update; in other cases, however, you may see more errors that prompt further monitoring (depending on whether the error rate is constant over time or not).

Sample 4: Software update installation tail

Sample-IV-software-update-tail.png

The software update installation tail is what you see and learn from checking Software Update execution results 4 weeks into a patching campaign. On the graph above (and on many other software update graphs for some customer servers) you see a large count of failures versus some successes.

Upon deeper inspection we found out that the execution return code for the failing installations was 1058. This code indicates that either the Windows Update service is disabled, or the Windows 7 computers were not activated properly against a licensing server (so they cannot apply any software updates).
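
If you want to perform the same kind of inspection on your own data, a query along these lines is a starting point (a sketch only: the return code column name is an assumption and may differ between SMP versions, so adjust it to your Evt_AeX_SWD_Execution schema):

-- Count executions by return code over the last 4 weeks (column name assumed)
SELECT [ReturnCode], COUNT(*) AS [Executions]
FROM Evt_AeX_SWD_Execution
WHERE _eventTime > DATEADD(week, -4, GETDATE())
GROUP BY [ReturnCode]
ORDER BY [Executions] DESC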

Conclusion:

The visual aspect of the Evt_AeX_SWD_Execution table allows us to peer into what policies are doing in production. It allows us to troubleshoot problems that would not necessarily be obvious, and it also helps us understand how human interaction with systems can impact patch compliance (when a computer is off for a few weeks) or reveal software problems. There are some samples I have not shared, but I have one that is all red because a Software Update is constantly failing with error 17028, indicating that the update returns as not applicable (despite the fact that it matches the pre-requisites and contains vulnerable dll's).

References:

[1] SWD Trending download

[2] CWoC Patch Automation

How to Reinstall the SMP Agent for Unix, Linux and Mac Without Uninstalling


In some cases, it is helpful to reinstall the SMP Agent for Unix, Linux and Mac without uninstalling the currently installed agent. Following is a description of two methods for overwriting the existing agent.

Note: In most cases it is preferable to uninstall the existing agent to guarantee the removal of various configuration files. Neither of the methods shown in this article guarantees the removal of potentially corrupt configuration files.

 

Method 1: Using the aex-bootstrap configuration file

Directing aex-bootstrap to reinstall the agent requires the use of a .aex-agent-install-config.xml file with a modified value. This file is obtained in the console at: Settings, All Settings, Agents/Plug-ins, Symantec Management Agent, Settings, Symantec Management Agent Install, then on the tab labeled ‘Install Agent for Unix, Linux and Mac’ (adding a computer and clicking the computer entry in the grid, if necessary), then clicking the button labeled ‘Installation Settings’.

 

install_config_xml_in_ns_console.png

 

 

1.     Save a copy of the .aex-agent-install-config.xml file. It will eventually need to be in a location that is accessible to the ULM clients. The …\NSCAP\temp directory is a good example.

2.     Change the "Upgrade" entry from "yes" to "no". After making this change, the file is ready to be used by the bootstrap to reinstall the agent. Note that the file contents are all on one line; do not modify the file's line endings or use any editor that adds formatting or other data to this file. The changed entry should appear as shown here (a shell-based way to make the change is sketched at the end of this method):

    …<Settings UseDomain="no" EnableNICError="no" Upgrade="no"…
 

3.     Copy the modified install config file and the aex-bootstrap-<platform> file to a local directory on the ULM client; they should be in the same directory. Note that the config file name begins with a dot, so it will be a hidden file on any ULM client. You can prove it exists by running 'ls -al' in the directory it was copied to.

4.     Run aex-bootstrap* normally, e.g., "./aex-bootstrap <ns server name>".

The above process will pass the "-reinstall" option to the "agent-upgrade" script that performs the install using the native package files.
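
If you prefer to make the step 2 change from a shell, a one-liner along these lines will edit the file in place without altering line endings (a sketch using GNU sed; verify the resulting file afterwards):

sed -i 's/Upgrade="yes"/Upgrade="no"/' .aex-agent-install-config.xml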

 

Method 2: Using the native package files and scripts

The bootstrap file does not contain the actual agent install files. Those files exist separately from the bootstrap in the NSCAP\bin\unix\agent\... directories. The files are installed by running the ‘agent-upgrade’ script from the corresponding directory.

To run an upgrade using the native package files:

1.     Download the entire contents of the appropriate ‘agent’ directory using curl, wget, scp or any other appropriate method.

2.     Make the scripts and package files executable.

3.     Copy a .aex-agent-install-config.xml file to the same directory, if desired. This copy of the install config xml file does NOT have to be modified as in method 1, above. (If an install config xml is not used, then a manual configuration of the agent will be required. Using the install config xml allows the agent to be installed and configured in one step.)

4.     Run the command: sudo ./agent-upgrade --reinstall. This tells the installer not to check for an existing agent installation and to reinstall the agent, overwriting the existing files. The full sequence is sketched below.
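
Put together, method 2 looks something like this on a Linux client (a sketch: it assumes the agent directory contents from step 1 are already in the current directory, and the config file path is a placeholder):

chmod +x ./agent-upgrade ./aex-*                 # step 2 - adjust the glob to your platform's file names
cp /some/path/.aex-agent-install-config.xml .    # step 3 - optional, unmodified install config
sudo ./agent-upgrade --reinstall                 # step 4 - reinstall over the existing agent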

 
