Channel: Symantec Connect - Endpoint Management - Articles

Windows System Assessment Scan fails with Exit Code 7


Issue:

The scheduled Windows System Assessment Scan policy or task for Patch Management fails with exit code 7.

Error:

The error can be found in the agent logs under C:\ProgramData\Symantec\Symantec Agent\Logs:

 

<event date='02/22/2014 08:00:17.2070000 -05:00' severity='2' hostName='ComputerName' source='AeXNSEvent::raw_SendQueued' module='AeXNSEvent.dll' process='AexPatchAssessment.exe' pid='5404' thread='5968' tickCount='816960219'>

 <![CDATA[Error loading type library/DLL. (0x80029C4A)]]>

</event>

<event date='02/22/2014 08:00:17.2080000 -05:00' severity='1' hostName='ComputerName' source='Utils::ComException::ComException' module='AexPatchAssessment.exe' process='AexPatchAssessment.exe' pid='5404' thread='5968' tickCount='816960219'>

  <![CDATA[HR=0x80029C4A, MSG='Inventory::InventorySender::Send()- Error loading type library/DLL.']]>

</event>

<event date='02/22/2014 08:00:17.2200000 -05:00' severity='1' hostName='ComputerName' source='Utils::ApplicationException::ApplicationException' module='AexPatchAssessment.exe' process='AexPatchAssessment.exe' pid='5404' thread='5968' tickCount='816960235'>

  <![CDATA[Message='Cannot send inventory' (ExitCode=7).]]>

</event>

<event date='02/22/2014 08:00:17.2270000 -05:00' severity='1' hostName='ComputerName' source='BaseApplication::Run' module='AexPatchAssessment.exe' process='AexPatchAssessment.exe' pid='5404' thread='5968' tickCount='816960235'>

  <![CDATA[Application exception caught: Cannot send inventory]]>

</event>

<event date='02/22/2014 08:00:17.2320000 -05:00' severity='4' hostName='ComputerName' source='BaseApplication::Run' module='AexPatchAssessment.exe' process='AexPatchAssessment.exe' pid='5404' thread='5968' tickCount='816960235'>

  <![CDATA[Deinitializing environment...]]>

</event>

<event date='02/22/2014 08:00:17.2360000 -05:00' severity='4' hostName='ComputerName' source='ProgramExec' module='smfagent.dll' process='AeXNSAgent.exe' pid='4556' thread='4528' tickCount='816960250'>

  <![CDATA[Program 'Windows System Assessment Scan' completed. Exit code=7]]>

</event>

It's important to understand that two inventory files (Notification Server Events, or NSEs) are created during the Windows System Assessment Scan (WSAS): one for reporting and one for patch data.

Failed scan data is reported in the Console under Home > No Scan Data Reported:

Report.jpg

If you want to trap the inventory files, follow this article: http://www.symantec.com/docs/HOWTO4191

 

Environment:

Clients - Windows Server 2008 R2

 

Cause:

There are conflicting DLLs installed on the endpoint, left over from a previous Altiris Agent installation (version 6.x), because AeXAgentUtil.exe /clean did not remove the DLLs or the registry entries.

AeXAgentUtil.exe switches: http://www.symantec.com/docs/HOWTO5511

 

Switches.jpg

 

Solution 1:

1. Using AeXAgentUtil.exe, run the UninstallAgents switch.
2. Using AeXAgentUtil.exe, run the Clean switch.

If you do not know or are not sure which registry entries belong to Altiris, or if you do not have permissions to the registry, you can use the attached script. Do not delete any registry keys you are not sure of.

3. Open Regedit on the affected endpoint, click on Computer and search for Altiris. Caution should be taken when removing registry entries.
4. Right-click the registry key and click Delete.
5. Continue through the registry.
6. Scroll back to the top, click on Computer and search for AEX.
7. Right-click the registry key and click Delete.
8. Continue through the registry.
9. Ensure all file directories have been cleaned.
10. Install the Altiris Agent.
11. Allow time for the agent to check in and install plug-ins.

Solution 2: 

1. Download the attached scripts to C:\Windows\Source\Altiris7Migration.
2. Change the agent_clean.txt file extension to .bat (batch file).
3. Change the launchclean.txt file extension to .vbs (Visual Basic script).
4. Right-click agent_clean.bat and click Edit.
5. Scroll to the bottom and replace the placeholders with your server name:

ns="your notification server name"
nsweb="http://your notification server name/Altiris"
C:\WINDOWS\SOURCE\Altiris7Migration\aexnsc.exe /install /ns="your notification server name" /nsweb="http://your notification server name/Altiris" NOTRAYICON /s >> C:\Windows\Source\Altiris7Migration\myaexnsc.log

6. Copy \NSCap\bin\Win32\X86\NS Client Package\AeXNSC.exe from the Notification Server to C:\Windows\Source\Altiris7Migration on the affected endpoint.
7. Right-click launchclean.vbs and click Open.

launchclean.vbs creates a log file (myaexnsc.log) and logs the completion of its tasks. launchclean.vbs executes agent_clean.bat, which removes the directories, files and registry keys and then installs the Altiris Agent.

8. Allow time for the agent to check in and install plug-ins.

 

Related KB: http://www.symantec.com/docs/TECH196681

 

 


Symantec Management Platform Agent Version Build numbers


Since there are many requests for a document with accurate Symantec Management Platform Agent version numbers that can be used to verify a successful SMP Agent upgrade, I created a small Excel sheet with all the version numbers known to me at the time of writing.

This document should give you an overview of all agent versions for the Symantec Client Management Suite. Starting from version 7.5 (initial release), I'll provide a list of the agent versions, including Hotfix versions.

At this time the list is incomplete and does not contain all agent versions for every HF released.

This list is only an overview for Client Management Suite (CMS) agent versions and does not contain agent version numbers for Symantec Server Management Suite.

This document will be updated as soon as a new official Hotfix or Service Pack for Symantec Client Management Suite is released. It will not contain version numbers for agents that are included in a Pointfix.

In addition, this document includes a report that will list all computers in your SMP environment with all agent versions (core agent and plug-in agent versions, see screenshot below). This report can be used to monitor and verify successful agent upgrades.

Altiris CMS 7.5 Agent Versions  

Agent / Plugin Name for Version 7.5 without HF

Core Agent
  • Altiris Base Task Handlers: 7.5.1670
  • Altiris Client Task Agent: 7.5.1670
  • Symantec Management Agent: 7.5.1670
  • Inventory Rule Agent: 7.5.1670
  • Software Management Framework Agent: 7.5.1670

Client Agent Plugins
  • Altiris Inventory Agent: 7.5.1597
  • Software Management Solution Agent: (not listed)
  • Symantec pcAnywhere Agent: 12.06.8096
  • Deployment Solution Plug-in: (not listed)
  • Software Update Agent Plug-in: (not listed)

Server Agent Plugins
  • Package Server: 7.5.1670
  • Deployment NBS Plug-in: (not listed)
  • Deployment Package Server: 7.5.1597
  • Deployment Task Server Handler: 7.5.1597
  • Altiris Client Task Server Agent (CTServerAgent.dll): 7.5.1670
  • Altiris Client Task Server Agent (CTServerAgent_x64.dll): 7.5.1670

Agent / Plugin Name for Version 7.5 with HF1

Core Agent
  • Altiris Base Task Handlers: 7.5.1671
  • Altiris Client Task Agent: 7.5.1671
  • Symantec Management Agent: 7.5.1671
  • Inventory Rule Agent: 7.5.1670
  • Software Management Framework Agent: 7.5.1670

Client Agent Plugins
  • Altiris Inventory Agent: 7.5.1597
  • Software Management Solution Agent: 7.5.1597
  • Symantec pcAnywhere Agent: 12.06.8096
  • Deployment Solution Plug-in: (not listed)
  • Software Update Agent Plug-in: (not listed)

Server Agent Plugins
  • Package Server: 7.5.1670
  • Deployment NBS Plug-in: (not listed)
  • Deployment Package Server: 7.5.1597
  • Deployment Task Server Handler: 7.5.1597
  • Altiris Client Task Server Agent (CTServerAgent.dll): (not listed)
  • Altiris Client Task Server Agent (CTServerAgent_x64.dll): (not listed)

Agent / Plugin Name for Version 7.5 with HF2

Core Agent
  • Altiris Base Task Handlers: 7.5.1672
  • Altiris Client Task Agent: 7.5.1672
  • Symantec Management Agent: 7.5.1672
  • Inventory Rule Agent: 7.5.1670
  • Software Management Framework Agent: 7.5.1672

Client Agent Plugins
  • Altiris Inventory Agent: 7.5.1597
  • Software Management Solution Agent: 7.5.1597
  • Symantec pcAnywhere Agent: 12.06.8096
  • Deployment Solution Plug-in: 7.5.1599
  • Software Update Agent Plug-in: (not listed)

Server Agent Plugins
  • Package Server: 7.5.1670
  • Deployment NBS Plug-in: 7.5.1599
  • Deployment Package Server: 7.5.1597
  • Deployment Task Server Handler: 7.5.1597
  • Altiris Client Task Server Agent (CTServerAgent.dll): 7.5.1672
  • Altiris Client Task Server Agent (CTServerAgent_x64.dll): 7.5.1672

Agent / Plugin Name for Version 7.5 with HF3

Core Agent
  • Altiris Base Task Handlers: 7.5.1673
  • Altiris Client Task Agent: 7.5.1673
  • Symantec Management Agent: 7.5.1673
  • Inventory Rule Agent: 7.5.1670
  • Software Management Framework Agent: 7.5.1673

Client Agent Plugins
  • Altiris Inventory Agent: 7.5.1597
  • Software Management Solution Agent: 7.5.1597
  • Symantec pcAnywhere Agent: 12.06.8096
  • Deployment Solution Plug-in: 7.5.1600
  • Software Update Agent Plug-in: (not listed)

Server Agent Plugins
  • Package Server: 7.5.1670
  • Deployment NBS Plug-in: (not listed)
  • Deployment Package Server: 7.5.1597
  • Deployment Task Server Handler: 7.5.1597
  • Altiris Client Task Server Agent (CTServerAgent.dll): (not listed)
  • Altiris Client Task Server Agent (CTServerAgent_x64.dll): (not listed)

Agent / Plugin Name for Version 7.5 with HF4

Core Agent
  • Altiris Base Task Handlers: 7.5.1674.11
  • Altiris Client Task Agent: 7.5.1674.11
  • Symantec Management Agent: 7.5.1674.11
  • Inventory Rule Agent: 7.5.1670
  • Software Management Framework Agent: 7.5.1673

Client Agent Plugins
  • Altiris Inventory Agent: 7.5.1597
  • Software Management Solution Agent: 7.5.1597
  • Symantec pcAnywhere Agent: 12.06.8096
  • Deployment Solution Plug-in: 7.5.1600
  • Software Update Agent Plug-in: (not listed)

Server Agent Plugins
  • Package Server: 7.5.1670
  • Deployment NBS Plug-in: (not listed)
  • Deployment Package Server: 7.5.1597
  • Deployment Task Server Handler: 7.5.1597
  • Altiris Client Task Server Agent (CTServerAgent.dll): 7.5.1674.11
  • Altiris Client Task Server Agent (CTServerAgent_x64.dll): 7.5.1674.11

Additional Agent Versions for 7.5 HF2

Server Agent Plugins (SMS)
  • Virtual Machine Management Task Handler: 7.5.1597
  • Altiris Pluggable Protocols Architecture Agent: 7.5.1597
  • Altiris Monitor Agent RMS: 7.5.1597

SMP Agent Version Report:

Agent_Report.png

Agent version numbers marked with a * indicate that the version is not current.

This report allows you to monitor the agent rollout for every client in your environment.

 

SQL Report:

declare @list table (guid uniqueidentifier)

--insert into @list
-- select vc.guid from vResource vc
-- group by vc.guid having count(*) > 0

insert into @list
select _resourceGuid as guid from Inv_AeX_AC_Client_Agent;

IF OBJECT_ID('tempdb..#a1') IS NOT NULL DROP TABLE #a1

select vc.name [Computer], aca.[Agent Name] [Agent],
       aca.[Product Version] + case when aca.[Product Version] <> a.[Max Version] then ' *' else '' end [version]
  into #a1
  from @list x
  left join Inv_AeX_AC_Client_Agent aca on aca._resourceGuid = x.guid
  left join vComputer vc on vc.guid = aca._resourceGuid
  left join (
        select [Agent Name], max([Product Version]) [Max Version]
          from Inv_AeX_AC_Client_Agent
         group by [Agent Name]
       ) a on a.[Agent Name] = aca.[Agent Name]

DECLARE @agent_list varchar(900)
SET @agent_list = ''

SELECT @agent_list = @agent_list + ', ' + '[' + [Agent] + ']'
  from (select distinct [Agent] from #a1) a
 order by [Agent]

SET @agent_list = substring(@agent_list, 3, len(@agent_list))
-- print @agent_list

declare @sql nvarchar(2000)
set @sql = N'select [computer], ' + @agent_list +
           ' from (select [computer], [Agent], [version] from #a1) b1 ' +
           'pivot ( max(version) for [Agent] in (' + @agent_list + ')) c2'
-- print @sql

execute sp_executesql @sql
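
If you only want specific agents to appear as pivot columns, you can trim the temporary table before the pivot column list is built. A minimal sketch, reusing the #a1 table and [Agent] column from the report above; run it right after the select ... into #a1 statement, and note that the agent names listed are only examples that should be adjusted to your environment:

-- Optional: keep only selected agents before building the pivot column list (example names, adjust as needed)
delete from #a1
 where [Agent] not in ('Altiris Agent', 'Altiris Inventory Agent', 'Altiris Software Update Agent')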

 


How Does Disabling the Mac Root User Affect the SMP Agent for Mac


Short answer:

Disabling the Mac root user via the Directory Utility app or the equivalent command line has no effect on the SMP Agent for Mac installation or its subsequent functionality.

Long answer:

The Mac OS X Directory Utility, located at /System/Library/CoreServices/Directory Utility.app, has the option to enable or disable the root user on a Mac. The common thought is that disabling the root user in the Directory Utility will disable the root user everywhere on the system and perhaps impede the ability to install the SMP Agent for Mac or break its subsequent functionality. That is not true.

Disabling the root user in the Directory Utility will only disable the ability to login at the GUI login screen as the root user. When the root user is enabled, it is possible to switch users at the GUI logon screen, click ‘other…’ and enter ‘root’ and the root user password to login. After doing so, elevated privileges are granted. Examples of elevated privileges can be found in the System Preferences app. A root user entering “Security & Privacy”, “Users & Groups”, etc., will not need to click the lock icon to authenticate while a non-root user will need to click the lock and authenticate in such places. It’s a common security practice in the Unix, Linux and Mac world to log in as a non-root user and use ‘sudo’ or to authenticate before performing functions that impact the system. This way, it’s not as easy to execute commands that delete, disable or otherwise negatively impact the system. At least, that’s the hope.

Disabling the root user does not disable root functionality at the OS or shell level. If it were to disable the root user at the shell level, then the OS would likely break since most OS-level functions need to run with full root privileges. The root user is still alive and functioning at the shell level by necessity. 

Regardless of the state of the root user in the Directory Utility, it is always possible to switch to the root user at the shell level since it is never disabled at that level. That is why using ‘sudo’ at the shell level in the Terminal app still works. When the SMP Agent for Mac is installed, it switches to the root user and installs the agent with root privileges. Any subsequent SMP Agent processes that need to run with root privileges will do so.

There is no need to be concerned with customers disabling the root user in the Directory Utility. It is a perfectly acceptable security configuration for any customer and does not impact the SMP Agent for Mac.

Note: It may be possible to place some restrictions on the root user on a Unix, Linux or Mac system. That is separate from enabling/disabling the root user in the Directory Utility. Restricting the root user or various executables is not a supported configuration for running the SMP Agent for Mac. We require a default, unrestricted root user configuration for proper functionality regardless of whether it is enabled or disabled at the GUI interface level.  

Inventory, Reporting, and Filtering of Internet Explorer in Inventory Solution 7.x


Introduction

In previous versions of Inventory Solution, Internet Explorer was well reported. With the move to the CIM (Common Information Model), Inventory lost some of its robust reporting of Internet Explorer. This article demonstrates how to set up inventory and reporting to properly track Microsoft Internet Explorer in your environment. Hopefully this can help you fill this need should this data become necessary.

Inventory

It is recommended to run a Delta Software Inventory at least weekly, with the Software Discovery option checked, to keep your data from growing stale. As long as you are at or above the versions mentioned in the note above, a delta inventory will have a minimal impact on Notification Server processing.

By default, due to the nature of how Internet Explorer is reported by Windows, it is not captured by a default software scan run through an Inventory Policy with the check box labeled "Software - Windows Add/Remove Programs and UNIX/Linux/Mac software packages". This is because the software is listed as part of the operating system, which we filter out by default. While you can adjust the filters, you'll get a lot of clutter in the Software Catalog, so that is not recommended.

Software Resource

First, we need to create a Software Resource to use as the inventory point for Internet Explorer. This will be used for reporting and allows the data we collect to be inserted into the standard "Software" tables. The following steps take you through the creation process.

  1. In the Symantec Management Console, click on Manage > Software Catalog.
  2. Click Add under the Newly Discovered, Undefined software list, as shown:

image001_10.png

  3. Provide a Name for the Resource, such as "Microsoft Internet Explorer 9".
  4. Provide a version, in this example "9".
  5. The Company will be the manufacturer, in this case Microsoft.
    NOTE: The Name, Version, and Company will be used for filtering in a Software Product, and as parameters for the reports and filters created later. It is important to set these correctly.

image003_12.png

  6. Click on the Rules tab.
  7. Click the Add link along the Detection Rule entry.
  8. Click the blue plus under Expressions in the left-hand pane > browse under Standard Rule > select Registry Key Version.
  9. Provide the following values, as applicable to the version you are tracking:
    a. Registry key path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer
    b. Registry entry: Version
    c. Version: > 8 (this should be one version below the version you wish to capture)

image005_6.png

  10. Click OK to save the parameters of the rule.
  11. Right-click on the AND operator and choose Standard Rule > Registry Key Version.
  12. Use the same values as in step 9, but change 9.c to: Version: < 10
  13. The above will only capture version 9, nothing above or below.
  14. Click OK to save the new Software Resource.

Now the Software Resource is created and available for use within a Targeted Software Inventory Policy. You can repeat these steps for other versions of Internet Explorer as needed. Change the values used in the version fields to indicate which version you wish to capture: the first value should be the version below the desired one, and the second the version above, i.e. 7 and 9 respectively to capture version 8.

At this point you may also want to review the Clean-up section at the end of this article.

Targeted Software Inventory Policy

The next step is to create or add to a Targeted Software Inventory Policy. These policies are basic but can utilize the detection checks of any selected Software Resource. Follow the steps below to create a new Targeted Software Inventory Policy. If you already have one created and enabled, you can simply add the Software Resource created for Internet Explorer to the existing policy to capture that data.

  1. In the Symantec Management Console, browse under Manage and select Policies.
  2. Open the Discovery and Inventory folder, right-click on Targeted Software Inventory, browse under New, and select Targeted Software Inventory.
  3. Click on the Name text, labeled New Targeted Software Inventory, to enable edit mode.
  4. Provide a name, such as: Capture Internet Explorer Installs.
  5. Click on Select Software to add our Software Resources.
  6. Find the newly created Resource or Resources to add. When found, select them and use the > symbol to add them, as shown:

image007_3.png

  7. Note that you can add multiple Software Resources, one for each version of Internet Explorer you wish to track. Additionally, the Targeted Software Inventory policy can contain any number of Software Resources and does not need to be limited to Internet Explorer.
  8. Click OK to save the selections.
  9. Expand the Schedule section by clicking the expand arrow to the far right of the schedule bar.
  10. Click Add schedule and select Schedule Time.
  11. Provide a time that works for you. Note that by default these policies will run as soon as possible after the scheduled time if the computer is off at that time.
  12. Now browse under the Applied to section.
  13. The default filter is Windows Computers with Inventory Plug-in, which is usually the filter that makes the most sense (broader filters may include Linux or Mac systems). If you wish to change the filter, follow the next steps.
  14. Click Apply to and select Computers.
  15. Provide the following parameters for your filter:
    a. THEN: exclude computers not in
    b. Filter - leave this setting
    c. <Filter> Choose a filter that meets your requirements

image009_0.png

  16. Click Update results to ensure you get the results you are expecting.
  17. Click OK to save the new filter.
  18. Other filters can be used, such as Group filters, as in this example:

image011.png

  19. If you created a new filter, be sure to remove the old one by selecting it and clicking the red X.
  20. Click the Save changes button to save the new policy.
  21. When ready, change OFF to ON and click Save changes to enable the policy to roll out and execute on target clients.

The policy will execute the detection check for all Software Resources included in the policy's list. These detection checks add entries to softwarecache.xml, which stores installed-software data locally on the client, and the "Install Software" data for that software is then sent up to the Notification Server via an NSE.

Reporting

Reporting includes Filters that can be used in jobs and tasks, as detailed in the two subsequent sections.

Reports

Once you've set up your Software Resources to hold installed instances of Internet Explorer and set up a Targeted Software Inventory to capture the data, you are ready to review those installs via reports. The first example is our standard Installed Software report.

  1. In the Symantec Management Console, browse under Reports and select All Reports.
  2. In the Reports tree, browse under Discovery and Inventory > Inventory > Cross-platform > Software / Applications > Software > and click on Installed Software.
  3. Use the following parameters to find Internet Explorer:
    a. Name: %Internet Explorer%
    b. Version: % (You can add a version here if you are only looking for a specific one. If you want all versions, use just the wildcard as shown.)
    c. Company: Microsoft
    d. Type: All Software
    e. Discovered since: Set a time that works for your reporting timeframe. Keep in mind that older inventory might exclude systems that reported early on.

image013.png

  4. Click the Refresh button when the parameters are set correctly.
  5. The results contain one row for each version reported. From here you can drill down to see which computers have each version.

You can also use the Installed Software by Computer report to check what is installed on a given computer, including the entry now captured for Internet Explorer.
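
If you prefer to pull the same data straight from the CMDB, a SQL sketch along these lines can be used. It assumes the Inv_InstalledSoftware table (with _ResourceGuid, _SoftwareComponentGuid and InstallFlag columns) and the vItem and vComputer views; these names may differ in your SMP version, so treat it as a starting point only:

-- List computers reporting an installed Internet Explorer software resource
-- (assumes Inv_InstalledSoftware / vItem / vComputer; adjust to your schema)
select vc.Name as [Computer], s.Name as [Software]
  from Inv_InstalledSoftware i
  join vItem s on s.Guid = i._SoftwareComponentGuid
  join vComputer vc on vc.Guid = i._ResourceGuid
 where s.Name like '%Internet Explorer%'
   and i.InstallFlag = 1
 order by vc.Name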

Filter

By creating a filter based on the Internet Explorer data that has been collected, you can target systems with specific versions of Internet Explorer. This makes it valuable for tracking, upgrading, patching, or adding plug-ins to specific versions. The following steps detail how to create these filters. Note that it is advised to first create the Software Resource and the Targeted Software Inventory Policy and have that policy execute on your target systems; without the inventory data the Silverlight interface will not show the Resource you've created.

  1. In the Symantec Management Console, go to Manage > Software.
  2. In the Newly Discovered Software list, you should now have the Resources you created listed. Use the quick filter to find Internet Explorer in the list.
  3. Right-click on the version you wish to create a filter for > go to Actions > and select Create Installed Software Filter.

image015.png

  4. Provide a name for the filter, such as Computers with Windows Internet Explorer 9 Installed.
  5. Click OK to save the filter. That's it!
  6. To validate, browse in the console under Manage > Filters.
  7. In the left-hand tree browse under Software Filters. You'll see the new filter listed.
  8. Select the filter.
  9. Review the results; they should include all computers that have that version of Internet Explorer installed.

Keep in mind that for both this filter and the Installed Software report discussed previously, systems need to have run the Targeted Software Inventory policy; that is where the data for both the filter and the report comes from.

Clean up

To ensure we are reporting correctly, we'll want to remove any other Software Releases for Internet Explorer that may already exist in the system. To do this, follow these steps:

  1. In the Symantec Management Console, go to Manage > All Resources.
  2. Browse in the left-hand tree under Default > All Resources > Software Components > and select Software Release.
  3. Use the quick filter in the upper right of the results window to filter by typing "Internet Explorer".
  4. Review the results to see if you have any duplicates for the Software Resources you created. Before you delete one, make sure the one you delete is not the one that contains the Detection Rule you created and assigned to the policy and filter.
  5. You can review by right-clicking, selecting Actions, and clicking Edit Software Resource.
  6. You can now look under the Rules tab to see if it is the detection rule you created.
  7. Back at the All Resources results screen, right-click and select Delete for any duplicates.

Conclusion

I hope this helped show you a method to track and use data on Internet Explorer. It can help you manage Internet Explorer and take a proactive approach to getting users into compliance with any versioning requirements, or to deploying plug-ins for the correct versions. While this demonstration has been about Internet Explorer, the same process can be used for any software, which makes it useful for anything that may not be captured automatically through Software Discovery via the Inventory Policies.

Mac Commands - Directory Editor, dscl and Custom Inventory


This article provides an overview of the Mac OS X Directory Editor and the associated dscl command line utility. It includes examples of using the Symantec Management Platform's Custom Inventory for Mac to gather this directory data. The article is intended to present this information simply, yet in enough detail to get you started.

 
The directories available to a Mac include the local directory on the Mac, Active Directory when the Mac is bound to a domain, etc.
 
WARNING: Keep in mind that accessing production data without authorization may be a violation of company policy and/or government laws. You are advised to check with your company prior to accessing production company data to avoid any issues arising due to unauthorized access. All testing for this document was done on a test Mac computer and a test Active Directory server with dummy data. 
 
 
Overview of the Directory Editor and dscl command
 
The Mac OS X 'Directory Utility' provides functionality to bind a Mac computer to a domain, enable/disable the root user and several other features. This utility is found in /System/Library/CoreServices/Directory Utility.app.
 
One of the lesser-known features of the Directory Utility is the “Directory Editor”. 
 
The command line version of this feature is ‘dscl’, which is described in its man page as the “Directory Service Command Line Utility”. 
 
Both the Directory Editor and the dscl command allow for connecting to, querying and interacting with a directory. 
 
 
Terminology
 
Note: ‘~=’ means ‘equals or roughly equal to’ in this document. 
 
Node
 
Node ~= datasource ~= (server and database). The node can specify a local database or a database hosted on another machine. Sample nodes are similar to the following: 
 
/Local/Default
/Active Directory/MYDOM0/All Domains
 
The first row shows a node to the Default database on the local Mac computer. The second row is for the All Domains Active Directory database in the specified domain.
 
Commands
 
The dscl utility has several operations that can be performed on a database record. Among them are list, read, readall, create, delete, merge, change, etc. See ‘man dscl’ for more details on available commands. This document will only deal with the read type of commands. 
 
Path
 
Path ~= (table and record). The path typically includes the database table name and the record name. Examples include: 
 
Users johndoe
Computers mymac$
 
 
 
The Directory Editor
 
In the Directory Editor screen, servers and databases are shown in the ‘nodes’ drop-down list:
 
DirectoryEditor_Nodes.png
 
 
Tables, or the first portion of the ‘path’ for the selected node, are shown in the ‘Viewing’ drop-down list:
 
DirectoryEditor_Databases.png
 
 
 
Individual records, or the second portion of the path, are then shown in the left-pane of the app’s screen. In this case, the only local computer is the ‘localhost’, which makes sense. Once a record is selected from the list in the left-pane, individual attributes and corresponding data for that record are shown in the main portion of the window, as shown here:
 
DirectoryEditor_Records_and_Attributes.png
 
 
This explains the basic process for finding directory data using the Mac Directory Editor and the dscl command line: select or specify a server and database (node or data source), then select a table and record (path). It is then possible to see individual attributes such as IPAddress, DNSName, etc.
 
 
 
The ‘dscl’ Command
 
The dscl command is run from a shell prompt using the Terminal app or an equivalent app. It has two modes – interactive and non-interactive. The dscl command returns the same data shown in the Directory Editor app. 
 
Note: Most names are case-sensitive when using the dscl command. 
 
 
Interactive Mode
 
Typing ‘dscl’ at a shell prompt and pressing ‘enter’ provides access to the interactive mode. Interactive mode displays a ‘>’ prompt. At that point, dscl is waiting for further commands. To quit interactive mode, type the letter ‘q’ and press ‘enter.’ 
 
Note that the ‘ls’ and ‘cd’ commands work within interactive mode. This allows for viewing entries at the current location and to traverse the node and path. The prompt will include the current location in the directory path. 
 
 
Non-Interactive Mode
 
In non-interactive mode, the entire command is entered on one line, the resulting output is displayed on-screen, followed by the normal shell prompt. 
 
The general syntax of this command is to specify a node, a command to perform, a path and, optionally, a list of attributes or columns.
 
Node, command, path, attributes
 
or
 
data source and database, command, table and record, attributes
 
Note that attributes are optional. Not specifying attributes returns all attributes in the specified table. Viewing all attributes of a table may be helpful for determining attribute names and which attributes are most helpful for a given requirement. 
 
Sample non-interactive commands: 
 
• dscl /Local/Default read Computers/localhost IPAddress
• dscl  /Active\ Directory/MyDomSrv/mydom.com -read /Computers/mymacpro$ distinguishedName 
 
Note that the node, command and path must be specified in this order. It does not seem possible to specify the command, node/path or other variations. 
 
 
 
Sample Interactive sequence to read localhost data
 
dscl
Entering interactive mode... (type "help" for commands)
 
ls
LDAPv3
Local
Contact
Search
 
cd /Local/Default
/Local/Default > read Computers/localhost
 
dsAttrTypeNative:KerberosFlags: 110
AppleMetaNodeLocation: /Local/Default
IPAddress: 127.0.0.1
IPv6Address: ::1 fe80::1%lo0
KerberosServices: host afpserver cifs vnc
RecordName: localhost
RecordType: dsRecTypeStandard:Computers
/Local/Default > 
 
Sample Non-interactive command to read localhost data
 
dscl /Local/Default read Computers/localhost
 
dsAttrTypeNative:KerberosFlags: 110
AppleMetaNodeLocation: /Local/Default
IPAddress: 127.0.0.1
IPv6Address: ::1 fe80::1%lo0
KerberosServices: host afpserver cifs vnc
RecordName: localhost
RecordType: dsRecTypeStandard:Computers
 
Sample Non-interactive command to read a single attribute from the localhost record
 
dscl /Local/Default read Computers/localhost IPAddress
IPAddress: 127.0.0.1
 
 
The following examples show the interactive and non-interactive commands for gathering the DNSName, RealName, and RecordName from Active Directory for a specific computer.  
 
Sample Interactive command to read specific active directory computer data
 
dscl
Entering interactive mode... (type "help" for commands)
 
cd Active\ Directory/MYDOM0/All\ Domains/Computers
/Active Directory/MYDOM0/All Domains/Computers 
 
ls
MYDOM$
mymacmini$
mymacpro$
MYNB$
WIN7VM $
 
/Active Directory/MYDOM0/All Domains/Computers > read MYNB$ DNSName RealName RecordName
 
DNSName: MYNB.mydom.com
RealName: MYNB
RecordName: MYNB$
 
 
Note in the interactive sample, above, the database name (Computers) was included in the node portion of the command. The non-interactive mode does not allow for putting the database in the node. The following two non-interactive commands show the incorrect and correct node and path syntax, respectively. (There may be variations to this rule.)
 
 
Sample Non-interactive command to read active directory computer data
  
* The node is enclosed in double quotes since it contains spaces. 
 
dscl "/Active Directory/MYDOM0/All Domains/Computers" -read MYNB$ DNSName RealName RecordName
Data source (/Active Directory/MYDOM0/All Domains/Computers) is not valid.
 
dscl "/Active Directory/MYDOM0/All Domains" -read Computers/MYNB$ DNSName RealName RecordName
 
DNSName: MYNB.mydom.com
RealName: MYNB
RecordName: MYNB$
 
 
 
Custom Inventory to Gather Database Information
 
At this point, we are ready to create a custom inventory script to gather specific data. Following are two custom inventory samples that use ‘dscl’ to gather directory data. 
 
Note: The first line of each script is commented so the helper script is not included. This allows for seeing the output on-screen without sending the results to the NS/SMP server. To actually send the data to the NS/SMP server, remove the beginning ‘#’ sign. 
 
Note: Those with greater programming skills may be able to reduce the number of ‘dscl’ commands executed in these scripts. Feel free to share the code, if you do. ☺
 
Gathering Local User information
 
-------------------------------------------------------------------
#. `aex-helper info path -s INVENTORY`/lib/helpers/custominv_inc.sh
# SCRIPT_BEGINS_HERE
#
#!/bin/sh
 
# specify custom inventory data class, attributes, etc. 
echo cust_mac_localusers
echo "Delimiters=\" \""
echo "string250 string50 string250 string250 string50 string250"
echo "NFSHomeDir PrimaryGroup RealName RecordName  UniqueID UserShell"
 
# list users, get data for each user
for i in `dscl  /Local/Default -list /Users`; do 
  uniqueID=`dscl  /Local/Default -read /Users/$i UniqueID | awk '{ print $2 }'`
 
  # exclude system-created users, which have IDs below 500
  if [ "${uniqueID:-0}" -ge 500 ]; then
    nfsHD=`dscl  /Local/Default -read /Users/$i NFSHomeDirectory | awk '{ print $2 }'`
    pGroupID=`dscl  /Local/Default -read /Users/$i PrimaryGroupID | awk '{ print $2 }'`
    realName=`dscl  /Local/Default -read /Users/$i RealName | awk '{ print $2 }'`
    recName=`dscl  /Local/Default -read /Users/$i RecordName | awk '{ print $2 }'`
    userShell=`dscl  /Local/Default -read /Users/$i UserShell | awk '{ print $2 }'`
    echo $nfsHD $pGroupID $realName $recName $uniqueID $userShell
  fi
done
-------------------------------------------------------------------
 
 
 
Gathering Active Directory Computer Names
 
-------------------------------------------------------------------
#. `aex-helper info path -s INVENTORY`/lib/helpers/custominv_inc.sh
# SCRIPT_BEGINS_HERE
#
#!/bin/sh
 
# specify custom inventory data class, attributes, etc. 
echo cust_ad_computernames
echo "Delimiters=\" \""
echo string50 string50 string50
echo "RealName DNSName DistinguishedName"
 
# get a list of computers in AD and return specific attributes; use awk to exclude the attribute name and return only the value. 
for i in `dscl  /Active\ Directory/MYDOM0/mydom.com -list /Computers`; do 
  realName=`dscl  /Active\ Directory/MYDOM0/mydom.com -read /Computers/$i RealName | awk '{ print $2 }'`
  dnsName=`dscl  /Active\ Directory/MYDOM0/mydom.com -read /Computers/$i DNSName | awk '{ print $2 }'`
  distName=`dscl  /Active\ Directory/MYDOM0/mydom.com -read /Computers/$i distinguishedName | awk '{ print $2 }'`
  echo $realName $dnsName $distName
done
-------------------------------------------------------------------
 
The above two samples are attached below. 
 

 

Inactive Computers, Seasonality and Inventory


Table of contents

  • Introduction
  • Inactive Computers
  • Shelved computers
  • Seasonality
  • Impact on Inventory 
  • Conclusion
  • References

Introduction

Since last year I have been working with a customer that manages a lot of computers world-wide, and this involvement can be seen directly here on Connect (I implemented and tested aila2 [1] there, added some features to Zero Day Patch [2] to suit their needs, and created the Patch Trending Toolkit [3], SWD Trending [4] and a few other tools that are not Connect ready yet [5]).

One of the recent challenges I faced was to explain why inventory data quality is not at the expected level (this customer is quite demanding, so expectations generally start at 100% success). In order to explain why 100% data return is not possible, I have come up with a small graphic that I will explain here (the graphic is generic and uses some patch trending data).

Inactive Computers

I added a monitoring module to Patch Trending to account for inactive computers over time back in September 2013. The reason for this was simple: you can't patch computers that are not on the network, and if you don't know how many are off, you can't understand how they impact your compliance.

The inactive computer module records computer counts based on the 'Computers to purge' criteria at 7 days and 17 days. The 7-day threshold takes care of anyone who is out of the office for a week, whilst the 17-day threshold takes care of holidays longer than 2 full weeks (that is, leaving on a Friday evening and returning on the 3rd Monday after that).

With 6 months of data behind us we can estimate that the percentage of inactive computers is as follows:

Region    Inactive (17 days+)    Inactive (7 days+)
NALA      5%                     8%
EMEA      6%                     10%
APAC      3%                     6%

These are the low points, rounded up, so the percentage of inactive computers is often above this, with some impressive peaks. So let's look at those and their root causes in the next section.

Shelved computers

But before we get there we have to look at another type of inactive computer that is not necessarily present in your environment but is found in large enterprises: shelved machines. In our case we have a small but regular flow of computers that are received, imaged, added to the CMDB and then put back in a box or on a shelf for a certain period of time. The impact can be small for Inventory Solution but rather bad for Patch Management Solution. These computers are not so difficult to detect, but they can impact your inventory results and patch compliance very quickly.

Recently we found with a customer that 1% of the computer estate could be considered boxed (with an active time span of less than 3 days) and accounted for over 10% of the entire estate's vulnerabilities: 10% of vulnerabilities that are not exploitable from a security perspective (as long as the machines are off the network) but that have no chance of being fixed either (until they come back on the network).

Seasonality

Computers are used by people like you and me, so there are a lot of trends related to human behavior. Some of these are visible through the IIS log files (start-up / login times between 0800 and 0900) and through the inactive computer reports.

Let's look at graphs from each of the aforementioned regions and discuss the events that cause the peaks seen there.

North America Latin America (NALA):

INA_NAM.png

The first bump seen on this graph (pretty much centered) is related to the Thanksgiving holiday, and the massive peak on its right-hand side is the year-end holiday (Christmas and New Year). At the peak we had ~48% of the computers out for more than 7 days and ~13% out for more than 17 days.

Europe Middle East Africa (EMEA):

INA_EMEA.png

In this region (largely dominated by European countries) we have a lot of small variations up to the 1st of November (the All Saints bank holiday) and much less until we reach the year-end break, with a peak between Christmas and New Year at ~74% of computers inactive for 7+ days and 29% inactive for more than 17 days.

Asia Pacific (APAC):

INA_APAC.png

This region is dominated by China, so the seasonal events are dominated by the main Chinese holidays: the Moon Festival in October and Chinese New Year (at the end of January this year), but we can also see the Christmas / New Year celebration (most likely from Australia).

The peaks for 7+ days are 56% and 57% respectively, with only a marginal increase in the count of computers inactive for 17+ days.

Consolidated view of the peaks per region:

Region    17 days+ peak    7 days+ peak
NALA      13%              48%
EMEA      29%              74%
APAC      6%               57%

 

Impact on Inventory

Inventory and Patch Management are both impacted by inactive computers and seasonality, but the impact on Inventory is generally higher because inventory policies run at a larger interval than the Windows System Assessment scan and the Software Update installation window (both happen daily or multiple times per day, versus every two weeks for a full inventory).

With a full inventory running every 2 weeks (so we have a sliding inventory age ranging from 0 to 13 days) we can visualize the impact of inactive computers on the data quality / update rate:

Inactive-vs-Inventory_0.png

We can see in green the 2-week full inventory window, where the big green circle travels to the right as days go by and we come closer to the full inventory schedule.

Whilst this is happening we have computers that are being built and should be running their full inventory (the policy has a schedule starting in the past, so it should run as soon as possible), however some may not have the time to do so.

Computers inactive between 7 and 17 days still show an up-to-date inventory (i.e. inventory last modified within the last 4 weeks), however computers inactive for 17+ days have missed 2 or 3 inventory schedules, so they will be flagged in our inventory trending reports [6].

We are only plotting the full inventory, but delta inventories do not run any better on inactive computers, regardless of their more frequent interval ;).

Conclusion

Inventory data is greatly impacted by inactive computers and seasonality, to a greater extent than Patch Management, whilst boxed or shelved computers (new or upgraded) can have a very serious impact on your patch compliance.

So in all cases, it's very important to monitor your environment to understand how the human factor impacts inventory, patch and other Altiris related solutions.

And here is the final graph made from the above data:

Time-considerations.png

References

[1] Altiris IIS log analyzer 2

[2] Zero Day Patch version 9

[3] Patch Trending

[4] SWD Trending

[5] A note on things to come

[6] Crafting an Inventory "freshness" report

Taking Some Radical Actions to Improve Inventory Quality on the Symantec CMDB

  • Introduction
  • Resource Update Summary hashes history
  • Manifesto against datahash verification
  • Creating a SQL Task to clean up the hashes
  • Conclusion
  • References

Introduction

Inventory quality is very important for a lot of customers, and it's very important for Symantec as well, as invalid data found in the CMDB is often attributed to product quality (sometimes correctly, other times not so).

Today we'll review a specific element of the inventory gathering chain, the data insertion, and we'll take some (arguably) drastic steps to prevent these issues.

Resource Update Summary hashes history

As previously documented [1] (and the process has not changed since I wrote that article in November 2009), inventory data received from a resource is inserted in the database only if the inventory hash does not match the hash already stored in the ResourceUpdateSummary table (for that resource and InventoryClassGuid).

This process has proved to be not so reliable [2][3][4] (and although the documentation points to 6.x and forwarding, the issue still applies to 7.x and agent inventory). This is because the inventory data found in the various data class tables can differ from the data that generated the hash recorded in the ResourceUpdateSummary table. This leads to agents sending valid data to the server that is processed but not inserted in the DB when it should be.
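
If you want to see what the server currently has on record for a given computer before taking any action, the stored hashes can be inspected directly. A minimal sketch, assuming the ResourceUpdateSummary table carries a ResourceGuid column alongside InventoryClassGuid and DataHash, and using the vComputer view for the computer name ('MYCOMPUTER' is a placeholder):

-- Inspect the stored inventory hashes for one computer ('MYCOMPUTER' is a placeholder name)
select dc.[Name] as [Data class], dc.DataTableName, rus.DataHash
  from ResourceUpdateSummary rus
  join DataClass dc on dc.Guid = rus.InventoryClassGuid
  join vComputer vc on vc.Guid = rus.ResourceGuid
 where vc.Name = 'MYCOMPUTER'
   and dc.DataTableName like 'Inv_%'
 order by dc.DataTableName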

Manifesto against datahash verification

Warning! This view reflects the opinion of the author and the author only.

With inventory data being sent only when modified (in the case of a standard inventory) or in full (in the case of a full inventory), one can ask why we are using a datahash verification process at all.

After all, if the agent finds that its latest inventory differs from the previous one and sends it to the server, it would make sense to commit the received data (even if the server thinks, often wrongly, that the data is already stored in the DB).

So I argue here that it is better to insert the data and take the risk of re-inserting data that is already correctly stored in the DB, versus taking the risk of not inserting valid data. And if the SQL server takes a hit because of this, we can always re-work the inventory task schedules to take that into account.

Creating a SQL task to clean up the hashes

Now it is very simple to clean up the Inventory Solution classes so that the datahashes (in ResourceUpdateSummary) are cleared before you start a new day. We start with this SQL query (note that we do not include the basic inventory classes, as those events are sent very often):

update ResourceUpdateSummary
   set DataHash = null
 where InventoryClassGuid in (
			 select d.Guid --, d.name, d.DataTableName 
			   from DataClass d
			  where DataTableName like 'Inv_SW%'
				 or DataTableName like 'Inv_HW%'
				 or DataTableName like 'Inv_UG%'
				 or DataTableName like 'Inv_OS%'
		)
   and datahash is not null
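
Before scheduling the cleanup, you may want to see how many hashes it will actually clear. The following count uses exactly the same class selection as the update above, so it is simply a read-only version of that query:

-- Count the hashes the cleanup above would clear (same class selection, read-only)
select count(*) as [Hashes to clear]
  from ResourceUpdateSummary
 where InventoryClassGuid in (
        select d.Guid
          from DataClass d
         where DataTableName like 'Inv_SW%'
            or DataTableName like 'Inv_HW%'
            or DataTableName like 'Inv_UG%'
            or DataTableName like 'Inv_OS%'
       )
   and DataHash is not null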

Then we create the new SQL task to run against the CMDB:

Connect_Cleanup_RUS_Hashs.png

And we schedule this task to run every day (around 0500 - not on the hour).

RUS_CleanUp_Sched.png

With the count of affected rows recorded in the SQL task, we can even track the count of inventory classes updated daily, which is a nice side effect of this clean-up process:

Cleanup_Exec_Result.png

Conclusion

Inventory data may not be inserted into the database for the wrong reason. With the provided steps in place you can now ensure that inventory data received by the server is inserted, unless the same data was already received since your last cleanup (so duplicated data will not impact database use at the insert point).

You can also see how many data classes are updated within the scheduled interval, to track how much data is really coming back (and whether this really has an impact on your database usage as well).

References

[1] Spotlight on the Notification Server ResourceUpdateSummary Table

[2] Data is not synchronized on the reporting server even after a full refresh cycle from the forwarding server

[3] KNOWN ISSUE: Forwarded data missing for some data classes

[4] Hash checker command line options details here!

Map User Location and Department to Incident Created via Email


If you would like to have the affected user's location and department populated along with the incident data, you need to modify the SD.Email.Monitor project.

1. Open SD.Email.Monitor.package.

2. Switch to Model[2] ProcessMessage.

All of the following goes between the 2.40 - Verify Primary Email component and the 2.112 - Create Incident (Single Value Mapping) component.

3. Add a Get User Details Component and add connections from the Verify Primary Email component's 'equals' and 'Item in collection' paths (picture of the end result below).
Component configuration: 
- Input: Email Address expand and set value source to Process variables. From variables list add [UserToCheck.PrimaryEmail].
- Relative URL: expand and set value source to Process variables. From variables list add [[ProfileProperties].service_desk_settings_data_services_url]

 4. Add Get Department for User Component and connect Get User Details Component to it.
Component configuration:
- Input: User ID: Expand add Process variables and add [GetUserDetailsComponentResult.UserUniqueId]
- Relative URL: expand and set value source to Process variables. From variables list add [[ProfileProperties].service_desk_settings_data_services_url]

5. Add Text Exists component and connect Get Department for User to it.
Component configuration:
- Expand and add variable GetDepartment.Name
- Exists path connect to Single Value Mapping (5.1) to Map department.
- Does not exist path connect to GetLocationForUser component:

5.1 Add Single Value mapping component to map department.
Component configuration:
- Target Type: IncidentTicket (under Symantec.ServiceDesk.Im.Core).
- Output Variable Name; write: Incident (Output must not be changed!).
- Mapping: Expand GetDepartment (on left) connect Name to AffectedDepartment (on right) and ResourceGuid to AffectedDepartmentId

6. Add a GetLocationForUser component. Connect the Does not exist path (from step 5) and the Single Value Mapping component (5.1) to it.
Component configuration:
- Expand User ID and add variable [GetUserDetailsComponentResult.UserUniqueId].
- Relative URL: expand and set value source to Process variables. From variables list add [[ProfileProperties].service_desk_settings_data_services_url] 
Note:
Leave email empty!

7. Add Text Exists component and connect GetLocationForUser component to it.
Component configuration:
- Expand and add variable UserLocation.Name
- Exists path connect to Single Value Mapping (7.1) to Map Location.
- Does not exist path goes to Create Incident (component 2.112).

7.1 Add Single Value mapping component to map Location.
Component configuration:
- Target Type: IncidentTicket (under Symantec.ServiceDesk.Im.Core).
- "Map into Existing Value" must be checked!
- Output Variable Name; write: Incident (Output must not be changed!).
- Mapping. Expand UserLocation (on left) connect Name to AffectedLocation (on right) and LocationUniqueId to AffectedLocationId
- Connect it with Create Incident (component 2.112).

8. Add Exception Trigger by Components and connect it to Create Incident (component 2.112).
Component configuration:
 - expand Components and select:
 Get User Details Component
 Get Department for User
 GetLocationForUser

9. Edit: Create Incident Single value mapping component.
Map into Existing Value must be checked!

10. Test, publish.

Please note that this will only apply to new incidents created via email.
I have added a few extra logging components to the flow to log exceptions and the values being mapped, for easier troubleshooting.

SDEmailMonitoring_LD.JPG

 


A Stored Procedure to Monitor Agent Upgrade Status Over-Time

  • Introduction
  • Design
  • SQL code
  • Usage
  • Conclusion

Introduction

Monitoring agent upgrade progress over time is an important task in large environments, and it is beneficial in any environment for understanding the managed computer pool's behaviour and the effects of seasonality and human behaviour on task execution, agent upgrades or patch compliance.

In this article we will create a stored procedure that automatically finds the highest version of each agent and records the count of computers with the agent installed and the count of computers still to be updated.

Design

The data will be collected in a custom table named 'TREND_AgentVersions'. If the table does not exist, it is created automatically when the procedure runs.

The data collection should not happen many times a day, so to avoid this we check whether the last recorded data set was taken within the last 23 hours. If so, we simply return that last dataset to the caller; if not, we collect fresh data and return the fresh data to the caller.

The gathered data itself is based on the Basic Inventory data class 'AeX AC Client Agent'.

We currently track the following agent versions:

  • Symantec Altiris Agent (core)
  • Altiris Inventory Solution agent
  • Altiris Software Update Agent (Patch Management agent)
  • Altiris Software Management Solution Agent

Other agents could be added, such as the Symantec Workspace Virtualization Agent; this can be done easily by amending the select code in the procedure.

The table will contain the following columns:

  • _exec_id
  • _exec_time
  • Agent Name
  • Agent Highest Version
  • Agents Installed
  • Agents to upgrade
  • % up-to-date

The last field is the result of a computation that could be done at run time (when we select data from the table), but I have decided to store the result so that the information is readily usable for SMP reports and other consumption by users.
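
For reference, the same percentage can be recomputed at run time from the stored counts; since they are stored as varchar, they need to be cast first. A small sketch against the latest snapshot (it assumes the table has already been populated at least once):

-- Recompute [% up-to-date] at run time from the stored counts (columns are varchar, hence the casts)
select [Agent Name], [Agent Highest Version],
       cast(cast([Agents Installed] as int) - cast([Agents to upgrade] as int) as float)
       / cast([Agents Installed] as int) * 100 as [% up-to-date (run time)]
  from TREND_AgentVersions
 where _exec_id = (select max(_exec_id) from TREND_AgentVersions)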

SQL Code

Here is the full procedure code:

GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE procedure [dbo].[spTrendAgentVersions]
  @force as int = 0
as
/* 
      STORED AGENT COUNTS
*/
-- PART I: Make sure underlying infrastructure exists and is ready to use
if (not exists(select 1 from sys.objects where type = 'U' and name = 'TREND_AgentVersions'))
begin
  CREATE TABLE [dbo].[TREND_AgentVersions](
    [_Exec_id] [int] NOT NULL,
    [_Exec_time] [datetime] NOT NULL,
    [Agent Name] varchar(255) NOT NULL,
    [Agent Highest Version] varchar(255) not null,
    [Agents Installed] varchar(255) NOT NULL,
    [Agents to upgrade] varchar(255) NOT NULL,
    [% up-to-date] money
  ) ON [PRIMARY]

  CREATE UNIQUE CLUSTERED INDEX [IX_TREND_AgentVersions] ON [dbo].[TREND_AgentVersions] 
  (
    [_exec_id] ASC,
    [Agent Name]
  )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = 
OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]

end

-- PART II: Get data into the trending table if no data was captured in the last 23 hours
if ((select MAX(_exec_time) from TREND_AgentVersions) <  dateadd(hour, -23, getdate()) or (select COUNT(*) from TREND_AgentVersions) = 0) or (@force = 1)
begin

  declare @id as int
    set @id = (select MAX(_exec_id) from TREND_AgentVersions)

  insert into TREND_AgentVersions
  select (ISNULL(@id + 1, 1)), GETDATE() as '_Exec_time', _cur.[Agent Name], _cur.Latest as 'Agent highest version', _cur.[Agent #] as 'Agents installed', isnull(_old.[Agent #], 0) as 'Agents to upgrade',
       CAST(CAST(_cur.[Agent #] - isnull(_old.[Agent #], 0) as float) / CAST(_cur.[agent #] as float) * 100 as money) as '% up-to-date'
    from 
      (
        select [Agent name], COUNT(*) as 'Agent #', max(a.[Product Version]) as 'Latest'
          from Inv_AeX_AC_Client_Agent a
         where [Agent Name] in ('Altiris Agent'
                    , 'Altiris Inventory Agent'
                    , 'Altiris Software Update Agent'
                    , 'Software Management Solution Agent'
                    )
         group by [agent name]
      ) _cur
    left join (
      select a1.[Agent name], COUNT(*) as 'Agent #'
        from Inv_AeX_AC_Client_Agent a1
        join (
            select [Agent name], max(a.[Product Version]) as 'Latest'
              from Inv_AeX_AC_Client_Agent a
             where [Agent Name] in (  'Altiris Agent'
                        , 'Altiris Inventory Agent'
                        , 'Altiris Software Update Agent'
                        , 'Software Management Solution Agent'
                        , 'Symantec Workspace Virtualization Agent'
                        )
             group by [agent name]
          ) a2
        on a1.[Agent Name] = a2.[Agent Name]
       where a1.[Product Version] < a2.Latest
       group by a1.[Agent Name]
      ) _old
    on _cur.[Agent Name] = _old.[Agent Name]
   order by [Agent Name]
   
end
select *
  from TREND_AgentVersions
 where _exec_id = (select MAX(_exec_id) from TREND_AgentVersions)
 order by [Agent Name]
 
GO

Usage

Copy the SQL procedure code above, or save the attached file, and run it against your Symantec_CMDB database.

Once the procedure is created on the server you can call it from a SQL task on the SMP, with the following command:

exec spTrendAgentVersions

Save the task and schedule it to run daily, during the night (anytime between 21:00 and 05:00). Personally I like to schedule it before 23:59 as this ensures the _Exec_time field matches the day when the results were collected. If you run the task past midnight the data will be shown for day <d> but the execution time (and date label in any UI) would show <d+1>, which can be confusing.
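If you want to look at the collected history directly (for example to chart it in Excel or an SMP report before a custom UI is available), a plain select against the trending table is enough; this sketch only uses the columns created by the procedure above, and the agent name is just an example:

-- Hedged example: full collected history for one agent, oldest run first.
-- Run 'exec spTrendAgentVersions @force = 1' if you need to force a new data point.
select [_Exec_time], [Agent Highest Version], [Agents Installed], [Agents to upgrade], [% up-to-date]
  from TREND_AgentVersions
 where [Agent Name] = 'Altiris Agent'
 order by [_Exec_id]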

Conclusion

With a daily schedule you can now track the agent upgrade status of your computers over time. But in order to show the data in a visually appealing manner you will need a custom user interface, which will be the subject of another article or download!

How to Customize WinPE 4.x in Altiris 7.5


##########################################################################################################

This is NOT supported by Symantec, so remember to back up the files in case you want to revert back.

##########################################################################################################

How to modify WinPE 4.x in Altiris 7.5 HF4

In this article, you'll learn:

  • How to add tools and utilities into WinPE x86 (since most of these tools are not available in x64)
    • VNC
    • RocketDock
    • Sysinternals Tools
    • Unix Utilities
    • Notepad
    • Regedit
    • BGInfo
    • Q-Dir (WinPE version of explorer)
  • How to add WinPE optional packages, like: PowerShell, .Net, etc.
  • How to change WinPE background picture
  • Display the Symantec GUID, Computer Serial Number (Dell since that is what we use) in the BGINFO
     

*All the files I use in my environment are included at the end

 

My WinPE Environment:

In our WinPE environment we decided to add extra tools to help us troubleshoot problems while in WinPE. We also added extra information to the background (BGInfo) so it is easy for our techs in the field to identify a computer when deploying to a lab with 100+ computers. Normally, when a computer is managed this is not a problem, but for new and replacement computers, where Altiris uses the MININT name (the serial number in 7.5), the extra information helps identify the machine and send tasks to it.

After trying different configurations we settled on the configuration shown in the picture below. We display general computer information, plus the serial number, MININT name and Altiris GUID.

We also have Microsoft components such as PowerShell and .Net added to the WinPE environment, and for remote connections we decided to use VNC for now.

WinPE.jpg

 

Important File Locations to Know:

The folder and file locations below assume the PXE server is installed on a separate server (so not on the NS). Since I don't have an environment where the PXE server and the NS are on the same server, I can't tell whether the locations below change in that case, so keep that in mind.

[Altiris Install Dir] - assuming it is on default locations [C:\Program Files\Altiris]

 

  • On the Notification Server - (these are the files you need to modify)
  • Location of the WinPE Background Picture: [winpe.bmp] and [winpe.jpg]
  • [Altiris Install Dir]\Deployment\BDC\bootwiz\Platforms\WinPE\x86\Optional\Boot
     
  • Location to add the utilities and tools:
  • [Altiris Install Dir]\Deployment\BDC\bootwiz\oem\DS\winpe\x86\Base
     
  • Location of the [WINPE.WIM] file:
  • [Altiris Install Dir]\Deployment\BDC\WAIK\Tools\PETools\x86
     
  • Location of Automation Files:
  • [Altiris Install Dir]\Notification Server\NSCap\bin\Win32\X86\Deployment\Automation\PEinstall_x86
     
  • On the Site Server (PXE) - (the files below get overwritten every time you rebuild the WinPE environment from the package server)
  • Location of the WinPE Background Picture: [winpe.bmp] and [winpe.jpg]
  • [Altiris Install Dir]\Altiris Agent\Agents\Deployment\SBS\Bootwiz\{GUID}\cache\bootwiz\Platforms\WinPE\x86\Optional\Boot
     
  • Location to add the utilities and tools:
  • [Altiris Install Dir]\Altiris Agent\Agents\Deployment\SBS\Bootwiz\{GUID}\cache\bootwiz\oem\DS\winpe\x86\Base
     
  • Location of the [WINPE.WIM] file:
  • [Altiris Install Dir]\Altiris Agent\Agents\Deployment\SBS\Bootwiz\{GUID}\cache\WAIK\Tools\PETools\x86

 

Getting ready for the WinPE customizations:

Before starting the customization, make sure you have done the following steps:

  1. Download the attached files from this article "WinPETools.zip"
  2. Read the notes at the bottom of the article that explain what the files (scripts) do, in case you would like to add or remove tools and functionality
  3. Have access to your NS, Package and PXE server
  4. Have a reference computer ready, since you will need to install the ADK, and you don't want to do that on the server
    a. Install the Windows Assessment and Deployment Kit (ADK) for Windows 8 (which is WinPE 4.0) on a reference computer to add the extra components into WinPE; instructions for the process are in the section “Adding PE Optional Components Ref into WinPE”
    b. To download ADK: http://www.microsoft.com/en-us/download/details.aspx?id=30652
  5. If you decide to use VNC as a remote option, the KB link in “Adding Tools to the WinPE” explains the process. If you decide not to use it, then you need to do the following:
    a. Delete the files: ultravnc.ini, winvnc.exe, vnchooks.dll
    b. Delete or comment the following lines 19 and 20 from the file “WKU_WinPE.bat”
        i.  echo Launching VNC server...
        ii. start x:\Utilities\winvnc.exe

 

So let's get started:

Once the pre-configurations above are in place, there are only 3 steps:

  • Adding PE components
  • Adding the extra tools, like BGInfo, RocketDock, etc.
  • Build the WinPE environment

 

Adding PE Optional Components Reference into WinPE:

I noticed that after building a standard WinPE using the Altiris tools, components like PowerShell and .Net were not enabled by default. This presents a problem if you are trying to use PowerShell or .Net scripts. One of the problems I found was that if you run a PowerShell script from the console, it will display that it ran successfully, like the picture below:

powershell.jpg

But after looking at the details of the job, I noticed the output properties show that powershell.exe is missing or not installed, so the script never ran. Since we use PowerShell scripts and .Net tools, I decided to add the extra components into WinPE; below are the steps to do it.

For more information on building and modifying the WinPE environment, see the following links:

There are 2 files inside the .ZIP attached to this article:

Before running the batch script “WinPESettings.bat” you will need to:

  • Make sure ADK is installed
     
  • Create the following structure on the reference computer root
    • C:\WinPEx86
    • C:\WinPEx86\mount
       
  • Copy the file “WINPE.WIM” from the notification server to “C:\WinPEx86”.
    • NS File location: [Altiris Install Dir]\Deployment\BDC\WAIK\Tools\PETools\x86 - for the x86 WINPE.WIM.
       
  • Copy the “WinPESettings.bat” to the reference computer root
     
  • Browse to > Start > All Programs > Windows Kits > Windows ADK
    • Run: Deployment and Imaging Tools Environment
    • This will open a CMD with the right environment variables set, just go back to the root and run the WinPESettings.bat file from the open CMD
       
  • If you decide to add the “unattend.xml” to set the resolution, you will need to comment out lines 78, 79 and 80 of “WinPESettings.bat”; once the batch file has added all the components, copy the unattend.xml to the following locations:
    • C:\WinPEx86\mount
    • C:\WinPEx86\mount\Windows\System32
    • Once you have added the xml file, unmount the WINPE.WIM image by running the following command in the open CMD
    • Dism /Unmount-Image /MountDir:"C:\WinPEx86\mount" /commit

Once you are done with the WINPE.WIM file, you will need to move it back to the Notification Server.

 

Here is just a sample of the batch file; the full code is attached to this article:

ECHO ***********************************************************
ECHO ** Mount Windows PE boot image "winpe.wim"               **
ECHO ***********************************************************
imagex /mountrw c:\winpex86\winpe.wim 1 c:\winpex86\mount

ECHO ** Adding Package [WinPE-WMI]                            **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-WMI.cab"
ECHO ** Adding Package [WinPE-NetFx4]                         **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-NetFx4.cab"
ECHO ** Adding Package [WinPE-Scripting]                      **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-Scripting.cab"
ECHO ** Adding Package [WinPE-PowerShell3]                    **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-PowerShell3.cab"
ECHO ** Adding Package [WinPE-MDAC]                           **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-MDAC.cab"
ECHO ** Adding Package [WinPE-HTA]                            **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-HTA.cab"
ECHO ** Adding Package [WinPE-DismCmdlets]                    **
DISM /Add-Package /Image:"C:\WinPEx86\mount" /PackagePath:"[path to Windows Kits]\x86\WinPE_OCs\WinPE-DismCmdlets.cab"
ECHO ***********************************************************
ECHO ** Unmount the Windows PE Image                          **
ECHO ***********************************************************
Dism /Unmount-Image /MountDir:"C:\WinPEx86\mount" /commit

 

Adding Tools into WinPE

In this part you will prepare the environment with all the files, so the system is ready to build the WinPE.

Download the WinPETools.zip from the article and make sure you have all your settings ready.

For VNC, there are other articles out there that explain the process (http://www.symantec.com/connect/articles/winpe-21-remote-control-ultravnc); that one is not strictly necessary for 7.1 or 7.5, but the process is very similar. The major difference is that VNC changed the settings location from the registry to an INI file. All the VNC files are included in the download; all you need to do is build the ultravnc.ini. I am including the file, but I erased the password field.

Getting all files setup:

  • Login to the NS
    • Make sure you back up the original files, in case you want to revert back.

 

  • Extract the “WinPETools.zip”
     
  • Copy folder “Utilities” > [Altiris Install Dir]\Deployment\BDC\bootwiz\oem\DS\winpe\x86\Base
    • While you are on the “[Altiris Install Dir]\Deployment\BDC\bootwiz\oem\DS\winpe\x86\Base” folder
    • Edit the file “runagent.bat”
    • Add the following command on the bottom of the batch file

 

REM Set PowerShell to unrestricted
ECHO Starting PowerShell
IF EXIST x:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe x:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe Set-ExecutionPolicy Unrestricted

 

  • This will initialize the custom script to start rocketdock, bginfo, etc …
REM Start Utilities
ECHO Start Utilities
IF EXIST x:\Utilities\WKU_WinPE.bat x:\Utilities\WKU_WinPE.bat

 

  • Copy background pictures > [Altiris Install Dir]\Deployment\BDC\bootwiz\Platforms\WinPE\x86\Optional\Boot
    • I didn't add the background picture, since it is customized for my environment, but you just need to create one and replace the files in the folder above
       
  • Replace the modified “WINPE.WIM” > [Altiris Install Dir]\Deployment\BDC\WAIK\Tools\PETools\x86

 

Process to build the WinPE files on a PXE Site Server

Since in our environment we don’t have the PXE services running on the NS, we assign a site server as our PXE server.

So the process to get your custom files inserted into the WinPE environment is:

  1. Add all the customization files/settings in the locations shown above (on the Notification Server)
  2. In 7.5 the BootWiz folder is itself part of a package, so normal package replication will pick up the changes, though it could take up to 24 hrs to update site servers. You can either wait or force the replication by running the Scheduler tasks:
    1. Run taskschd.msc
      1. Click [Task Scheduler Library]
      2. Right-click > run the following tasks
    2. NS.Package Distribution Point Update
    3. NS.Package Refresh
    4. NS.Delta Resource Membership Update
  3. Once the replication tasks run, go to the package server to make sure the following package was updated:
    bcd_pck.jpg
  4. Once the package is updated on the Site Server [Package]
    1. Go to the Altiris Console > Settings > All Settings > Deployment and Migration > Preboot Configurations
    2. Click on the WinPE environment you would like to re-create and click the button [Recreate Preboot Environment]
      preboot.jpg
  5. Go to the Site Server [PXE]
    1. Check if the following process is running, you will need to wait until this is done before trying to boot into WinPE using PXE
      bootwiz.jpg
  6. Once this is done, run a task to boot the client computer into PXE and see if all the changes you made are correct.
  7. Don't forget that there is also an X64 folder that may need to be modified if you use 64-bit PE

 

Attached Articles files:

Utilities Files:

  • Q-Dir – file explorer for WinPE
  • RocketDock – strip down version for WinPE
  • SysInternalsSuite – the tools that we use in our environment, so if you need more just add them to this folder
  • UnxUtils – a couple of Unix tools that come in handy when needed

Utilities Files:

  • BGInfo – display the info on the background
  • ultravnc.exe – remote desktop
  • ultravnc.ini – settings for vnc
  • wmiexplorer – allows you to explore the WMI environment in WinPE

 

  • WKU_Get_BGInfo_Vars.vbs
    • This script retrieves the machine Serial Number and Altiris GUID
    • Creates a file called “SetVars.bat”
  • WKU_WinPE.bat
    • First runs WKU_Get_BGInfo_Vars.vbs
    • Runs SetVars.bat (created by the previous call)
    • Creates a hostname file called “hostname.txt”
      • We noticed that BGInfo was not reading the right hostname in WinPE, so this was the workaround we found.
    • Registry imports for custom settings
    • Start
      • RocketDock
      • VNC
      • BGInfo

Extra Files

  • WinPESettings.bat – this mounts the WINPE.WIM file, adds the preset packages and unmounts it
  • unattend.xml (optional) – this has just one setting, so that the resolution of your WinPE environment is set to 1024x768 32-bit

 

Conclusion:

After BootWiz.exe finishes building the WinPE environment, you can test it and make sure all your configuration is working. I hope this helps people add tools and the Microsoft components into WinPE to take full advantage of WinPE 4.

I also want to thank everyone who posted articles before; they helped me develop this process.

A Stored Procedure to Monitor Inventory Status Over-Time

  • Introduction
  • Design
  • SQL code
  • Usage
  • Conclusion

Introduction

Monitoring inventory updates (refresh rates) over time is an important task in large environments, and is beneficial in any environment. In this article we will create a stored procedure that allows us to automatically track how many agents have sent an inventory (per inventory type) in the past few weeks, to complement the built-in inventory status reports.

Design

The data will be collected in a custom table named 'TREND_InventoryStatus'. If the table does not exist it will be created automatically when the procedure runs.

The data collection should not happen many times a day, so we first check whether the last recorded data set was captured within the last 23 hours. If so, we simply return that last dataset to the caller; if not (or if the @force parameter is set to 1), we collect fresh data and return it to the caller.
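If you do need a fresh data point regardless of the 23-hour check (for example while testing), the @force parameter defined in the procedure below can be used; a minimal example:

exec spTrendInventoryStatus @force = 1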

The data gathered itself is based on the ResourceUpdateSummary table, for the following inventory types:

  • Basic Inventory (from the core agent)
  • Hardware Inventory
  • Operating System Inventory
  • Software Inventory
  • User Group Inventory

If you have custom inventory classes that follow a standard naming convention (for example 'MyCompany - ...') you could also add those as a specific type to the procedure, as sketched below.
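As a hedged illustration of what such an addition would be based on, the procedure below counts computers per inventory type using the ResourceUpdateSummary/DataClass join shown here; the 'MyCompany - %' pattern is purely an example prefix:

-- Hedged sketch: computers whose custom inventory (example prefix 'MyCompany - %')
-- was refreshed in the last 28 days, using the same join the procedure uses.
select COUNT(distinct(rus.ResourceGuid)) as 'Updated in last 4 weeks'
  from ResourceUpdateSummary rus
  join DataClass dc on rus.InventoryClassGuid = dc.Guid
 where dc.Name like 'MyCompany - %'
   and rus.ModifiedDate > GETDATE() - 28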

The custom table storing the tracking data will contain the following columns:

  • _exec_id
  • _exec_time
  • Inventory Type
  • Computers
  • Updated in the last 4 weeks
  • Not updated in the last 4 weeks
  • % up-to-date

The last two fields are the result of a computation that could be done at run time (when we select data from the table) but I have decided to store the data so that the information is readily usable for SMP reports and other consumption by users.

Important note! I have chosen 4 weeks (28 days) as the threshold here. This is a good starting point and you could change it; however, there are no plans to support such customisation in the upcoming custom UI that displays the gathered data.

SQL Code

Here is the full procedure code, named spTrendInventoryStatus:

GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE procedure [dbo].[spTrendInventoryStatus]
  @force as int = 0
as

-- PART I: Make sure underlying infrastructure exists and is ready to use
if (not exists(select 1 from sys.objects where type = 'U' and name = 'TREND_InventoryStatus'))
begin
  CREATE TABLE [dbo].[TREND_InventoryStatus](
    [_Exec_id] [int] NOT NULL,
    [_Exec_time] [datetime] NOT NULL,
    [Inventory Type] varchar(255) NOT NULL,
    [Computer #] int not null,
    [Updated in last 4 weeks] int NOT NULL,
    [Not Updated in last 4 weeks] int NOT NULL,
    [% up-to-date] money
  ) ON [PRIMARY]

  CREATE UNIQUE CLUSTERED INDEX [IX_TREND_InventoryStatus] ON [dbo].[TREND_InventoryStatus] 
  (
    [_exec_id] ASC,
    [Inventory Type]
  )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = 
OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]

end

-- PART II: Get data into the trending table if no data was captured in the last 23 hours
if ((select MAX(_exec_time) from TREND_InventoryStatus) <  dateadd(hour, -23, getdate()) or (select COUNT(*) from TREND_InventoryStatus) = 0) or (@force = 1)
BEGIN

  declare @id as int
    set @id = (select MAX(_exec_id) from TREND_InventoryStatus)

  declare @basinv int, @basinv_utd int, @os int, @os_utd int, @hw int, @hw_utd int, @sw int, @sw_utd int, @ug int, @ug_utd int

  select @basinv = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'AeX AC%'
  select @basinv_utd = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'AeX AC%' and rus.ModifiedDate > GETDATE () - 28

  select @os = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'OS %'
  select @os_utd = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'OS %' and rus.ModifiedDate > GETDATE () - 28

  select @hw = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'HW %'
  select @hw_utd = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'HW %' and rus.ModifiedDate > GETDATE () - 28

  select @sw = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'SW %'
  select @sw_utd = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'SW %' and rus.ModifiedDate > GETDATE () - 28

  select @ug = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'UG %'
  select @ug_utd = COUNT(distinct(ResourceGuid))
    from ResourceUpdateSummary rus
    join DataClass dc on rus.InventoryClassGuid = dc.guid
   where dc.Name like 'UG %' and rus.ModifiedDate > GETDATE () - 28

  insert into TREND_InventoryStatus
  select (ISNULL(@id + 1, 1)), GETDATE() as '_Exec_time', 'Basic Inventory' as 'Inventory type', @basinv as 'Computers', @basinv_utd as 'Updated in last 4 weeks', @basinv - @basinv_utd as 'Not Updated in the last 4 weeks', cast(cast(@basinv_utd as float) /  cast(@basinv as float) * 100 as money) '% up-to-date'
   union
  select (ISNULL(@id + 1, 1)), GETDATE() as '_Exec_time', 'OS Inventory', @os, @os_utd, @os - @os_utd, cast(cast(@os_utd as float) /  cast(@os as float) * 100 as money)
   union
  select (ISNULL(@id + 1, 1)), GETDATE() as '_Exec_time', 'HW Inventory', @hw, @hw_utd, @hw - @hw_utd, cast(cast(@hw_utd as float) /  cast(@hw as float) * 100 as money)
   union
  select (ISNULL(@id + 1, 1)), GETDATE() as '_Exec_time', 'SW Inventory', @sw, @sw_utd, @sw - @sw_utd, cast(cast(@sw_utd as float) /  cast(@sw as float) * 100 as money)
   union
  select (ISNULL(@id + 1, 1)), GETDATE() as '_Exec_time', 'UG Inventory', @ug, @ug_utd, @ug - @ug_utd, cast(cast(@ug_utd as float) /  cast(@ug as float) * 100 as money)

END

select *
  from TREND_InventoryStatus
 where [_Exec_id] = (select MAX(_exec_id) from TREND_InventoryStatus)
 order by [Inventory type]

GO

Usage

Copy the SQL procedure code above or save the attached file to run it against your Symantec_CMDB database.

Once the procedure is created on the server you can call it from a SQL task on the SMP, with the following command:

exec spTrendInventoryStatus

Save the task and schedule it to run daily, during the night (anytime between 21:00 and 05:00). Personally I like to schedule it before 23:59 as this ensures the _Exec_time field matches the day when the results were collected. If you run the task past midnight the data will be shown for day <d> but the execution time (and date label in any UI) would show <d+1>, which can be confusing.
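To review the collected history for a single inventory type (for trending in Excel or an SMP report), a plain select against the trending table is enough; the sketch below relies only on the columns the procedure above creates:

-- Hedged example: history of the 'Basic Inventory' type, oldest run first.
select [_Exec_time], [Computer #], [Updated in last 4 weeks], [Not Updated in last 4 weeks], [% up-to-date]
  from TREND_InventoryStatus
 where [Inventory Type] = 'Basic Inventory'
 order by [_Exec_id]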

Conclusion

With a daily schedule you can now track the inventory status of your computers over time. But in order to show the data in a visually appealing manner you will need a custom user interface, which will be the subject of another article or download!

Changes Introduced in aila2 - Version 2

  • Introduction
  • New data points
  • UI changes
  • Conclusion
  • References

Introduction

A new version of aila2 was uploaded to the aila2 download page [1] a few moments ago. This version 2 of the tool introduces a few new data points in the JSON file as well as changes in the UI. We will review here the additions and improvements made for this release.

New data points

Whilst troubleshooting some Managed Delivery execution problems with colleagues (for 2 distinct customers, but both had the very same issue: too many hits on the Inventory Rule Management web-service, causing serious problems on the console and server) it became clear that this data point was missing from the aila2 result set. So I added it.

Then there was another need that I had not addressed even in the C version of the tool (aila, the predecessor of aila2, which only ran on Linux): detailed Task Management hits, related to the Task Server interfaces.

Also, the IIS return codes are another "cheap" data point that allows you to quickly check whether there are big issues with the server (status 50x) or a lot of authentication hits (normally from task servers or console usage) causing HTTP 40x errors (because of the nature of the challenge response, we first hit the server without passing any credentials, and the server sends back a challenge in the form of an access denied error).

And finally, one of the most useful data points from aila was brought into aila2: the IP address table. This table (which is in fact a sorted dictionary, so the data is sorted by IP address) is checked for every line that is parsed, in order to increment the hit counter per IP address. The data is then stored in a sorted list (keyed by hit count) that is parsed in reverse order to generate the 'IP hitters - top 20' data point and the IP hit file, which is saved under the running directory using the IIS log file name (so parsing u_ex140306.log would produce an IP list under u_ex140306.txt).

Note that the IP list feature doesn't work when data is passed to aila2 via stdin (this is a feature I'll probably have to implement at some point ;).

After a fair few words, let's jump into the visual documentation!

UI Changes:

Addition 1

Provided you have more than 8 IP addresses that connect to your server, you will see in this table the 20 entries that have produced the most hits on the SMP (or Site Server if you run aila2 against task or package servers). This is quite helpful for finding rogue agents that need a re-install (or uninstall, for example if a 7.1 sub-agent was pushed to a 7.0 agent).

ip_hit_20.png

Addition 2

taskmanagement_graph.png

Addition 3

The Inventory Rule Management interface is quite regular, and the timings are quite important in some cases, so this chart contains the hit count, average and max time-taken values. Note that on a loaded production system you may not see those columns (if you have 50,000 hits at an average of 2,000 milliseconds, the average column will barely register on the chart).

inventory_rule_management.png

Addition 4:

http_status.png

Addition 5 and 6:

tables.png

Change 1:

Added Task Management and Inventory Rule Management to the hourly chart. It doesn't show so well on a small test server, however it really helps point out critical times for specific interfaces that can be tied back to configuration (inventory rule requests kicking off at the same time from 5,000 computers will show clear spikes on the Inventory Rule line).

hourly.png

 

Change 2

Mime type is not generally the most interesting chart, so I changed the colour to make it a little more appealing ;).

mime_types.png

Change 3

The navigation menu lists the new additions on the page. The in-page navigation is becoming a little lengthy, but it's still useful (please let me know if you feel differently).

Navigation.png

Conclusion

With more detailed information available on the hourly chart and the detailed analysis page, and with the full list of hit counts per IP address, the aila2 toolkit can now offer more insight into what is happening in the environment, either at a glance (if you use the calendar view [3]) or with the detailed viewer.

This can be used to visually spot changes in an environment that are out of the ordinary. And here's proof, again from a test environment:

quiz.png

Please add a comment if you spotted something out of the ordinary (sorry for the small image - we have to work within Connect's limits here).

References

[1] aila2: A c# program to analyze Altiris IIS log files

[2] {CWoC} aila2-version1 sources files

[3] aila2-web: Introducing the Calendar View and siteconfig json file

Email Address Associated with Machines That Do Not Have the Latest Version of the SMA


A community member wanted to be able to identify machines that are not running the latest version of the SMA so that they could then email the user to inform them that they needed to connect to the network in order to bring their machine into compliance with corporate standards.

Although there is probably a cleaner way to achieve this by using a similar method as the agent upgrade filter, the following two queries fulfil the requirement:

 

1.  Run this query to find out what the latest version of the SMA is in the environment (the SMA upgrade policy needs to be enabled and at least one computer must have been upgraded for this to work): 

  SELECT DISTINCT [Product Version] FROM Inv_AeX_AC_Client_Agent
  WHERE [Agent Name] = 'Altiris Agent'
 

2.  Run this query to find the users email address

  SELECT vu.Email FROM vUser vu
  JOIN Inv_AeX_AC_Primary_User pu ON pu.[User] = vu.Name
  JOIN Inv_AeX_AC_Client_Agent ca ON ca._ResourceGuid = pu._ResourceGuid
  WHERE ca.[Agent Name] = 'Altiris Agent'
  AND ca.[Product Version] < 'enter latest version here'
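If you prefer not to paste the version in manually, the two steps can be combined: a hedged sketch that derives the highest reported 'Altiris Agent' version with MAX() in a subquery, using only the tables and columns from the queries above (note that, as above, versions are compared as strings):

  SELECT DISTINCT vu.Email, ca.[Product Version]
  FROM vUser vu
  JOIN Inv_AeX_AC_Primary_User pu ON pu.[User] = vu.Name
  JOIN Inv_AeX_AC_Client_Agent ca ON ca._ResourceGuid = pu._ResourceGuid
  WHERE ca.[Agent Name] = 'Altiris Agent'
  AND ca.[Product Version] < (SELECT MAX([Product Version])
                              FROM Inv_AeX_AC_Client_Agent
                              WHERE [Agent Name] = 'Altiris Agent')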

List Select as a Replacement for Grid Components


Intro

In response to folks requesting it, this is my solution for getting around the use of the grid component in forms. Since IE 11 and the grid component don't get along well and some workstations may only have IE 11, or have it set as the default browser, I had to find a way around using the grid component. I only have experience using this method with WF 7.5, but it should work just fine with 7.1.

 

The downside to this approach is that you can't do in-line editing like you could with a grid component set to edit, but it's very useful as a main listing page. You list the important identifying information about each item and link to another form that shows the full detail of the item. You can also allow editing on the detail page if editing is needed.

 

Attached to this page is a demonstration workflow package showing how this idea works in practice. All screen shots and examples are derived from this package.

 

FinalProduct.png

 

Short Version (TLDR)

Don't use grids, use the "ListSelect" component instead. Above the ListSelect, put an html merge with a manually created table as your header row. In the item format of the ListSelect, create a table with a single row of whatever columns you need for your data. Add single or multiple outcomes from the ListSelect to suit your needs and use the outcome paths to go to detail pages, edit pages, delete confirmations, whatever.

 

Long Version

This longer explanation follows the path of the attached demo package. As prior workflow knowledge is expected, some steps are left out. Also, some HTML and CSS would be helpful if implementing this method.

 

For the demo, I chose to display something that should work in all environments: user info for users in the administrators group as listed in the Process Manager. This should be enough data to give you an idea of how this method works, but if you want to list more, simply change the name of the group in the first component. Other than the name of the group, I also changed the output variable name from the default. As good practice, I run the collection through a sort component just so it displays in an order you would expect on the next web form.

 

FormBuilder.png

 

In the form, I have only 3 components: a close button (to end the process), an HTML merge, and a ListSelect. The HTML merge acts as the header row for our "table" of results display in the list select.

 

The HTML merge is going to be the basis for our table, so take your time with it and do it right. As it's the header row, you need to know what columns you want to display, how wide they should be, and what format the text should have. For this simple demonstration, I'm only displaying the user Display Name and Email Address. I have a third column to hold a clickable link, but more on that later.

 

After looking at my data and deciding on formatting (which is a lie, really, I just went with defaults, here), I made an HTML table per the source below. A couple of important points here. First, using CSS, I specified fixed column widths on the table data tags. Second, the table tag itself gets some styling; fixed layout to ensure the TD widths stay put and an overall width of the table (make sure you account for any border in your overall width).

 

Header.png

 

HeaderSource.png

 

The ListSelect is the real meat of our simple form. Here we show its basic setup with our data type, collection, and outcome path.

 

ListSelect.png

 

A quick note about sizing and placement of your list select. Assuming you leave the overflow-y property to scroll, once you get enough data to scroll, a bar will appear on the right side. For this reason, you should size the list select component 15 pixels wider than your HTML merge. Note that this isn't the size of the tables inside these components, but of the component itself. Also, because the list select has left margin of a pixel or two, you may want to intentionally misalign the HTML merge and the list select so that the table and words line up better.

 

For this demo, I only created one outcome, or rather, renamed the default outcome. You can create multiple outcomes, though, depending on your needs. One for edit and one for delete would be a common example. Each outcome will populate a special variable to be used in the "Item Format" section of the list select component using the format of "_outcome_OUTCOMENAME_" where OUTCOMENAME is your outcome path name. The "Text" box is the text that will be displayed where the special variable is used and doesn't have to match your path name. These can be used anywhere in the formatting and don't have to be in a separate column if you don't want it to be.

 

Outcomes.png

 

In "Item Format" we setup a how we want each row to be displayed. In this case, we are going to setup an HTML table that is nearly identical to the one from our HTML merge header row. In fact, you can copy and paste the source from the HTML merge into the item format source. If you do that, though, make sure you remove any text formatting like bold or centering that you don't want in your data rows.

 

Instead of the column heading names that the HTML merge uses in its table, the item format table needs to be populated with actual data. The list select component populates another special variable while editing item format that we use to populate the data. This is the "_select_list_item_" variable. Its structure will be based on the data type specified on the list select "Functionality" tab. In this case, we delete the column name and drag over the corresponding data from the _select_list_item_ variable.

 

ItemFormat.png

 

Below you can see the finished source after replacing the column names with row data and removing some formatting. Because each row is getting its own HTML table element, there will be some white space between each row. To adjust for this, the main table element gets more CSS styling in the form of "Margin-bottom: -2px". Depending on any border options in your table, you may need to adjust that number of pixels up or down.

 

ItemFormatSource.png

 

Conclusion

There you have it: a simple replacement for the grid component that will function as a row selector and works in any browser. By adjusting style you can make it appear how you want.

Using JavaScript to Highlight Fields Based on Dynamic Values


On some forms, it may be helpful to provide visual cues to the end user about how the form will be validated.  To accomplish this in the past, I've used methods such as exiting the form, performing validation in an evaluation model, marking any validation issues with error flags and setting error responses, and returning to the form with flags and messages in tow.  While much of this will likely still be necessary, using JavaScript to show users (as they fill out the form) which fields are required, or which fields have changed, may make it easier for the end user to complete your workflow form.

On to the fun stuff:

I've attached a demo package if you'd rather look at that than at the picture-book explanation below.

For this proof-of-concept project, all we need is to initialize some variables, use a form component, provide some means of reloading the form (I've looped back into the form with a demo component) and close the workflow with an end component.  

3-12-2014 4-50-55 PM_0.png

For the form components, we'll use a drop-down component, a text box, and a subdialog component in order to accomplish a page postback (and a regular button).
3-12-2014 5-14-37 PM.png

Now the effect we want to see is that when a value is pre-loaded into a drop-down or text box, that we can highlight that directly when the value is changed.  

To control these style changes, some javascript is placed into the behavior panels of the components we want to control.

Dropdown Component:
Custom Events Overview.  Ensure Control IDs are set for the components you want to control.
3-12-2014 6-19-18 PM.png

onmousedown: (this ensures that the selection array doesn't stay highlighted until the blur event)
3-12-2014 6-20-28 PM.png

onblur: (upon leaving (blur) this component, the check is made vs the preserved variable to check for changes.)
3-12-2014 6-26-18 PM.png

Text Box:
Custom Events Overview.  Ensure Control IDs are set for the components you want to control.
3-12-2014 6-22-31 PM.png

onfocus: (this sets the text box back to white so you aren't typing in a yellow or red box)
3-12-2014 6-23-47 PM.png

onblur: (upon leaving (blur) this component, the check is made vs the preserved variable to check for changes.)
3-12-2014 6-24-49 PM.png

Then set up the form body.  Right-click on any blank space on the form, and "Edit Form".  
This action is taken so if a page postback occurs (leaving the page, loading a subdialog with refresh, embedded action, etc), we check the intended state of highlighting on the components on page load.
3-12-2014 6-31-36 PM.png

On the Behavior tab, add a Body Custom Event.
3-12-2014 6-32-41 PM.png

Add this entry as an onload event:
3-12-2014 6-33-49 PM.png

So the result is that the original values load with the page:
3-12-2014 6-45-19 PM.png

But if we change these values and continue with the form, we'll see indicators that the values have been edited:
3-12-2014 6-47-00 PM.png

Or, that the values are invalid:
3-12-2014 6-48-02 PM.png

Because we also created a javascript evaluation on the form body "onload" event, if we enter and return from the subdialog, or click the button component (that wraps right back into the form), the evaluations should take place and mark the fields accordingly.

Epilogue:
I'm not proficient with javascript as a tool apart from using it to extend the capabilities of Workflow.  I'm sure there are better ways to write the javascript than the way I've done it here - feel free to clean it up or change it, make use of functions, etc.  Hopefully this at least shows another way to use javascript and the form builder together to make for a better end-user experience.


ServiceDesk 7.5 SP1 Purge Utility


The intent of this project is to Search, Select and Purge Report Processes along with their associated references.

 

The Project is built on ServiceDesk 7.5 SP1 and is dependent on the associated Database Schema.

The Project includes a method for discerning the ProcessManager ConnectionString. If this fails, you can define the ConnectionString in the Project Properties.

 

The following is a description of the Project features:
 

Search.png

The 'Search ReportProcess' feature uses plain text pattern matching against the ReportProcessID, ProcessTitle and ProcessDescription data.
Found ReportProcesses will display in the 'Purge Candidates' section of the Form.
If the 'Description' scope is included with the Search, the text used for the search will be prepended to the ProcessTitle and enclosed with '{...}'.
This provides feedback that the search text was found in the ProcessDescription.
 

 

Range.png

The 'Select Range by Type' feature uses a 'Start:' and 'End:' item from a pull-down list.
This will perform a range type of search with the following conditions:

1) The ReportProcess Type is the same. For example: IM-
2) A 'Start:' and an 'End:' are selected.
 

 

DateTime.png

The 'Cut Off DateTime' feature selects any ReportProcess with a ProcessStarted date that is less than the defined DateTime.
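For reference, here is a hedged, read-only sketch of what that cut-off selection amounts to against the ProcessManager database; it uses only the ReportProcess columns named in this article, the cut-off date is just an example, and the actual purge of associated references is handled by the attached project:

-- Hedged preview only: ReportProcesses that an example cut-off of 2013-01-01 would select.
select ReportProcessID, ProcessTitle, ProcessStarted
  from ReportProcess
 where ProcessStarted < '2013-01-01'
 order by ProcessStarted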

 

Purge.png

The 'Keep Articles' switch allows for the retention of Knowledge Base articles.
Knowledge Base articles that are created using the 'Submit Knowledge Base Entry' form are ReportProcesses and will be purged if they are selected.
However, if the Knowledge Base process is Closed and the 'Keep Articles' option is checked, the KB Article will remain untouched in the Knowledge Base tab.

Begin the Purge...

Once confirmed, the Purge begins and will eventually end with a Count summary of the purged items.
One can 'Go Back', which starts over with a fresh copy of the remaining ReportProcesses.

 

———————–
PERFORMANCE
———————–
In an underperforming test environment, one can expect approximately 1,000 items purged every 10 minutes.

ServiceDesk 7.5 Track Assignments via Process Type Actions


Summary

This project is intended to be used as a Process Action from within a ServiceDesk Process View.
The core functionality is provided by the 'Track All Assignments' SQL Integration component.
The SQL query is a collection of (3) methods with the results aligned via SQL UNIONs.

The 'Current' method uses the stored procedure dbo.GetTaskAssignments().
The 'Task' method is an analogue of the 'Current' method that itemizes all of the Task assignments.
The 'Ownership' method compares DatePosted entries in the ReportProcessComment table and provides an itemized list.

The results should align with the Process History in the ticket, with the added benefit of calculating the assignment duration.
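To illustrate the 'Ownership' idea only (not the attached component itself), here is a hedged sketch of measuring the time between consecutive DatePosted entries for one ticket; DatePosted and the ReportProcessComment table come from the description above, while the ReportProcessID column name and the example ticket id are assumptions made purely for illustration:

-- Hedged sketch: gap in minutes between consecutive comment timestamps on one ticket.
-- The ReportProcessID column name and the ticket id below are assumed for illustration.
select c.DatePosted as 'Assignment start',
       DATEDIFF(minute, c.DatePosted,
                (select MIN(n.DatePosted)
                   from ReportProcessComment n
                  where n.ReportProcessID = c.ReportProcessID
                    and n.DatePosted > c.DatePosted)) as 'Duration (minutes)'
  from ReportProcessComment c
 where c.ReportProcessID = 'IM-000123'
 order by c.DatePosted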

 

NOTE: The following Connect Forum article provides a more rigorous method for tracking Assignments:

https://www-secure.symantec.com/connect/videos/sen...

Running Symantec Ghost Over a Linux based PXE server


Let me start off by saying this was done to reduce the need for loading network drivers on the WinPE discs, and so far it has worked for most systems without issue. That said, we have had a few machines that displayed issues with the GPT partition. When I spoke to some Symantec techs/engineers about the issue, I was informed that the Linux binaries have not been updated since the product was first released with them - it would be nice to have them updated. In my opinion, it is more convenient to boot to one source and update the custom Linux distro that is used to boot and run Ghost than to change discs for each model of machine. We ended up with 10 different discs to image our various machines, as each one had a disc made to address the differences in the network driver. The Linux distro has offered the benefit of not having to set up boot media for each new machine, only needing an update when the driver had changed enough for the newest system to have an issue (once in 2 years). This distro and the Ghost binaries have been working with MOST machines - about 95% of what we are currently using and receiving.

 

NOW for my directions on how the system was set up:

Steps to set up the Linux PXE Ghost Server:

 

  1.  Install the current version of Edubuntu – with LTSP.
  2. On the initial install, set the following settings to manual on eth1 so it will connect to the network properly and maintain the proper hostname:

Hostname:<Hostname>

IP Address: <Static IP if assigned>

Netmask:  <Designated netmask>

Gateway: <Gateway if needed>

DNS Servers:  <Specified DNS Server if you have one>

Domain Lookup: <mydomain.com>

Start the update/upgrade process by pressing Ctrl-Alt-T (Which will open a terminal window)

                First set the root password by doing the following commands:

                Type: sudo su - <Enter>

                Type: (this user's password)  <Enter>

                You should get a prompt that looks like this:

terminal1.png

 

Type:  apt-get update <Enter>

Then when it finishes updating you can type:

apt-get upgrade

Answer yes by pressing y <Enter> to do the upgrade when asked.

 

Use this prompt to install software:

The template for installing software is as follows:

apt-get install <Package Name>

(Read all prompts and answer appropriately)

  1. Install the following software – You can check the guide and see if there are specific instructions on loading, otherwise load using command line apt-get install, or through synaptic:
    1. Synaptic
    2. Samba
    3. Cifs-utils
    4. Webmin
    5. Bind9

 

These two packages are not available from the repositories and must be installed separately.

 

  1. Upload the DD_RHELP folder – and create a link for DD_RHELP in /usr/bin
  2. Add the Ghost files – also install all files from the Ghost package in /usr/bin
  3. Open a terminal and run: sudo updatedb
  4. In the terminal type: locate memdisk
  5. Copy memdisk from syslinux to the following path: /var/lib/tftpboot/ltsp/i386/
  6. Create and copy an iso of the disc you want to load to the same directory as above, then add the labels as defined below.

 

You will need to point the DHCP server to the path the Ghost disc is looking for on the local machine, and set up a Samba share that is linked and shared on the local system:

 

The access is defined in the following path:  /etc/fstab

This is an example of the file used to set-up this connection:

//<serverlocation1>/e  /media/samba  cifs  username=<Username>,password=<Password>  0  0            

//<serverlocation2>/images /media/samba2 cifs username=<Username>,password=<Password> 0 0

Or alternatively:

<Serverlocation>:E /media/samba cifs user=<Username>,pass=<Password> 0 0

 

Modifications to PXEBOOT server

 

1. Enabled local DNS server

                a) apt-get install bind9

                b) wget http://prdownloads.sourceforge.net/webadmin/webmin... and then run: dpkg -i webmin_1.590_all.deb

                c) navigate to http://localhost:10000/ in web browser

                                i) Servers -> BIND DNS Server

                                ii) Click on Create master zone

                                iii) Domain Name: <Domain>     

                                                Email address: doesn't matter, but has to be present - root@localhost.localdomain

                                iv) Click Create

                                v) Click on the <Domain name> zone

                                vi) Navigate to Address records (A records)

                                vii) Create an A record for <Name of shared system> with the PXEBOOT server's IP for the PXE network (192.168.0.254)

                                viii) Save, Apply zone, and Apply Configuration

                d) verify functionality by going to the command line on the PXEBOOT server and typing "nslookup <Serverlocation1>.<Domain Name> localhost" and verifying that the answer returned contains the proper IP

2. Modified DHCP server config to use local DNS server

                a) Open /etc/ltsp/dhcpd.conf

                b) Modify the line for dns-name-server to reflect the IP of the PXEBOOT server - 192.168.0.254

                c) Modify the line for domain-name to reflect the following - “<Domain Name>”

 

3. Exported SAMBA share of <serverlocation1> & <Serverlocation2>

                a) Ensure that <Shared location> is mounted to /media/samba

                b) /etc/fstab should contain the following: # Samba

//<serverlocation1>/e  /media/samba  cifs  username=<Username>,password=<Password>  0  0

//<Serverlocation2>/images /media/samba2 cifs  username=<Username>,password=<Password>  0  0

               

i) verify using the 'mount' command and by navigating to /media/samba and verifying the contents

                c) create a user account for Samba to use (Administrator)

                                i) useradd Administrator

                                ii) smbpasswd -a Administrator

                                iii) Enter the password stored in /etc/samba/credentials

                                                Repeat the above for pxeghost

                d) Modify /etc/samba/smb.conf to contain the following:

 

    [images]
        comment = "ISO Boot Images"
        path = /media/samba
        guest ok = yes
        browseable = yes
        valid users = Administrator

    [images2]
        comment = "ISO Boot Images"
        path = /media/samba2
        guest ok = yes
        browseable = yes
        valid users = pxeghost

 

 

                e) Restart samba (/etc/init.d/smbd restart)

 

 

 

Setting up the Menu for selection:

 

  1.  Go to the following path:
    1. /var/lib/tftpboot/ltsp/i386/pxelinux.cfg
  2. Open the file – default (will need to be root to change it)
  3. The original file will start up the terminal – you can add labels and customize this list – see examples below:

 

Original default file:

 

default linux

prompt 0

 

label linux

  kernel vmlinuz

  append ramdisk_blocksize=4096 initrd=initrd root=/dev/ram0 ramdisk_size=524288 console=ttyS3

  ipappend 1

 

 

Customized default file:

 

                default vesamenu.c32

timeout 600

ontimeout <Selected Label> e.g. Optiplex790

prompt 0

menu include pxelinux.cfg/pxe.conf

label BootLocal

                localboot 0

                text help

                Boot to Local Hard Drive

                endtext

 

 

##label ltsp

##kernel vmlinuz

##append ro initrd=initrd.img root=/dev/nbd0 init=/sbin/init-ltsp quiet splash plymouth:force-splash vt.handoff=7 nbdroot=:ltsp_i386

##text help

##Linux Terminal Server Project Boot

##endtext

 

label Optiplex790

kernel memdisk

append initrd=Optiplex790.iso iso raw

text help

Optiplex 790 - Any version

endtext

 

label Optiplex780

kernel memdisk

append initrd=Optiplex780.iso iso raw

text help

Optiplex 780 - Any version

endtext

 

label Custom Version Linux

kernel memdisk

append initrd=CustomLinux.iso iso raw

text help

Custom Version of Linux

endtext

 

label ubcd4win

kernel memdisk

append initrd=UBCD4Windows.iso iso raw

text help

Ultimate Boot CD 4 Windows

Endtext

 

label puppy Linux distro

kernel /<Folder for OS>/vmlinuz

initrd //<Folder for OS>/initrd.gz

append boot=live pfix=copy nosmp root=nbd0 nbdroot=//<Folder for OS>

 

 

  1. Make sure you do the following commands in the command line:
    1. updatedb
    2. Locate vesamenu.c32 (Once you know the location, you can copy this file using command line or nautilus) – Copy this file to /var/lib/tftpboot/ltsp/i386/
  2. You will also need to create a pxe.conf file in,  /var/lib/tftpboot/ltsp/i386/
    1. See the following example:

 

                menu title PXE Ghost Server

MENU BACKGROUND pxelinux.cfg/logo.png

##logo.png needs to be 640x480px

noescape 1

allowoptions 1

prompt 0

menu width 40

menu rows 14

menu tabmsgrow 24

menu margin 10

menu color border          30;44      #40000000 #00000000  std

menu color title                1;35;44    #0099FF #00000000 std

 

Appendix Puppy

 

Getting Puppy to boot over PXE (Taken from the following link)

Create YOUR distribution of Puppy Linux and install any tools you might need or want. When setting up YOUR distro, be sure to install the Ghost (Linux) binaries and then create your save file, as this is how the save file will contain what you need to run Ghost.

These binaries were copied into the /usr/bin folder and made executable.

 

Save your distro in the #.#.#.sfs file, or whatever you named it.  Then create a disc of the current distribution, or a thumb drive (you can still pull the required files from either).

Here are a few recommended programs that may be useful:

  1. Testdisk
  2. mc
  3. nmap
  4. chntpw
  5. ddrescue
  6. etherape
  7. iptraf

http://sirlagz.net/2011/06/13/how-to-boot-puppy-5-2-5-over-pxe/

(I have used the same instructions for the current version of Puppy Linux 5.7.)

Download the .iso for the most current distro 5.2.8 (Ubuntu Compatible)

 

Mount it and extract the following files:

 

  • initrd.gz
  • vmlinuz
  • #.#.#.sfs

               

 

Mount the image (.iso) as follows

 

mount -o loop lupu_5.2.5.iso /mnt

 

Then we need to work with these files:

 

The Lupu_5.2.5.sfs (or whatever version you use) will need to be packed inside the initrd.gz file, and we do that by doing the following:

 

  1. Make a working directory - mkdir /puppycustom
  2. Change into that directory - cd /puppycustom
  3. Then extract the initrd.gz into this directory - zcat /Location/of/initrd.gz | cpio -i -H newc -d
  4. Move the Lupu_5.2.5.sfs file into this directory - mv /location/of/Lupu_5.2.5.sfs /destination/folder
  5. Then re-pack the initrd.gz - find | cpio -o -H newc | gzip -4 > ../newinitrd.gz

 

 

Then add the vmlinuz and newinitrd.gz to the appropriate location in the tftpboot directory:

/var/lib/tftpboot/ltsp/i386/Designatedfolder -   /var/lib/tftpboot/ltsp/i386/GhostPup/

 

Then the following information needs to be added to the following file:

 

/var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default

 

 

LABEL GhostLinux

MENU LABEL Ghost Linux Distro

Kernel GhostPup/vmlinuz

Append initrd=GhostPup/newinitrd.gz

text help

Linux Distro with Symantec Ghost

endtext

 

You can now boot from this distro on the PXEBoot Server

 

I found the need to create a bash script to mount the shared folders when booting into Puppy, so I created a file called ghostmount.sh with the following code:

 

#!/bin/bash

 

echo making and mounting the needed directories, Please wait ....

 

# make the directories to mount ghost images

 

mkdir /mnt/images

mkdir /mnt/images2

 

# Insert mount points into /etc/fstab

 

echo '//192.168.0.254/images /mnt/images cifs username=<username>,password=<Password> 0 0'>> /etc/fstab

echo '//192.168.0.254/images2 /mnt/images2 cifs username=<username>,password=<Password> 0 0'>> /etc/fstab

 

# Mount the new mount points from fstab

 

echo currently mounting:

 

sleep 2

mount -a

 

echo If there are no errors present, all drives mounted!

 

Please note the Puppy Linux version was customized, then a Puppy Live CD was made from that customized version, but some of the settings did not save (i.e. fstab settings and network/video settings); this is why the bash file above was needed.

Please see the section at the end that relates to how this was set-up for the Puppy Ghost Distro created to supply the PXE Boot server with access to these shares.

Notes:

  • Do ifconfig and see what IP the PXE server is giving for the internal network; this is the DHCP server address (in our case, this is the number given above).
  • The Samba service name could be smbd, smb, or samba – you may need to find the one that works for you.
  • This file allows the GUI selection once the system is booted to PXE.
  • This is searched for in the pxelinux.cfg folder – you can name the file whatever you want, just make sure it is reflected properly here.
  • There are 7 zeros here.
  • There are 8 zeros here.
  • This is the version, i.e. 5.4.3.
  • Used in my distro.
  • Ensure that you make this executable: chmod +x <file>, or through a GUI file manager (Thunar, Nautilus, etc.).

 

 

 

Hopefully this helps someone else by giving an option for setting up the service to image machines in an alternative manner, and builds support for updating the Linux binaries so they can be used with future technologies.

IT Analytics Advanced Configuration


O IT Analyticsé uma solução que complementa e expande a capacidade de oferecer análises e relatórios que são oferecidos pelo Altiris Client Management Suite.

Os recursos disponíveis atualmente dentro da solução permitem que os clientes extraiam o máximo de valor dos dados que estão armazenados na base de dados principal, geralmente chamada de "Symantec_CMDB".

Outros artigos existentes aqui no Connect abordam os procedimentos necessários para instalação e configuração da solução e esse não é o objetivo desse artigo; o objetivo aqui é um procedimento de correção que em algumas situações pode ser necessário aplicar.

O cenário inicial é que ao tentar abrir a página de configurações do IT Analytics, essa página não abre, e isso impossibilita alterar e adequar algumas configurações. A mensagem exibida é “Server Error in '/Altiris/ITAnalytics' Application”.

The path to open this page is Settings > Notification Server > IT Analytics Settings > Configuration.

configuracoes.png

Figure: IT Analytics configuration settings page.

 

Other symptoms may include problems installing, processing, and uninstalling cubes, as well as problems accessing reports that retrieve information from them. Other errors can also be related, such as problems connecting to the server.

This type of problem usually occurs because information was changed or deleted in SQL Server Analysis Services (SSAS) or SQL Server Reporting Services (SSRS).

 

SQL Server Analysis Services (SSAS)

SSAS is used to build the On-Line Analytical Processing (OLAP) component of Microsoft SQL Server. This feature allows organizations to analyze and retrieve information that is meaningful to the business and that is spread across several different databases or tables. From there, SSAS allows multidimensional structures, called cubes, to be built to pre-calculate and store complex aggregations, and also allows mining models to be built to perform data analysis, with the goal of identifying valuable information such as trends, patterns, and so on.

Analysis Services is essentially a multidimensional view of the database, from the concept of building cubes with the help of Business Intelligence Development Studio (BIDS) through to the Unified Dimensional Model (UDM) and MDX.

In SSAS, dimensions are a fundamental component of cubes. Dimensions organize data around an area of interest, such as computers, purchases, software compliance, and so on. Dimensions in SSAS contain attributes that correspond to columns in dimension tables.

To illustrate what a multidimensional structure is, just picture the information organized by dimensions: for example, the number of computers where certain vendor fixes have not yet been applied, viewed for the months of April and May and, if necessary, with an additional dimension by year to get a cube view of the business.

 

SQL Server Reporting Services (SSRS)

SQL Server Reporting Services offers a complete set of tools and services that help you create, deliver, and manage reports for your organization, as well as programming features that allow you to extend and customize report functionality.

Reporting Services is a server-based reporting platform that provides functionality for a variety of data sources, with APIs that allow developers to use its reporting features in their own custom applications. Reporting Services is fully integrated with SQL Server tools and other components.

With Reporting Services you can create tabular reports, with charts, from relational or multidimensional databases or from XML data sources. It also offers the ability to schedule the processing of certain reports or to access them on demand.

 

Because the services above are essential for IT Analytics to work properly, any corrupted or inaccurate information will cause the configuration page to fail to open. The possible causes on the SQL Server side are:

  • The SSAS database was deleted and a new database with a different name was created, but this information was not entered on the configuration page in the Notification Server console before the procedure.
  • The SSRS database was changed in the Microsoft Reporting Services Configuration Manager, on the "Database" tab.
  • The SSRS report directory was changed or deleted.

 

An unofficial solution, which can be applied at your own risk, is to run a query to retrieve some information and then set other values in the database to allow the configuration page to load.

The first step is to retrieve the configuration information stored in the Symantec_CMDB database where IT Analytics is installed, such as the name of the SSAS server and the name of the SSRS server that IT Analytics is using.

USE Symantec_CMDB
DECLARE @Servidores_ITA NVARCHAR(MAX)
SELECT @Servidores_ITA = State
FROM Item
WHERE State LIKE '%<asserver%'
AND State LIKE '%<asdatabase%'
PRINT @Servidores_ITA

 

Check the displayed result and confirm whether the information is correct. If it is not, you can reset this information to a blank default, which will allow the page to load inside the Notification Server console.

The script that resets this information is below; it should only be used as a last resort, at the user's own risk, and only if the information found in the previous query makes sense in their environment.

The first part of the script finds the GUID of the IT Analytics item. If several results are returned, look in the description for "ITAnalytics Language Pack Installation".

USE Symantec_CMDB
SELECT *
FROM Item
WHERE State LIKE '%<asserver%'
AND State LIKE '%<asdatabase%'

 

The second part of the script is where we actually change the values in the table in the database. Do not forget to change the GUID to the one returned by the query above.

USE Symantec_CMDB
UPDATE [Item]
SET State = '<item>    <ASServer />    <ASDatabase />    <RSURL />    <RSFolder />    <RSAuth />    <RSBrowserRole />  </item>'
WHERE Guid = 'ce7c50d0-7f50-413d-bb4a-44e28b60245a'

After running the steps above, the configuration page can be opened and you can continue using the solution normally.

Thank you!

Dynamic Product Detection Trace


Dynamic Product Detection (DPD) is the method that Shavlik’s scan engine uses to determine what supported products are installed on the machine.  This tool was created for troubleshooting patch scan issues where Shavlik needs to know what is going on during the DPD process.

Running this tool as part of the data-gathering exercises that the PMS Team will ask you to run through will speed up any PMS 7.1.2 or later rule issues you report to Support, as the PMS Team can then contact the vendor right away without needing to reproduce the issue and then run this tool against their own machine.

 

Note:  This tool requires .NET Framework v4.0.30319 or later in order to work.

 

Steps:

  • Extract the attached DPDTrace.zip file into a folder on the root of C:\ on the problem machine.
  • Read Disclaimer.txt.
  • Open an Administrative Command Prompt window and change directory to C:\DPDTrace.
 
cmd1.png
 
  • Enter the following command, replacing {MACHINE_NAME}, {ADMIN_USER_NAME}, {PASSWORD}, and {PATCHTYPE} with the corresponding values ({MACHINE_NAME} must be the target machine that is having the detection problem).

          DPDTrace.bat {MACHINE_NAME} {ADMIN_USER_NAME} {PASSWORD} {PATCHTYPE}

 

Example of the command:

cmd2.png
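
For reference, a hypothetical invocation might look like the following; the machine name, account, and password are placeholders, not values from the screenshot:

          DPDTrace.bat WORKSTATION01 MYDOMAIN\patchadmin P@ssw0rd 1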

 

Notes:  Failure to supply any one of these values ({MACHINE_NAME}, {ADMIN_USER_NAME}, and {PASSWORD}) will cause the test to fail.

{ADMIN_USER_NAME} needs to be in the format domain\username

{PATCHTYPE} has the following possible values:

1  - Security patches

4  - Security tools

8  - Non-Security patches

9  - Security and non-security patches

13 - Security, non-security and tools

 

Other options:

If you want to use a specific hf7b.xml, just copy it into the HF7B subfolder of the extracted folder.

If you are in an offline environment, you must download the hf7b.xml file directly and place it in the HF7B subfolder of the extracted folder.

Link to latest HF7b File  http://xml.shavlik.com/data/hf7b.xml

 

If you need to scan with an older scan engine, you may do so by adding the VERSION number to the end of the command. If no version is specified, the 9.0.651 scan engine is used.

Possible values:

7.8.5

8.0.43

9.0.651

 

Example:  DPDTrace.bat {MACHINE_NAME} {ADMIN_USER_NAME} {PASSWORD} {PATCHTYPE} {VERSION}
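
For instance, a hypothetical run against the 8.0.43 engine (again with placeholder values) could be:

          DPDTrace.bat WORKSTATION01 MYDOMAIN\patchadmin P@ssw0rd 9 8.0.43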

 

  • When the command line is run, a window titled 'Rename HF.1 Log' will appear with an OK button. Do not close this window while the scan continues.
 
window.png
 
  • When the scan has completed, the command prompt window will say 'Test Complete. Please zip up HFCLI folder and send it back to Support.' Please verify that an XML document has been created in the HFCLI folder. If it has, please zip up the directory "C:\DPDTrace\HFCLI" and send it back for analysis.

cmd3.png
