tag:blogger.com,1999:blog-63125182024-02-28T13:50:14.211-05:00Systems Management BlogThe author is an ITSM consultant specializing in Microsoft SCOM, SCSM and Orchestrator.Anonymoushttp://www.blogger.com/profile/10489460865905469020noreply@blogger.comBlogger18125tag:blogger.com,1999:blog-6312518.post-68091815091163853772012-09-14T09:32:00.002-04:002012-09-14T09:37:47.327-04:00IT Services Management With Zenoss Core 4<o:p> </o:p>The latest development in <a href="http://www.jmacinc.com/" title="IT Services Management">IT Services Management</a> tools may well be the
release of Zenoss Core 4 (see this <a href="http://www.zenoss.com/about/news/press/Zenoss_Releases_Open_Source_Zenoss_Core_4.html" rel="" target="_blank">Press Release</a>).<br />
<div class="MsoNormal">
<br /></div>
Zenoss is a component-level IT monitoring tool with two versions: Zenoss
Core, a free and Open Source product, and the commercial version, Zenoss
Enterprise, which is sold by Zenoss Inc., the corporate sponsor of Zenoss Core. Large distributed IT departments will probably consider Zenoss Enterprise as it extends the Core with multiple collectors. As always, all components of the Zenoss 4 architecture are built from Open Source tools.<o:p></o:p><br />
<br />
Major improvements in the new release include:<br />
<ul>
<li>enhanced ZenPing suppression (essentially, the Python code in ZenPing was rewritten to include better layer 3 link attributes)</li>
<li><a href="http://www.nmap.org/" rel="nofollow" target="_blank">NMAP</a> for ICMP packet generation</li>
<li>a new auto-deploy script, making Zenoss Core 4 easier to install than past releases</li>
</ul>
The new release is initially available as the Core version.
Zenoss Service Dynamics version 4.2.2, which provides additional
analytics and resource management capabilities, is set to be released later this year.<br />
<br /><o:p></o:p>
Today's dynamic data center relies crucially on virtualization and cloud services,
and Zenoss continues to advance in that direction. For an overview of the
features of the Enterprise product, check out the <a href="http://www.jmacinc.com/downloads/Zenoss_Solution_Overview_Brochure.pdf" rel="nofollow" target="_blank">Zenoss Solution Overview Brochure</a>.<br />
<br />
I have found Zenoss to be a near-perfect tool to help IT departments meet their
monitoring needs in a more cost-effective manner. While my focus today is on Microsoft SCOM, Orchestrator and Service Manager, I encourage you to consider Zenoss as a long-term, thoroughly professional alternative.<o:p></o:p><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymoushttp://www.blogger.com/profile/10489460865905469020noreply@blogger.com3tag:blogger.com,1999:blog-6312518.post-68382404467062783082012-04-03T23:15:00.000-04:002012-04-03T23:15:39.670-04:00<br />
<div class="MsoNormal">
<u><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Slaying the SCOM Auditability Dragon</span></u><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;"><o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">In
the library of boring topics, software configuration auditing must surely rank
close to the top. While end user
applications such as messaging and database servers spark swarms of such
products, SCOM and its System Center siblings have yet to attract major
interest. The reasons for this are open
to debate, but there is no doubt that Microsoft’s
infrastructure products are rife with configuration interfaces and rich APIs
that make it all too easy for configuration drift to rear its ugly head. A
typical SCOM deployment will have over 10,000 rules and another 10,000 monitors. In fact, the job boards over the recent past have
been thick with openings for SCOM engineers (a search for “SCOM” on Dice
returns 253 postings), many of which are to rein in existing deployments
sinking under their own clumsy weight.</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">So
how much should you worry about “what changed” in your SCOM deployment? Well, the truth is that, in the absence of
tight change controls, SCOM deployments evolve like any other complex system in
your data center: instead of being a
ready tool for detecting and solving problems, the deployment becomes part of the noise
overwhelming administrators. It is
another reminder that even the monitor needs monitoring, and IT managers pay a
price for ignoring that reality.</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<u><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Enter
The Dude<o:p></o:p></span></u></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">These
questions of SCOM configuration controls were the subject of a recent conversation
I had with a very capable IT engineer, whom I enjoy calling the “Dude” on
account of his unshakable confidence and periodic exhibits of great flair. The specific point of our debate was the use of
naming conventions for SCOM authoring. (Don’t say you weren’t forewarned that some
dull topics were afoot!) Dude had
inherited a SCOM deployment that was perfectly devoid of documentation,
courtesy of his predecessors, all reputedly SCOM experts. Dude had earned his SCOM spurs in this school
of hard knocks: deciphering the meaning
of each SCOM alert, and investigating how to ensure SCOM alerted when serious problems occurred.</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">From
his first tentative overrides to his eventual mastery of the Authoring pane, he
carried on his company’s tradition of documenting “nada”. And “nada” means no run guide, no change log,
heck not even a lazy description sprinkled here and there. His disregard for documentation in any form was
an article of simple faith: why waste
time on documentation when you will always remember the changes you made, and
if somehow your memory fails you, then it’s no problem to figure it out when
the need arises. Such self-assurance was
so charming – ah, how good it is to work in IT when you’re outside the scope of
any quality controls, audits or best practices reviews. <o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Some
more context would be helpful. Dude was
responsible for only one SCOM management group, so the question of code consistency across
more than one system never crossed his mind.
Further, since Dude was the only SCOM administrator, it was actually
possible, assuming a heroic memory, that he might remember all his overrides
and customizations. Lastly, since it was
not a customer-facing application, Dude’s managers accepted the frequent
problems with SCOM as inevitable annoyances.
They never challenged Dude to set any quality goals or continuous
improvement plan. It was ‘the best of
times, it was the worst of times.’</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<u><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Under
the Covers<o:p></o:p></span></u></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Microsoft’s
<u>Management Pack Authoring Guide</u> for SCOM gives a clear explanation of why the
key attribute of every element in a MP is the ID field. As with SCSM, SCOM constructs a class
hierarchy ordering all the elements in every imported MP, and maintaining this
in memory is one of the key roles of the RMS.
When your primary tool for creating new rules and monitors is the Ops
console, SCOM conceals the ID field, automatically constructing one on the fly from
the element’s type and a GUID-like string of numbers. All the author can control in the console is
the Display Name and the Description. Nice
and easy, but not the best design for configuration auditing.</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">The
uniqueness function provided by a SCOM element’s ID is similar to the function
of a hostname in a DNS namespace, in that the hostname must be unique within a
DNS zone. Further, the ID of the MP anchors
the namespace in SCOM as the name of the zone does in DNS. The display name, on the other hand, is like
the comment field in that, just as you can give many hosts in a zone the same
comment (or no comment at all), you can give the same display name to any
number of SCOM elements – think “VIP Application Service Down Monitor.”<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<u><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Restoring
Order<o:p></o:p></span></u></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">The
Authoring Console is actually the intended tool for extending SCOM with custom
classes in enterprise deployments. The
Authoring Guide recommends that you standardize your IDs just as Microsoft does
in its own MPs. And how does that
work? Well, as I explained to Dude, the
basic process is:<br />
</span></div>
<div class="MsoListParagraph" style="mso-list: l0 level1 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">1.<span style="font-family: 'Times New Roman'; font-size: 7pt;"> </span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Create a concisely
named MP, using the model evident in all the SCOM system MPs and most of
Microsoft’s application MPs <o:p></o:p></span></div>
<div class="MsoListParagraph" style="mso-list: l0 level1 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">2.<span style="font-family: 'Times New Roman'; font-size: 7pt;"> </span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">create the element
in the Ops console as usual and host it in the well-named MP<o:p></o:p></span></div>
<div class="MsoListParagraph" style="mso-list: l0 level1 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">3.<span style="font-family: 'Times New Roman'; font-size: 7pt;"> </span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">export the MP to an
XML file<o:p></o:p></span></div>
<div class="MsoListParagraph" style="mso-list: l0 level1 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">4.<span style="font-family: 'Times New Roman'; font-size: 7pt;"> </span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">open the file in an
editor and locate the console-generated ID<o:p></o:p></span></div>
<div class="MsoListParagraph" style="mso-list: l0 level1 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">5.<span style="font-family: 'Times New Roman'; font-size: 7pt;"> </span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">replace all
occurrences of that ID with a new ID composed of the following parts, each
delimited by a period<o:p></o:p></span></div>
<div class="MsoListParagraph" style="margin-left: 1.0in; mso-list: l0 level2 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">a.<span style="font-family: 'Times New Roman'; font-size: 7pt;">
</span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">the
ID of the MP<o:p></o:p></span></div>
<div class="MsoListParagraph" style="margin-left: 1.0in; mso-list: l0 level2 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">b.<span style="font-family: 'Times New Roman'; font-size: 7pt;">
</span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">the
element type (such as “Group”, “Rule”, or “Monitor”)<o:p></o:p></span></div>
<div class="MsoListParagraph" style="margin-left: 1.0in; mso-list: l0 level2 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">c.<span style="font-family: 'Times New Roman'; font-size: 7pt;">
</span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">one
or more descriptors that indicate the essence of the element (such as AppLog.Error1102
for a rule that alerts when an Error event occurs in the Application log with
an ID of 1102)<o:p></o:p></span></div>
<div class="MsoListParagraph" style="mso-list: l0 level1 lfo1; text-indent: -.25in;">
<!--[if !supportLists]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt; mso-bidi-font-family: Verdana; mso-fareast-font-family: Verdana;">6.<span style="font-family: 'Times New Roman'; font-size: 7pt;"> </span></span><!--[endif]--><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">re-import
the MP</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">My
favorite editor for this purpose is Notepad++, which is freely available at </span><a href="http://notepad-plus-plus.org/"><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">http://notepad-plus-plus.org/</span></a><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">.<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">And
what have you accomplished? Your new IDs
now reflect a path in a clean namespace from a root or branch to a leaf node in
terms that are easy to follow. These IDs
will reinforce the design integrity of the deployment and protect the relevance
of the documentation. When you export
your MPs to a common area, you can search across multiple files for elements
targeting the same class or object, which can answer many questions, such as
where redundancies exist. If you ever
run the Alert report, your IDs will no longer read like the output of a
runaway random-number generator.</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<u><span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Y.A.M.
-- Young Admin Myopia – and the Upside of Planning<o:p></o:p></span></u></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">At
this point, Dude was rolling his eyes and groaning in disbelief. “Why all this fuss and bother if it produces
no visible benefit in the console?” He
might well have added, “And why spend my good time assisting future
administrators if it’s not in the job description?” Well, a good retort might follow the logic Robert
Duvall gave Sean Penn in “Colors” on the question of running down a hill vs.
walking down, but your mileage may vary.<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Returning
to the question we started with, if you ever hope to implement change
control audits for your SCOM configuration, especially if you have multiple
administrators who make frequent changes, it’s simply unrealistic to try to do it
with manual tools such as screenshots and spreadsheets. There are just too many elements to track without
an automated tool.<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">In
the meantime, I hope this article has persuaded you to make the extra
effort to convert your GUID-like IDs to a more meaningful namespace style. Here are some explicit examples of the two
styles we’ve been discussing.<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Class
ID: <o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Before:
UINameSpace172861e718614744a224992d5237de31.Group<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">After:
MyCompany.Messaging.Monitoring.MyAppServers.Group<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Rule
ID:<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Before: MomUIGeneratedRule3d232ca92a3a4e9e9c53e70c6838439b<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">After: MyCompany.Messaging.Monitoring.Alert.AppLog.Error1102<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Rule
Property Override ID:<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">Before: OverrideForRuleMomUIGeneratedRule083aa6fe88f34aedb0c871e3da8843a1ForContextUINameSpace59a7638242334435824fcf4ebbf3450bGroup75fba1242f0d4f038b8590e169a123d0<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">After: MyCompany.Messaging.Monitoring.Alert.AppLog.Error1102.Override.Interval.MyAppServers<o:p></o:p></span></div>
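A convention like this is also easy to check mechanically. As a hypothetical sketch (the regular expression below is just one way to encode the period-delimited scheme shown above, not an official SCOM rule), a few lines of Python can flag IDs that still look console-generated:

```python
import re

# One possible encoding of the convention: three or more period-delimited
# segments, each starting with a letter. Console-generated IDs contain
# no periods and therefore fail the check.
NAMESPACE_ID = re.compile(r'^(?:[A-Za-z][A-Za-z0-9]*\.){2,}[A-Za-z][A-Za-z0-9]*$')

def looks_namespaced(element_id):
    """Return True for namespace-style IDs, False for GUID-style ones."""
    return bool(NAMESPACE_ID.match(element_id))

# looks_namespaced("MyCompany.Messaging.Monitoring.MyAppServers.Group")  -> True
# looks_namespaced("MomUIGeneratedRule3d232ca92a3a4e9e9c53e70c6838439b") -> False
```

Run over exported MPs, a check like this gives a quick inventory of which elements still carry their console-generated names.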
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: "Verdana","sans-serif"; font-size: 12.0pt;">SCOM
has been around for a while, but it is never too late to adopt good practices. I hope this article has shown you that these
default IDs are not beyond your control, and they’re worth controlling.<o:p></o:p></span></div><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymoushttp://www.blogger.com/profile/10489460865905469020noreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-86754615762349201182010-02-26T10:38:00.000-05:002010-02-26T10:38:52.033-05:00IT Security Best Practices and Why Users Could Care Less<div style="font-family: Verdana,sans-serif;">In the November issue of <a href="http://portal.acm.org/citation.cfm?id=1592761.1592773&coll=ACM&dl=ACM&CFID=1335768&CFTOKEN=38420272" target="_Blank">Communications of the ACM</a>, Butler Lampson, a Technical Fellow at Microsoft Research, offers an incisive analysis of the sad state of affairs in Security Management. I'm not a security practitioner, but I've been around long enough to have witnessed many an IT department be brought to its knees for hours and days while battling a security breach. Lampson's simple argument is that security experts have set perfection as the goal, and both vendors and customers have bought into this assumption. He reasons that perfection is missing the point because security management is essentially "risk management: balancing the loss from breaches against the costs of security. Unfortunately, both are difficult to measure."<br />
<br />
That the costs are difficult to measure is generally obvious to anyone in IT, which typically doesn't even take the time to quantify the impact of component or application outages per hour [Numerous blog postings to follow!]. From the users' perspective, access and authentication interfaces become mere hindrances to doing productive work, so their universal response is to just say yes to any security question--no understanding or sense of ownership required. Lampson sums up the ramifications of this linkage between economic uncertainty and user indifference with an implicit rebuke of security vendors:</div><div style="font-family: Verdana,sans-serif;"></div><br />
<i style="font-family: Verdana,sans-serif;">The root cause of the problem is economics: we don’t<br />
know the costs either of getting security<br />
or of not having it, so users quite<br />
rationally don’t care much about it.<br />
Therefore, vendors have no incentive<br />
to make security usable.</i><br />
<br />
<span style="font-family: Verdana,sans-serif;">I hope this encourages you to download the whole article for yourself. I will be watching how Security managers and vendors solve these self-limiting practices in the future.</span><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-25982476717422916952010-02-25T11:07:00.002-05:002010-02-25T11:10:05.390-05:00Notes on Bill Powell's March 2009 Presentation -- Impact of Economic Uncertainty on Service Management Plans<div style="font-family: Verdana,sans-serif;"><span style="font-size: small;">In March 2009, Bill Powell of IBM presented a super draft presentation to the <a target="_blank" href="http://www.itsmfny.com/">NY LIG (Local Interest Group) of ITSMF USA</a> that couldn't have been more interesting. I wrote up my notes on his talk <a href="http://jmacinc.com/downloads/Impact%20of%20Economic%20Uncertainty%20on%20Service%20Management%20Plans.doc">here</a>, but I encourage you to <a target="_blank" href="http://itsmfusa.brighttalk.com/node/563">download the podcast and his slides</a> from the final presentation. Here's the summary text from the ITSMF conferences posting just to give you an overview.</span></div><div style="font-family: Verdana,sans-serif;"><span style="font-size: small;"><br />
</span></div><div style="font-family: Verdana,sans-serif;"><i><span style="font-size: small;">Amid the global financial turmoil and toughening business conditions, businesses continue to look to IT to provide leadership in responding to challenges and emerging opportunities. This presentation covers the implications and recommendations for leadership in an uncertain economy based on a recently completed IBM research of over 400 IT organizations. This session will focus on the US results, how Service Management is transforming from an IT to a business discipline, and provide practical advice on how best to weather the storm. </span></i></div><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1155659808970872962006-08-15T12:10:00.000-04:002006-08-15T12:47:07.486-04:00Security & Compliance: Microsoft's Acquisition of Whale CommunicationsI've been trying to keep one eye on the products vendors are introducing to address the growing IT problem of managing <strong>security and compliance</strong>, and the recent acquisition of Whale Communications by Microsoft is certainly interesting. Here's what I've learned.<br /><br />The Whale Communications products are essentially remote access solutions that are designed to provide high levels of security. The company has an excellent technical pedigree, and will probably become a profitable subsidiary in this arena. 
That said, this acquisition in no way moves Microsoft towards a role as a security and compliance vendor--the Whale suite is merely a set of alternative access methods for the huge base of Windows applications and servers, especially the ISA 2006 server.<br /><br />Here's Microsoft's own statement on their strategic direction, from a June 12 "Press Pass" interview with Ted Kummert, VP of Microsoft's Security, Access and Solutions Division (SASD):<br /><br /><em>Press Pass: What are the key customer pain points Forefront products seek to address?<br />Kummert: Customers are facing a broader, more complex and diversely motivated threat landscape. Attacks are increasingly advanced, more carefully targeted and often aimed at specific applications. In protecting themselves from these threats, customers are faced with a vast array of solutions, each of which will protect a given point against a specific threat. However, implementing such a combined collection of security solutions can provoke configuration and integration difficulties, making it more costly and complex to manage, control and report on the security of their environment.<br />By equipping customers with the ability to effectively secure their environment and securely enable the access scenarios their businesses require, Forefront products will help them unlock the full business value of IT applications and infrastructure.</em><br /><br />See the full interview at <a href="http://www.microsoft.com/presspass/features/2006/jun06/06-12Security.mspx">http://www.microsoft.com/presspass/features/2006/jun06/06-12Security.mspx</a><br /><br />In conclusion, IT managers are increasingly pressured to ensure that every system and application is secure from attack on the one hand, and in compliance with increasingly onerous governmental regulations on the other.
Truly helpful solutions will continue to come from those vendors who are automating these concerns directly, as opposed to reducing the threat surface of individual applications and protocols. JMACINC will continue to study the NetIQ (now Attachmate) Security & Compliance Suite of products as a feasible and cost-effective approach. See <a href="http://www.netiq.com/solutions/regulatory">http://www.netiq.com/solutions/regulatory</a> and <a href="http://www.netiq.com/solutions/security">http://www.netiq.com/solutions/security</a> for the full details.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1151620010889043702006-06-29T17:56:00.000-04:002006-09-04T13:10:16.700-04:00Netconnect 2006I attended the 6th annual NetIQ Global Users conference in May, held in Orlando, FL. It was a wonderful chance to meet a lot of the people that I have worked with over the past 6 years as a NetIQ employee in the Technical Support and the Professional Services departments. I was also flattered by the fact that several customers remembered me from my days in Technical Support. Staying on after the conclusion of the conference, I attended two days of training on NetIQ's Security Manager product.<br /><br />Netconnect 2006 was organized into five product demonstration and education tracks, including:<br /><br /><strong>IT Automation</strong><br />This track covered customizing, automating and tuning the AppManager (AM) Suite, including threshold automation and workload management with AppManager Performance Profiler (AMPP).<br /><br /><strong>IT Service Management</strong><br />This track focused on the convergence of security management with service level management, and how to transition your IT services from event management to service management.
Products covered within this track included AM, VigilEnt Policy Center (VPC) and Analysis Center (AC).<br /><br /><strong>Compliance and Risk Management</strong><br />This track reviewed the impact of governmental regulations on IT from many perspectives, including preparing for audits, rules of evidence, organizational policy management, etc. Products examined were the Security Compliance Suite and the Risk and Compliance Center.<br /><br /><strong>Security Management</strong><br />This track presented NetIQ's broad coverage of security monitoring, automated response and reporting. Products presented were Security Manager (SM) and the Security Compliance Suite.<br /><br /><strong>Change Control and Windows Administration</strong><br />This track reviewed issues of managing operational changes and enforcing IT policies for Windows systems in a cost-effective manner. NetIQ's product lineup in this area recently expanded with the introduction of Change Administrator for Windows. Additional products covered included NetIQ Change Guardian for Active Directory (CGAD), Directory & Resource Administrator (DRA) and Group Policy Guardian (GPG).<br /><br />As my personal goal in attending NetConnect was to broaden my awareness of NetIQ’s security and policy compliance solutions, I focused on the Security Management track, specifically SM. Workshops I attended included an overview of new features in SM 5.5; using the Security Compliance Suite to ensure compliance; integrating SM with AM, and the SM Essentials class. Here are some quick highlights.<br /><br />SM is made up of three major components – Event Manager, Intrusion Manager and Log Manager.<br /><br />-- Event Manager monitors the Windows event logs for security related incidents and executes responses and notifications based on best-practices rules. All incidents and responses are collected into a backend SQL database. 
This is the first phase of the complete event management life-cycle.<br /><br />-- Intrusion Manager builds on Event Manager to help secure systems from internal/external, malicious/benign, or accidental/policy-based violations. For example, Intrusion Manager lets you monitor root and administrator logon failures, security configuration changes, or possible buffer overflow attacks. The monitoring rules are based on security industry best practices, and can be extended to custom configurations.<br /><br />-- Log Manager copies all the information collected by Event Manager and Intrusion Manager to a separate SQL Server database designed for the analysis and reporting of security status across the enterprise. Log manager exposes knowledge articles on the analyzed events to supplement the administrator's understanding of each security scenario.<br /><br />A core feature of SM is its ability to monitor with "event correlation", in which rules are configured to cover sequences of events filtered for various attributes such as criticality, time and number of occurrences.<br /><br />Some of the new features in SM version 5.5 include:<br /><br /><strong>AutoSync Technology</strong><br />NetIQ provides new modules and module updates based on requested features or newly discovered security vulnerabilities. These updates are posted to the AutoSync Server. In the SM administrator console, these updates can be obtained by running the Module Installer. The Module Installer queries the AutoSync Server and distributes any updates available to the deployed agents as required.<br /><br /><strong>Agentless Monitored Computer </strong><br />The newest version of SM now supports agentless monitoring. An agentless computer is monitored by a proxy agent on another computer.<br /><br /><strong>Protection for Oracle Database Servers</strong><br />Security Manager now offers monitoring for Oracle database servers. 
Changes to security roles and user accounts can be monitored, along with the status of the audit subsystem.<br /><br />NetConnect was certainly worth the time and money to attend. I plan to release more detailed reports on the Security Compliance Suite in the months ahead.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1146235735937166452006-04-28T10:37:00.000-04:002006-04-28T10:48:55.946-04:00OIS 5.0<br />I'll be working on a review of Opalis Integration Server version 5.0, specifically covering the product's value and how it integrates with other Systems Management platforms, namely NetIQ AppManager and Microsoft Operations Manager.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1137165278630840122006-01-13T10:05:00.000-05:002006-02-12T22:04:41.086-05:00itSMF USA 2006 Conference & Expo -- Call for Presentations<br />This is the outline of topics for which the <a href="http://www.jupiterevents.com/itsmf/fall06/index.html">IT Service Management Forum</a> is seeking proposals, along with any other topics you believe would be of interest.
I post their outlines here as it is an excellent summary of the current areas of interest to the Systems Management industry.<br /><br /><strong>Metrics and Measurements</strong><br />- Performance Management<br />- Score Card, making metrics useful and practical, etc.<br />- Critical Success Factors when implementing the processes<br /><strong>Financial Management</strong><br />- Budgeting<br />- Costing of Services<br />- Activity Based Costing<br /><strong>Governance</strong><br />- BS 15000/ISO 20000<br />- SOX<br /><strong>Configuration Management</strong><br />- CMDB – how to plan and implement in a multiple authoritative database environment<br />- Auto discovery versus manual population of Configuration Items using CI attributes<br />- Success stories<br /><strong>ITIL v3<br /></strong>- Web-based products<br />- Lifecycle model<br /><strong>Service Level Management<br /></strong>- User Satisfaction measures<br />- Reliability of IT (Availability/Capacity)<br />- Service Level Management (SLAs)<br />- Writing and Negotiating OLAs<br />- Where to start with SLOs<br />- How to gather requirements<br />- Approaching the business partner<br />- How to measure and report<br />- Service Catalog<br />- How to find/define the services<br />- What should a catalog look like<br />- Who is the audience<br /><strong>IT Service Continuity<br /></strong>- Lessons learned/success stories from 2005<br /><strong>Maturity Models (CMM, Service, Asset Management)<br /></strong>- Making sense of them all<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1135643857415148942005-11-11T19:25:00.001-05:002010-02-23T22:26:32.352-05:00Thoughts on 'ITIL IT Service Management Essentials'<br />By Dipendra Bantawa.<br />
<br />
In October 2005 I attended the two-day workshop “ITIL IT Service Management Essentials (Certification Course)” conducted by <a href="http://www.pinkelephant.com">Pink Elephant</a> at the “IT Infrastructure Management Conference” in Orlando, Florida. The course spans two days, with an exam at the end of the second day. It introduces ITIL terminology and concepts and prepares students for the “Foundation Certificate in IT Service Management” exam, the prerequisite for other ITIL certifications.<br />
<br />
While I passed this exam on my first attempt, the instructor warned us that the failure rate is not insignificant. The most honest tip I can offer for passing is to put aside your career experiences and familiar terminologies and instead immerse yourself in ITIL concepts and terminology--unless, of course, you have participated in an ITIL-based project and are already well versed in the framework. I would also advise you to read some official ITIL materials, such as the “itSMF Pocket Guide”, in advance of the training.<br />
<br />
After a thorough introduction to ITIL definitions, the focus of the workshop turns to ITIL’s five operational processes and five tactical processes. The course invites attendees to imagine how they might use ITIL to fill gaps in, or reshape, their current IT processes, and our breaks were marked by lively exchanges on the differences between ITIL and other approaches.<br />
<br />
Many organizations around the globe have adopted and implemented ITIL processes successfully, and I highly recommend this workshop as a good starting point. For general information on ITIL activities and certification, check out the <a href="http://www.itsmf.com/">IT Service Management Forum</a>. I also recommend <a href="http://www.itilcommunity.com/">The ITIL Community Forum</a> as a peer resource.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1139796554137746792005-10-10T21:05:00.000-04:002006-02-12T22:04:00.416-05:00INTEROP 2005<br />By Dipendra Bantawa<br /><br /><a href="http://www.interop.org">Interop 2005</a>, the infrastructure trade show, came to New York in the fall. I will not cover each and every exhibitor, or every featured presentation. Instead I will focus on some products exhibited at the show that have good potential for being integrated with current Systems Management products.<br /><br /><strong>Avaya ExpertNet VoIP Assessment Tool</strong> is an example of a product for extending Systems Management deployments. Other vendors, such as NetIQ, also have VoIP assessment tools that assess the readiness of a network to support voice and video traffic. The reason I liked the product is that there is a huge need for pre-SM deployment assessment.<br /><br /><strong>NetQoS</strong> uses what it calls <strong>SuperAgent</strong> to monitor end-to-end performance passively without deploying any desktop or server agents. SuperAgent separates delays due to applications, network and server, enabling more rapid troubleshooting. This product almost acts like “SI”, the performance monitoring product from Netuitive.
It generates baselines from the metrics it collects and, after detecting a problem, automatically tries to investigate the cause by capturing filtered packet data, polling SNMP and running traceroutes.<br /><br /><strong>Network Instruments nTAPs</strong> provide monitoring devices with access to all network traffic without disrupting it or adding any traffic of their own. In simple terms, an nTAP makes a copy of the traffic flowing in and out, which can be fed to any analysis tool or monitoring system. This out-of-band monitoring approach can be helpful when the network/infrastructure is already under severe strain, since an in-band monitoring system would only add more traffic and use already over-utilized resources.<br /><br />We at JMACINC are always interested in serious forums such as Interop that offer a broad range of innovations for optimizing and extending the leading Systems Management products.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1107450957521016462005-02-03T11:53:00.000-05:002005-02-03T12:28:31.216-05:00JFFNMS: Another Half-baked Idea<br />I couldn't help noticing the offer for a "FREE Monitoring Tool Inside" on the cover of this week's issue of "Windows IT Pro" magazine, <a href="http://www.windowsitpro.com" target="_blank">a Penton publication</a>. Unfortunately, a cursory glance at the contents showed me that this FREE solution was more of a curiosity than a really useful tool.
<br />
<br />Why a "curiosity"? Not because it is assembled from a number of Open Source tools; that in itself is a good thing.
<br />
<br />What would make it a "really useful tool"? The ability to function as an agent on the server on a 24x7 basis. What this tool offers, as do so many other partial solutions, is a central poller rather than a locally deployed agent. I know many IT shops that have developed their own monitoring tools to run as an agent simply by converting their programs into Windows services. Without the ability to run locally on the agent, you can't alert when there are network problems, and you can't remediate those problems until the network is back up.
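To make the point concrete, here is a minimal, hypothetical sketch (the check and remediation functions are stand-ins, not part of any real product) of why a locally running agent matters: it can still remediate, and queue its alerts, while the management network is down.

```python
# Minimal sketch of a local agent cycle: run every check, remediate locally,
# and queue alerts when the central management server is unreachable.

def local_agent_cycle(checks, network_up):
    """Run every check; remediate locally; queue alerts if the network is down."""
    queued_alerts = []
    for name, (check, remediate) in checks.items():
        if check():
            continue                      # healthy, nothing to do
        remediate()                       # a local fix needs no network
        alert = f"{name}: failed check, remediation attempted"
        if network_up():
            print("ALERT SENT:", alert)   # would post to the management server
        else:
            queued_alerts.append(alert)   # hold until connectivity returns
    return queued_alerts

# Usage: a failing service with the management network down.
state = {"svc_running": False}
checks = {
    "svc": (lambda: state["svc_running"],
            lambda: state.update(svc_running=True)),
}
held = local_agent_cycle(checks, network_up=lambda: False)
print(state["svc_running"])  # remediated despite the outage
print(held)                  # alert queued, not lost
```

A central poller cannot do either of these things once the link to the monitored server is gone.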
<br />
<br />If you're a small shop seriously looking to implement a monitoring solution with as few pesos as possible, consider JFFNMS only as a curiosity.
<br />
<br /><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1090296469349755842004-07-19T20:45:00.000-04:002004-07-20T00:10:07.230-04:00Updates to AppManager 6.0 Review<br />Last month I posted a <a target="_blank" href="http://www.jmacinc.com/reference/reviews/am60/AM60.htm">review</a> of the beta version of AppManager 6.0. This month (July 15, 2004) I assisted NetIQ by demonstrating some of the new features of the product in a <a href="http://www.placeware.com/cc/netiq/view?id=ama0715&pw=CWJC2G">Placeware audiocast</a>.
<br />
<br />Some features of the product discussed in the audiocast were not covered in the review, including the following:
<br />
<br />1. Action_RunKS -- this knowledge script allows the administrator to launch up to three knowledge scripts dynamically from a job action
<br />2. Ability to resize the values tab of a knowledge script
<br />3. Knowledge scripts for collecting data for the Diagnostic Console. This integration with the Diagnostic Console includes both Exchange and NT server core metrics.
<br />4. Action Severity Configuration -- this refers to the ability within many Action knowledge scripts to fire only when the severity of the triggering event is within a range defined as part of the Action knowledge script.
<br />
<br />As I think of other features I've left out, I'll post them here.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1078269498797643252004-03-02T17:54:00.000-05:002004-03-02T19:12:20.436-05:002003 Market Research on SLAs from Veritas<br />In a September 2003 survey commissioned by Veritas, data-center managers and their non-IT counterparts at 604 organizations with at least 500 employees responded to questions about how they use SLAs. Veritas sponsored the survey as part of its "Utility Computing" product focus.
<br />
<br />I've culled some of their statistics here, but you can download <a href="http://www.veritas.com/news/press/FeatureArticleDetail.jhtml?NewsId=61273">the original report</a> for the full analysis.
<br />
<br />1. Over 59 percent of the respondents use SLAs.
<br />
<br />2. Among organizations that have SLAs in place, the following key IT performance areas are covered:
<br />
<br /> - Processing performance (37 percent)
<br /> - System availability and uptime (35 percent)
<br /> - Restoration times following an outage (29 percent)
<br /> - None of the above (39 percent)
<br />
<br />3. In twenty-five percent of cases, respondents said the SLAs were crafted without the involvement of the department heads.
<br />
<br />4. With respect to the IT reports generated in support of SLAs, non-IT managers used them as follows:
<br />
<br /> - to make department operations more efficient (34 percent)
<br /> - to work with the IT department to lower costs (28 percent)
<br /> - not used successfully (39 percent)
<br />
<br />The research covered a wide array of companies in the United States and in nine countries across Europe, the Middle East and South Africa. It was conducted by Dynamic Markets of the UK.
<br /><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1076591368460109672004-02-12T07:58:00.000-05:002004-02-12T08:11:59.746-05:00Insights on IT organizations and their use of metricsA recent posting by Dave Morgan, <a href="http://clickz.com/analysis/article.php/3299331">Common Mistakes in Selecting and Implementing Analytics Systems</a>, focuses on the application of metrics to web site analysis. However this article also contains useful observations for the issue of metrics for Systems Management. Dave is a highly successful consultant and entrepreneur in the media field, having founded <em>Real Media</em> in the 1990's and <a href="http://www.tacoda.com/about_tacoda.htm">Tacoda Systems</a> in 2001. This article will convince you that opinions like his are an essential counterweight to the hype and exaggeration offered by most vendors.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1076594771845922872004-01-27T20:55:00.000-05:002004-02-12T09:15:36.326-05:00Microsoft SQL Server 2000 Reporting Services LaunchAs described in a previous post (see correction below), the Reporting Services for SQL Server 2000 has now been released. The New York launch event today was very well attended, although technical difficulties caused frequent freeze-ups of the web broadcast portion of the event.
<br />
<br />From a Systems Management perspective, this product has great potential. Good reports are one of the core values of a Systems Management (SM) product. Reporting Services holds out the promise of maximizing the reporting potential of an SM tool.
<br />
<br />I am working with Reporting Services against NetIQ AppManager and Microsoft MOM databases to see how much effort is involved in producing effective reports. Results will be posted to the <a href="http://www.jmacinc.com/reference/samples.htm">Sample Reports</a> page of the JMACINC.com site.
<br />
<br /><strong>Correction to post Microsoft Reporting Products (Friday, January 16)</strong>: Because Reporting Services can read from OLE DB data sources, it can generate reports from OLAP databases such as Microsoft Analysis Services.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1074519569407752092004-01-16T16:00:00.000-05:002004-01-19T08:57:33.123-05:00Microsoft Reporting Products<br />Microsoft's upcoming release of Reporting Services, now in Beta 2, is an extension to SQL Server that could become a prominent tool for Systems Management deployments. In addition to custom reports for MOM 2004, reports can be created from any SQL Server or Oracle database, such as AppManager or MOM 2000. What IT managers will probably most like about Reporting Services is that it allows users to subscribe to reports on their own schedules.
<br />
<br />It seems it can't report against OLAP databases, but this may be an incorrect assumption. For this type of enterprise data reporting, the best alternative may be Crystal Analysis Pro, which also offers managed subscriptions and web-based authoring. Crystal Decisions was acquired in 2003 by <a target="_blank" href="http://www.businessobjects.com/">Business Objects</a>.
<br />
<br />One Systems Management vendor that is planning on releasing a reporting product that uses Reporting Services is <a target="_blank" href="http://www.netiq.com/">NetIQ</a>. More on this to follow.<div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6312518.post-1074520180664717032004-01-13T20:42:00.000-05:002004-01-19T09:07:02.606-05:00Presentation to NetIQ Executive Briefing in NYC, January 13, 2004<br />The text of the presentation is available as a Microsoft Word file <a href="http://sqlsrv1.jmacinc.org/blog/Docs/NY_UserGroup_20040113.doc">here</a>, and follows below:
<br />
<br />Good morning, everybody. My name is John MacLeod, and I've worked with two AppManager clients here in Manhattan. I'm going to discuss them with you as CUSTOMER CASE STUDIES 1 AND 2.
<br />
<br />I. CUSTOMER CASE STUDY (1)
<br />
<br />This customer's Systems Management project initially involved only the messaging department's migration to Exchange. In the initial migration design there were about 80 Exchange 5.x servers - these were grouped into a single backbone site and about a dozen regional sites. Eventually, the messaging agents numbered closer to 250 as we added dedicated public folder servers, a second backbone for the Asia Pacific region, Blackberry servers, Exchange 2K, IMS/World Secure servers for Compliance, and KVS servers for archiving. A little over two years into our deployment we merged with the infrastructure department, which added responsibility for about 150 NT4/W2K PDC/BDC/WINS and software distribution servers.
<br />
<br />For messaging statistics we relied on a third-party SQL Server solution which didn't scale very well: the calculations took longer and longer as we added more users, finally taking more than 24 hours to complete. So after a certain point in our migration we had to start extrapolating the totals from the data provided by a subset of the servers. This approach was eventually replaced with a custom solution developed in-house from Perl scripts, and today it is AppAnalyzer.
<br />
<br />The following are the five main challenges confronted by customer 1:
<br />
<br />A. For our core OS/HW monitoring, we needed to replace an existing monitoring product, Sentry (from Mission Critical Software), which was globally deployed to every Windows server in the firm. Sentry had the unfortunate habit of flooding the Windows event logs with meaningless information such as "Sentry is detecting an event" and "Sentry is escalating an event". Either the customer had not configured it effectively, or it was just too noisy a product for our busy environment. So we needed to find a product that was right-sized for our global deployment.
<br />
<br />B. We needed to find a product to monitor our Exchange system thoroughly. We knew that we didn't want any product that would require server-specific configuration files as was necessary with Microsoft's native tools (link-monitor and server-monitor). As you know, the connections in an Exchange site form a full mesh of two-way links, so a site with n servers has n(n-1) one-way queues. Since we were anticipating sites with about a dozen servers, and therefore well over a hundred queues, this would have been a nightmare to administer manually. So we needed to find a product that could see new servers and queues dynamically.
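As a quick worked example (assuming a full mesh with one queue per direction for each server pair), the queue count grows quadratically with the number of servers:

```python
# Queue count in a fully meshed Exchange site: every pair of servers is
# linked, and each direction of a link has its own queue, so n servers
# yield n*(n-1) one-way queues to watch.

def mesh_queues(n):
    return n * (n - 1)

for n in (4, 8, 12, 16):
    print(f"{n} servers -> {mesh_queues(n)} queues")
# A dozen servers already mean 132 queues to configure by hand.
```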
<br />
<br />C. We were expected to provide summary status reports to upper management. Because Exchange was fairly new to us at that time, we didn't know exactly what reports would be useful and necessary, so we wanted a product that provided a good range of application reports to get us going.
<br />
<br />D. We had to satisfy our SQL Server DBA team that our application supported Windows authentication and would not require 'SA' or 'Local Admin' privileges. Not every company will have as many DBA constraints as these, but fortunately AM was able to run under these constraints.
<br />
<br />E. We knew we needed a product that was open enough that we could extend it fairly easily. What we wanted was the functionality that the RunDOS script provided, as it allowed us to distribute local tasks easily and see the results of the tasks in the console.
<br />
<br />But we also wanted to customize the monitoring with new tasks that were specific to our environment. This capability, of course, was provided through the developer's console, which allowed us to do virtually anything that fit the model of a scripted job. As it turned out, we were able to script a crucial software distribution job, namely the rollout of new anti-virus pattern files from TrendMicro, as a totally automated system, which in fact pre-dated the AM module for ScanMail.
<br />
<br />The AM deployment: We initially deployed four QDBs, one on each of four SQL servers. These were deployed in our regional data centers. Each SQL server doubled as the MS for the region. The version of AppManager we started with was 2.0, and we finished with 4.3, so we went through two major upgrades and a few minor upgrades. The customer is today at 5.0.1. We trained each of the local Exchange administrators in handling the AM console and understanding the events. It was widely deemed a successful system within the firm.
<br />
<br />II. CUSTOMER CASE STUDY (2)
<br />
<br />The second customer was again a messaging department and again it involved a migration to a new Exchange system, but it was a simpler environment since we were monitoring only the Exchange 2000 application, and a separate department was monitoring the OS/HW with BMC Patrol. Also, Exchange 2000 is a bit easier to administer than Exchange 5.x because the Global Address List is no longer hosted on each Exchange server, so the database health and replication monitoring were moved to the department that maintained the AD. As with the first customer, the messaging system also supported Blackberry servers and was planning to use an anti-virus product, but the decision between TrendMicro and Sybari had not been made.
<br />
<br />We cut our teeth with an initial pilot that consisted of only 12 Exchange 2000 servers in four routing groups, distributed to four regional IT centers. In production this deployment grew to 60 servers with clustered mailbox servers, two dedicated routing groups for backbones, and dedicated front-end servers for OWA users. For Exchange statistics this firm had already selected AppAnalyzer.
<br />
<br />The following are the five main challenges confronted by customer 2:
<br />
<br />A. We had to do a head-to-head comparison of AM with MOM, but also consider other Exchange 2000 monitoring tools such as Quest, Bindview, Microsoft's native tools, etc. The reason a comparison to MOM specifically was required was that the firm was converting their OS/HW level monitoring from BMC Patrol to MOM. We were given 30 days for this comparison.
<br />
<br />B. The monitoring and reporting needed to be ready for Day One of the pilot deployment, which was scheduled for 30 days after the comparison project ended. The assumption was that servers from the evaluation would be reusable for the pilot. As for the reporting requirement at this customer, they clearly expected the reporting to be useful, flexible and entirely web-based. As it turned out, since we went with AM 5.x, this was actually not a great problem. Had we gone with MOM, we would have been writing reports with the Microsoft Access report designer and scheduling them with static batch files.
<br />
<br />C. We needed a two-way link to Micromuse NetCOOL, which was the firm's Manager of Managers. As you are all aware, AM provides numerous connectors to other monitoring programs including NetCOOL. It turned out that we installed this connector in about 90 minutes and it ran successfully for the duration of the pilot.
<br />
<br />D. Easy to use and extend. A key requirement for extensibility was in automating the weekly reboots of our Exchange clusters. The reboot had to be performed with complete control of the logical application so that at no time was a node rebooted if the application was not running on the other node.
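A sketch of that reboot rule follows; get_active_node, move_group and reboot are hypothetical stand-ins for whatever cluster commands a real scripted job would call, injected here so the safety logic itself is testable.

```python
# Rolling reboot of a two-node cluster that never reboots a node while the
# logical application is running on it.

def safe_rolling_reboot(nodes, get_active_node, move_group, reboot):
    for node in nodes:
        if get_active_node() == node:
            other = next(n for n in nodes if n != node)
            move_group(other)              # fail the application over first
        if get_active_node() == node:      # failover didn't take: stop here
            raise RuntimeError(f"application still on {node}; aborting")
        reboot(node)                       # safe: app confirmed elsewhere

# Usage with a faked cluster state:
state = {"active": "NODE-A", "rebooted": []}
safe_rolling_reboot(
    ["NODE-A", "NODE-B"],
    get_active_node=lambda: state["active"],
    move_group=lambda target: state.update(active=target),
    reboot=lambda node: state["rebooted"].append(node),
)
print(state["rebooted"])  # both nodes rebooted, app never on a rebooting node
```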
<br />
<br />E. We also needed the main components of the monitoring to be redundant, so that we could tolerate an outage in a data center without losing our monitoring. As you all know, today Business Contingency Planning is a must-have on all projects.
<br />
<br />The AM deployment: The production deployment consisted of five servers for AM, all located in the NY/NJ data centers: a clustered (active/passive) SQL Server for the QDB and the AppAnalyzer databases; one reporting agent that also served as the web console server, and three management servers, each one dedicated to a continent. There was also one OLAP server for AppAnalyzer - this did not have any redundancy, but this was acceptable since it was only for reporting.
<br />
<br />I just want to mention that our choice of AM over MOM hinged primarily on three technical merits:
<br />- Existing modules to support Blackberry and either TrendMicro or Sybari AV
<br />- Better integration of reports, especially the AM reports portal
<br />- Roughly equivalent coverage of the core monitoring requirements but with fewer discrete tasks
<br />
<br />III. Best Practices - Four Lessons Learned
<br />
<br />A. Architectural planning is key. The more complex your environment is, whether in terms of the number of QDBs you've deployed, redundancy, the impact of monitoring jobs on the agent, or other factors, the more important it is to get your architecture right. I'm sure I'm preaching to the choir on this point.
<br />
<br />A corollary point is that the more complex your environment is, the more likely you'll need to customize it. I'll speak more about customization in a minute when I talk about scripting.
<br />
<br />A second corollary is to maintain a current lab setup. I think that when budgets are tight, Systems Management is too often not considered a high enough priority to justify the extra expense of a lab, but without one you're really never sure when a new job will have harmful side effects.
<br />
<br />B. Documenting the environment is critical. Be rigorous in your documentation of installations/upgrades. I suggest the best documentation is pre- and post-installation snapshots of your servers' configurations, including every file and registry setting and, in the case of the QDB, every object in the database. There are sophisticated and expensive tools to collect these snapshots, but you can really use fairly simple ones as well. To record changes to the database you can even use SQL Server's native database scripting tool.
<br />
<br />This is obviously required NOT for every machine but for every configuration: i.e., at least one cluster if you're monitoring any clusters, at least one MS, at least one agent for every server type, and of course the SQL server.
<br />
<br />A corollary rule is to keep on top of the changes made to your environment. While it's standard practice for every IT shop to announce all changes in advance, I'm suggesting you need to tie in your snapshot procedures with these change plans so that your snapshots are as close to the before and after picture as possible.
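As an illustration of the snapshot-and-diff idea (a simple stand-in for the file portion only; the registry and QDB snapshots mentioned above would need other tools), something like this is enough to capture a before/after picture:

```python
# Record a hash of every file under a root, then diff two snapshots to see
# exactly what an installation or upgrade added, removed, or changed.
import hashlib
from pathlib import Path

def snapshot(root):
    return {
        str(p.relative_to(root)): hashlib.md5(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(before, after):
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "changed": sorted(k for k in before.keys() & after.keys()
                          if before[k] != after[k]),
    }

# Usage: take snapshot(root) before the install, again after, then diff them.
```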
<br />
<br />C. The AM online forum is a big help. It seems everyone who gets help with a problem tries to help someone else, and the NetIQ moderators are excellent. I also find the forum's search tool very helpful. Another great source of online help, especially for newcomers to AM, is the KS depot.
<br />
<br />D. Validate that your monitoring system is doing what you expect. By this I mean you have to be diligent about tracking down the cause of any anomalous behavior from any monitoring component. The value of this rule is multiplied for larger deployments, where small problems can multiply very quickly.
<br />
<br />If you have a new agent installation that is failing, you should rectify that immediately before rolling out any other agents. You need to work with tech support on these problems as soon as possible. Simply stated, you need to maintain the following standards:
<br />
<br />- Every agent should run every job reliably, 24 x 7.
<br />
<br />- Every policy should be reflected on every agent promptly.
<br />
<br />- Every report should always run correctly.
<br />
<br />A corollary note is that one of the core dependencies of AM is also one of the hardest components to troubleshoot when it fails: the RPC services between the agents and the MS. If an agent has problems with its RPC, it may not be on account of any change in your AM configuration (in other words, it can be the result of a configuration change made by another application), but it will halt your monitoring dead in its tracks all the same, and could drive you a bit crazy in the process. This is an excellent time to take a new configuration snapshot to see what's changed since the last known good configuration.
<br />
<br />E. I've found it very helpful to have a good tool for file distribution that is independent of the monitoring system. By this I mean a program that lets you easily push or pull files to or from every server in your system. One example of when this is handy is during a virus outbreak, when you're given a list of possible places to look for signs of the virus. By pushing out a simple command file that looks for them, you enable your agents to scan for the virus with the easy RunDOS KS, which is exactly what we did at Customer 1 for both the Code Red and the I-Love-U viruses. With the pull facility you can collect AM-generated files from the agents to use as inputs to a report or to confirm the consistency of your agent deployment.
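The pushed-out scan can be as simple as this sketch (the suspect paths are placeholders; a real outbreak would come with its own list of indicator files):

```python
# Check this host for a list of suspect file paths and report any hits;
# the batch file pushed out via RunDOS did essentially the same thing.
import os

def scan_indicators(suspect_paths):
    return [p for p in suspect_paths if os.path.exists(p)]

suspects = [
    r"C:\inetpub\scripts\root.exe",            # placeholder indicator path
    r"C:\temp\love-letter-for-you.txt.vbs",    # placeholder indicator path
]
hits = scan_indicators(suspects)
print(hits if hits else "no indicators found")
```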
<br />
<br />IV. Thoughts on scripting
<br />
<br />I want to discuss scripting for Systems Management because I think many administrators are still reluctant to customize their solutions for fear that they will be unable to upgrade their product or that they will break certain Report dependencies. Obviously, the standard practice with regard to the first concern is to rename your KS with a proprietary naming convention that will not conflict with the product upgrade; for the latter, you can usually find and fix the Reports that use hard-coded KS names.
<br />
<br />A. On the Windows platform, Microsoft has made it easier and easier to access HW/OS configuration and status information in scripts - something which was always taken more or less for granted on the Unix platform.
<br />
<br />The main advantage that more pervasive scripting offers to Systems Management is that more KS can be self-sufficient in terms of querying their environment. By that I mean that you can now accomplish more tasks within the KS compiler without having to shell out to the system. The disadvantage of shelling out to the system to call an external program is that it adds a layer of overhead and error checking. Overhead is a bad thing when a system is stressed, and too many sources of error are a bad thing for the developers who have to write and maintain code.
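A toy illustration of that trade-off (the metric itself is fabricated; the point is the extra process and error paths that shelling out drags in):

```python
# Shelling out spawns a child process and forces exit-code checks and output
# parsing; an in-process query is one function call with ordinary exception
# handling. The fabricated metric here just computes 42 both ways.
import subprocess
import sys

def metric_via_shell():
    result = subprocess.run(
        [sys.executable, "-c", "print(40 + 2)"],  # stand-in external tool
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout)   # parsing step: one more way to fail

def metric_in_process():
    return 40 + 2               # same answer, no child process spawned

print(metric_via_shell(), metric_in_process())
```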
<br />
<br />B. Another thought on scripting is that every vendor now uses XML files for their application interfaces, and I think we all have seen the power of XML for simplifying development. One place XML formatting can be immediately useful to Systems Management administrators is in the area of reports. Whereas in the past IT reports were formatted in hard-coded text layouts or comma-delimited records for display in a spreadsheet, today we want most of our reports in HTML. By generating report data in XML files, one has the option of presenting it in any HTML page. HTML pages generated from XML data files can support sorting and filtering within the browser instead of requiring a round trip back to the web server to execute a CGI or ASP script. I assume that most of you are already using XML techniques, but I wanted to mention it for those who maybe have not yet taken this direction.
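As a small sketch of that approach (the report data below is made up), the same XML rows can be rendered as an HTML table, or handed to browser-side script for sorting and filtering:

```python
# Emit report rows as XML once, then render them as an HTML table; a
# browser-side XSLT or script could sort/filter the same XML without
# another round trip to the web server.
import xml.etree.ElementTree as ET

xml_report = """<report>
  <row server="mail01" cpu="85"/>
  <row server="mail02" cpu="42"/>
</report>"""

def to_html_table(xml_text):
    rows = ET.fromstring(xml_text).findall("row")
    cells = "".join(
        f"<tr><td>{r.get('server')}</td><td>{r.get('cpu')}</td></tr>"
        for r in rows)
    return f"<table><tr><th>Server</th><th>CPU %</th></tr>{cells}</table>"

print(to_html_table(xml_report))
```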
<br />
<br />V. Integration of AppManager with other solutions
<br />
<br />I'm going to discuss three categories of products that cohabit AppManager's monitoring space: M-O-M, other NetIQ products, and other non-NetIQ products.
<br />
<br />A. Manager of Managers
<br />
<br />1. I have heard fairly negative feedback about most M-O-M products except for two: Micromuse NetCOOL and Managed Objects. I think what differentiates these two from the M-O-M offerings of the usual suspects (Tivoli, CA and HP) is that they were designed from their inception to be M-O-Ms rather than component monitors with certain M-O-M features bolted on. Of the two, I'm most interested in Managed Objects for deployments where business unit managers want to see their service status.
<br />
<br />2. NetIQ may add an event correlation engine to AM 6. This should be very exciting if it's done with an API that allows users and third parties to define their own relationship endpoints. On the other hand, for a multi-platform enterprise, which typically already has several point solutions in place, the correlation determination will need to be made at the level of an M-O-M.
<br />
<br />B. Other NetIQ products
<br />
<br />Obviously, if you are monitoring Exchange 5.x, 2000, or 2003, NetIQ AppAnalyzer for Exchange is a very cool product. It was one of the first systems management products to use OLAP services, and it was also one of the first to use the .NET Framework. So it has a history of being cutting edge.
<br />
<br />I found that using AppManager agents with AppAnalyzer KSs to run the local Exchange data-gathering tasks is very efficient and scales well. I believe it is the IRS that uses AppAnalyzer to report on something like 75,000 mailboxes.
<br />
<br />Secondly, the Diagnostics consoles for Windows and for SQL Server are extremely useful for real-time graphs and performance snapshots. They're priced very reasonably, and if you don't already have a product in-house that delivers this real-time perspective, then you really should take a look at them.
<br />
<br />C. Other Non-NetIQ products
<br />
<br />Only one: Netuitive Analytics. This is an excellent performance benchmarking tool that cannot function by itself; it needs either AppManager or BMC Patrol. It may integrate with other monitoring solutions in the future.
<br />
<br />Netuitive's primary focus is on dynamic alarm thresholds that are generated uniquely for each machine based on its past performance over time. These dynamic thresholds are not derived from simple moving-average calculations of a single performance metric. Instead, the data from multiple performance counters collected by the AM agent are used as inputs to a patented neural network that can predict a server's key performance indicators up to two hours into the future.
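<br />To make the distinction concrete, here is a toy Python sketch of a per-machine dynamic threshold. It uses a simple mean-plus-k-sigma rule over the machine's own sample history, which is deliberately far simpler than Netuitive's patented multi-counter neural network; the point is only that the threshold comes from each machine's history rather than from a one-size-fits-all static limit:<br />
<br />
```python
from statistics import mean, stdev

def dynamic_threshold(history, k=3.0):
    """A per-machine alarm threshold derived from that machine's own
    recent samples: mean plus k standard deviations. A toy stand-in
    for Netuitive's neural-network approach, shown for illustration."""
    if len(history) < 2:
        raise ValueError("need at least two samples")
    return mean(history) + k * stdev(history)

# A hypothetical CPU% series that normally hovers near 30:
samples = [28, 31, 29, 33, 27, 30, 32, 29]
limit = dynamic_threshold(samples)   # roughly 36 for this series
```
<br />A quiet file server and a busy Exchange server would get very different limits from the same rule, which is what a static threshold cannot give you.<br />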
<br />
<br />NA also provides excellent value for its intuitive baselining capabilities. Some scenarios where these baselines would be useful are:
<br />
<br />– Your most critical servers, since you want to know you've tuned their performance as much as possible;
<br />– Your most heavily used servers, since you want to know what limits, if any, to set for your static thresholds; and
<br />– Your lab servers, since you want to compare performance from multiple configurations.
<br />
<br />Another note on NA: the company's latest release uses some very exciting Open Source technology from the Apache project that, I think, will attract a growing number of customers. Previously the reporting interface required Microsoft IIS exclusively, but the new interface is written on top of Jakarta Tomcat, the Java servlet container from the Apache organization (www.apache.org). This allows the user interface to run identically on any server platform that supports Java and XML.
<br />
<br />I'd like to thank NetIQ and Bekim Protopapa in particular for inviting me to speak today. I hope my comments were helpful, as I think AppManager is a great product that still has a lot to offer. I'd be happy to answer any of your questions after the briefing.
<br />
<br />
<br /><div class="blogger-post-footer">Refer to www.JMACINC.com for the rest of the story.</div>