
Effective Vulnerability Management — A Preface

While it's fresh in my mind, I thought I would write a little bit tonight about implementing a sound vulnerability management program within medium-to-large-sized businesses. The good folks here at Malos Ojos and I are only starting to improve upon the VM processes in our day jobs, so the following posts in this category will be a mixture of me thinking out loud: a splash of advice mixed with a dash of growing pains as we step down this well-traveled-but-not-quite-understood path.

Vulnerability management means different things to different people. Put simply, it is the act of obtaining a (semi?)-accurate depiction of the risk levels in the environment, with the eventual goal of bringing those risks down to business-tolerable levels. At many companies, this translates into the following:

  1. A lone security analyst, sometime between the hours of 6pm and 7am, will scan the internal/external networks with a vulnerability scanner of his/her choosing. The resulting output will be a 9MB XML or PDF file, the equivalent of approximately 5,000 printed pages.
  2. This file is sent out via email to IT staff (print form would result in an arboreal holocaust, which would eventually lead to the Greenpeace reservists being called up to active duty…).
  3. An IT staff member opens the attachment and notices a couple of things: many confusing words/numbers, and systems that don't belong to them. The attachment is closed and never opened again.
  4. GOTO 1

If you look at the people, technology, and processes used to deploy the vulnerability management solution in the scenario described above, I wouldn't rate any of the three pillars very high. Well, maybe people, since at least this company HAS a security analyst…

Some of you will also point out the laziness of the IT staff for closing the document prematurely instead of asking the analyst for help making heads or tails of the report. While this is true, it is our job as security professionals to anticipate this laziness and create a report that will ease their digestion of the information, as in the sketch below. That's why they pay us the big bucks *cough* 😀
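
As a concrete example of what easing that digestion might look like, here is a minimal sketch that splits one monolithic scanner export into small, per-owner reports. The scan_results.csv layout and its host/owner/severity/title columns are hypothetical stand-ins for whatever your scanner actually exports:

```python
# Split a monolithic scanner export into per-owner reports so each team
# only receives findings for systems it actually owns. The CSV columns
# (host, owner, severity, title) are assumed, not a scanner standard.
import csv
from collections import defaultdict

findings_by_owner = defaultdict(list)

with open("scan_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        findings_by_owner[row["owner"]].append(row)

for owner, findings in findings_by_owner.items():
    # One small, relevant file per team beats one unreadable 5,000-page blob.
    with open(f"report_{owner}.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["host", "severity", "title"])
        writer.writeheader()
        for row in findings:
            writer.writerow({k: row[k] for k in ("host", "severity", "title")})
```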

Back to my point: I've sat down and mused upon what a good vulnerability management program should consist of. Here are my notes thus far:

  1. Inventory of assets
  2. Data/system classification scheme
  3. Recurring process for identification of vulnerabilities -> risk scoring (a rough sketch follows this list)
  4. LOLCats
  5. Remediation and validation
  6. Reporting and metrics
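
To make item 3 a little less hand-wavy, here is a rough sketch of how risk scoring could combine raw scanner severity with the classification scheme from item 2. The 1-5 criticality scale, the asset classes, and the bare multiplication are assumptions for illustration, not a standard:

```python
# Weight raw scanner severity by asset criticality so that a CVSS 9 on a
# lab box does not outrank a CVSS 7 on the ERP server. The asset classes
# and the 1-5 scale below are illustrative assumptions.
CRITICALITY = {"lab": 1, "workstation": 2, "internal-app": 3, "erp": 5}

def risk_score(cvss: float, asset_class: str) -> float:
    """Scale a 0-10 CVSS base score by asset criticality (1-5)."""
    return cvss * CRITICALITY.get(asset_class, 3)  # unknown assets: mid-tier

print(risk_score(9.0, "lab"))  # 9.0  -- the lab box loses, as it should
print(risk_score(7.0, "erp"))  # 35.0
```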

The list above will be revised over time, almost assuredly as soon as I arrive at the office tomorrow morning with my coworkers having read this post. Because I'm a tad hungry, but more importantly because I don't get paid by the word (Schneier, I'm looking at you), I will end the post here.

Fear not, true believers: I have many more posts to come that expand on my vague aforementioned scribbles.

Comments

  • Deron Grzetich says:

    I'm going to agree with your start on the VM process, and not because I had a few too many tonight at dinner. I think the most difficult piece is asset management…it is sad when IT needs to ask security where its assets are. I'm thinking of a particular large oil company we worked for that was surprised to find entire subnets of servers it didn't even know about.

    We should not forget that this often-overlooked management area is the foundation not only for patch and vulnerability management, but also for business continuity/disaster recovery and incident response.

    No one has really been discussing the effect that virtualization has on the asset management process…we no longer tie a single system to one physical box.

  • aj says:

    I have thoughts on this. I do. But am too lazy to share right now. Instead, the following questions:

    -How well can things be prioritized (or risk scored) unless you know what's on the systems in question?
    -How would you force-rank the following by order of 'importance': VM, AM, PM, or A/V?
    -In order for a VM program to show its value, it needs to be summarized in a single picture or page. So why do people end the process with the 200-page report?
    -Are sparklines the answer?
    -Methinks the bloggers on this site have too much time on their hands. What, are you guys done with the clans on WoW or whatever the heck it is?

  • Deron Grzetich says:

    I'll start with the last question first and work backwards…it is Battlefield 2 or Counter-Strike…never WoW. Speaking of WoW, I saw your old buddy at a conference recently. As far as time goes, I wouldn't say we have too much on our hands, although I'm sure our work week is shorter than a consulting work week by about half.

    I would agree that middle management would like the single-page report. I would argue that most CIOs couldn't care less about the pictures, graphs, sparklines, whatever, and really only want the answers to the "Are we secure? Does this make us (more) compliant with X?" questions. I do think this single-page summary is a good talking point and has value, but I doubt it would be looked at or reviewed to any degree outside of the middle management level at most organizations. My point: the value only means something in relation to the CIO's questions or to those who actually care about security (read: the security group). The report doesn't need to be 200 pages if you have a decent VM system that tracks vulnerabilities and assets and has good reporting functionality. Even those ranked highest by Gartner, e.g., nCircle, have only acceptable reporting at best out of the box. Add-ons, like the Security Intelligence Hub, make up for a lack of good "on-system" reporting.
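
    For what it's worth, aj's sparkline question is cheap to test. Here is a minimal sketch that renders a trend of open critical findings as a single small image for that one-page summary; the monthly counts are invented sample data standing in for whatever your VM system exports:

    ```python
    # Render a sparkline of open critical findings over time: a tiny,
    # axis-free trend image sized for a one-page executive summary.
    # The counts below are invented sample data.
    import matplotlib.pyplot as plt

    open_criticals = [148, 141, 133, 120, 97, 88, 80, 61]  # monthly counts

    fig, ax = plt.subplots(figsize=(3, 0.6))  # sparkline-sized canvas
    ax.plot(open_criticals, linewidth=1.5)
    ax.axis("off")  # sparklines drop axes; the trend itself is the message
    fig.savefig("open_criticals_sparkline.png", bbox_inches="tight", dpi=150)
    ```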

    Rank the process areas…I still think asset management, followed by configuration and patch management, ranks higher than VM. I would say AV is a standard process these days, but I'm not sure where I would rank it; it is a fairly mature process at most organizations, so I often overlook it in the VM, PM, AM, CM, etc. debate. I say AM first because it is the foundation for BCP/DR, IR, PM, VM, and CM: if you don't know where the asset is and who manages it, are we sure it is being patched, scanned, running AV, etc.? I would say a good BIA should have some of this information…but have you ever come across a good BIA? VM is the catch-all and the check on how effective you are in the AM, PM, and CM process areas.

    Prioritizing the system by the data it contains is a good idea, but this is also a debate we have going at work: do we classify the system by the data, the applications, or both? I'm choosing the path of determining risk or criticality based on the impact to the business if the system is lost (e.g., email systems in a law firm), the applications it runs, and the data it stores/transacts. This isn't difficult for a law firm, given that data is either "client/case information" or "not client/case information". I'm trying not to forget about the core network infrastructure devices as well.
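
    A rough sketch of that impact/applications/data approach, with the weights, the 1-10 scale, and the binary law-firm data test all being assumptions for illustration only:

    ```python
    # Combine all three classification inputs (business impact if lost,
    # applications hosted, data handled) into one criticality score,
    # rather than picking a single dimension. Weights are illustrative.
    def criticality(impact_if_lost: int, hosts_core_app: bool,
                    holds_client_data: bool) -> int:
        """Fold a 1-5 impact rating plus app/data flags into a 1-10 score."""
        score = impact_if_lost                  # BIA-style impact, 1-5
        score += 3 if hosts_core_app else 0     # e.g. email in a law firm
        score += 2 if holds_client_data else 0  # "client/case information"
        return min(score, 10)

    print(criticality(5, True, True))    # email server: 10
    print(criticality(2, False, False))  # kiosk PC: 2
    ```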

    And don’t be lazy; 5 hour energy is where it is at. You’re going to need that in a month or two…trust me.
