
Ideas on Asset Criticality Inference (ACI) – Research Notes

General | 22 August 2014

Asset management is a foundational IT service that many organizations continue to struggle to provide. Worse yet, from a security perspective, this weakness affects all of the secondary and tertiary services that rely on this foundation, such as vulnerability management, security monitoring, analysis, and response, to name a few. While it is very rare that an organization has an up-to-date or accurate picture of all of its IT assets, it is even rarer (think rainbow-colored unicorn that pukes Skittles) that an organization has an accurate picture of the criticality of those assets. Some do a decent job, when standing up a CMDB, of mapping applications to supporting infrastructure and ranking their criticality (although many tend to use a binary critical/not-critical ranking), but these criticality rankings are statically assigned and, if not updated over time, turn stale. Manual re-evaluation of assets and applications in a CMDB is a time-consuming task, and after the pain of setting up the CMDB in the first iteration, many organizations are unwilling to make re-certification of assets and criticality rankings a priority…and it is easy to understand why.

My other issue is that many CMDBs weight the “availability” factor of an asset over its criticality from a security perspective. For example, it is not uncommon to see a rigorous change management process for assets in the CMDB alongside a less rigorous (or non-existent) change management process for non-CMDB assets. But I digress…to summarize my problem:

  • Asset criticality often does not exist or is assigned upon the asset’s entry into a central tracking mechanism or CMDB
  • The effort to manually determine and recertify asset criticality is often so great that manual processes fail or produce inaccurate data
  • In order for asset criticality data to be useful we may need near-real-time views of criticality that change in concert with the asset’s usage
  • Without accurate asset inventories and criticalities we cannot accurately represent overall risk or risk posture of an organization

The impact of inaccurate asset inventories and the lack of up-to-date criticality rankings got me thinking that there has to be a better way. Since I spend a majority of my time in the security monitoring space, and now what seems to be the threat intel and security/data analytics space, I kept thinking of possible solutions. The one factor I found in common with every possible solution was data. And why not? We used to talk about the problem of “too much data” and how we were drowning in it…so why not use it to infer the criticality of assets and to update that criticality in an automated fashion. Basically, make the data work for us for a change.

To start, I looked for existing solutions but couldn’t find one. Yes, some vendors have pieces of what I was looking for (e.g. identity analytics), but no one vendor had a solution that fit my needs. In general, my thought process was:

  1. We may be starting with a statically defined criticality rating for certain assets and applications (i.e. from a CMDB), and I’m fine with that as a starting point
  2. I need a way to gather and process data that would support, or reject, the statically assigned ratings
  3. I also need a way to assign ratings to assets outside of what has been statically assigned (e.g. critical assets not included in the CMDB)
  4. The rating system shouldn’t be binary (yes/no) but more flexible, taking into account real-world factors such as the type/sensitivity of the data stored or processed, usage, and network accessibility (a rough sketch of such a score follows this list)
  5. Asset criticalities could be inferred and updated on a periodic (e.g. monthly) or real-time basis through data collection and processing
  6. The side benefit of all of this would be a more accurate asset inventory and picture that could be used to support everything from IT BAU processes (e.g. license management) to security initiatives (e.g. VM, security monitoring, response, etc.)
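
To make thought #4 concrete, here is a minimal sketch of what a non-binary rating could look like. This is only an illustration under assumptions: the factor names, 0-to-1 scales, and weights are hypothetical and not from the paper. The point is simply that a continuous score can blend data sensitivity, usage, and network exposure, and that the weights could later be fit against the statically assigned CMDB ratings from thought #1.

    # Minimal sketch of a non-binary criticality score.
    # All factor names, scales, and weights are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class AssetObservations:
        data_sensitivity: float  # 0.0 (public) .. 1.0 (regulated/PII), e.g. from data classification scans
        usage: float             # 0.0 .. 1.0, e.g. normalized connection counts from flow logs
        exposure: float          # 0.0 (isolated) .. 1.0 (internet-facing), from network topology data

    # Hypothetical weights; in practice these could be fit against the
    # statically assigned CMDB ratings used as the starting point.
    WEIGHTS = {"data_sensitivity": 0.5, "usage": 0.3, "exposure": 0.2}

    def criticality(obs: AssetObservations) -> float:
        """Weighted blend of observed factors; returns a continuous 0..1 score."""
        return (WEIGHTS["data_sensitivity"] * obs.data_sensitivity
                + WEIGHTS["usage"] * obs.usage
                + WEIGHTS["exposure"] * obs.exposure)

    # Example: a moderately used, internet-facing server holding sensitive data
    # scores ~0.77 rather than a flat critical/not-critical flag.
    print(criticality(AssetObservations(data_sensitivity=0.9, usage=0.4, exposure=1.0)))

A real version would re-compute the score as new observations arrive, which is what makes the periodic or near-real-time updates in thought #5 possible.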

These six thoughts guided the drafting of a research paper, posted here (http://www.malos-ojos.com/wp-content/uploads/2014/08/DGRZETICH-Ideas-on-Asset-Criticality-Inference.pdf), that I’ve been ever so slowly working on. Keep in mind that the paper is a draft and still a work in progress; it attempts to start solving the problem using data and the idea that we should be able to infer the criticality of an asset based on models and data analytics. I’ve been thinking about this for a while now (the paper is dated 6/26/2013), and last year I even attempted to gather a sample data set and work with the M.S. students from DePaul in the Predictive Analytics concentration to solve it, but that never came to fruition. Maybe this year…

Comments

  • Cameron Hunt says:

    Deron, would using a graph data approach to support causal modeling be helpful in this case? Using something like Titan (which uses Hadoop services for data storage) might provide a way to test this concept with large scale data, while still preserving the flexibility of a data model that also supports walking the graph to determine causal impact (and help derive dynamic priorities).
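
    (At scale, Titan would be the natural home for this, but the traversal itself is easy to prototype first. The sketch below uses networkx purely as a small-scale stand-in; the dependency edges, node names, and the “inherit the most critical dependent” propagation rule are all assumptions for illustration.)

      import networkx as nx

      # Directed edges point from an application to the assets it depends on.
      g = nx.DiGraph()
      g.add_edge("payroll-app", "db-01")
      g.add_edge("payroll-app", "web-01")
      g.add_edge("wiki-app", "web-01")

      # Statically assigned (CMDB-style) criticalities, known for apps only.
      app_criticality = {"payroll-app": 0.9, "wiki-app": 0.2}

      def inferred_criticality(graph: nx.DiGraph, node: str) -> float:
          """Walk the graph upstream: an asset inherits the highest
          criticality of anything that (transitively) depends on it."""
          dependents = nx.ancestors(graph, node) | {node}
          return max(app_criticality.get(d, 0.0) for d in dependents)

      for asset in ("db-01", "web-01"):
          print(asset, inferred_criticality(g, asset))
      # db-01 and web-01 both inherit 0.9 from payroll-app, so the shared
      # web tier surfaces as critical even though wiki-app alone is not.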
