Malos Ojos Security Blog

Incident Response

Operationalizing Threat Data & Intelligence – Putting the Pieces of the Puzzle Together

by on Aug.20, 2014, under General, Incident Response

While this isn’t a post on what threat intelligence is or is not, I’d be negligent if I didn’t at least put some scope and context around the term, as the focus of this post is on making threat data and intelligence actionable.  Not to mention, every vendor and their grandmother is trying to use this phrase to sell products and services without fully understanding or defining its meaning.

First, it is important to understand that there is a difference between threat data and threat intelligence.  There are providers of data, which is generally some type of atomic indicator (i.e. IOC) that comes in the form of an IP address, URL, domain, metadata, email address or hash.  This data, in its least useful form, is a simple listing of indicators without attribution to the threat actor(s) or campaigns with which they are associated.  Some providers include the malware/malware family/RAT that was last associated with the indicator (i.e. Zeus, CryptoLocker, PlugX, njRAT, etc.) and the date of the last associated activity.  Other providers focus on telemetry data about the indicator (i.e. who registered the domain, geolocated IP, AS numbers, and so on).

Moving up the maturity scale, and closer to real intel, are providers that tie a series of indicators such as IPs, domains/subdomains, email addresses and metadata to a campaign and threat actor or group.  If we add to the atomic indicators the tactics used by the threat actors (i.e. phishing campaigns that include a weaponized PDF that installs a backdoor that connects to C2 infrastructure associated with a threat actor or group), we start to build a more holistic view of the threat.  Once we understand the tactics, techniques and procedures (TTPs) and capabilities of our adversaries, we can focus on the intent of the actors/groups or personas and how their operations are, or are not, potentially being directed at our organization.  The final piece of the equation, which is partially the focus of this post, is understanding how we take these data feeds, enrich them, and then use them in the context of our own organization to move towards providing actual threat intelligence – but that is a post on its own.

Many organizations think that building a threat intelligence capability is a large undertaking.  To some extent they are correct in the long-term/strategic view of a mature threat intel program that may be years down the road.  However, the purpose of this post is to argue that even with just a few data and intel sources we can enable or enhance current capabilities such as security monitoring/analysis/response and vulnerability management services.  I chose these services because they fit nicely into my reference model for a threat monitoring and response program, with threat intel at the center of that model.  So let’s walk through a few examples…

Enrichment of Vulnerability Data

Vulnerability assessment programs have been around for what seems like forever, but mature vulnerability management programs are few and far between.  Why is this?  It seems we, as security professionals, are good at buying an assessment technology and running it, and that’s about it.  What we aren’t very good at is setting up a full-cycle vulnerability management program to assign and track vulnerability status throughout the lifecycle.  Some of the reasons are historical challenges (outlined in more detail in a research paper I posted here), such as poor asset management/ownership information, a history of breaking the infrastructure with your scans (real or imagined by IT), or far too many vulnerabilities identified to remediate.  Let’s examine that last challenge of having too many vulnerabilities and see if our data and intel feeds can help.

Historically, what have security groups done when faced with a large number of vulnerabilities?  The worst action I’ve seen is to take the raw number of vulnerabilities and present them as a rolling line graph/bar chart over time.  This type of reporting does nothing to expose the true risk, which should be one of the main outputs of the vulnerability management program, and infuriates IT by making them look bad.  Not to mention these “raw” numbers generally include the low-severity vulnerabilities.  Do I really care that someone can tell what time zone my laptop is set to?  I don’t know about you, but I doubt that is going to lead to the next Target breach.  Outside of raw numbers, the next action usually taken is to assign some remediation order or preference to the assessment results.  While a good start, most security teams go into “let’s just look at sev 4 and sev 5 vulnerabilities” mode, which may still result in a very large list.  Enter our threat data…

What if we were able to subscribe to a data feed where the provider tracked current and historical usage of exploits, matched each exploit with the associated vulnerabilities, and hence the required remediation action (i.e. apply patch, change configuration, etc.)?  This data, when put into the context of our current set of vulnerabilities, becomes intelligence and allows us to prioritize remediation of the vulnerabilities that pose the greatest risk due to their active use in attack kits, as well as non-0-day exploits being used by nation-state actors.  As a side note, a few vendors are spreading the myth that almost all nation-state attacks utilize 0-days, which I find an odd claim given that we are so bad at securing our infrastructure through patch and configuration management that an Adobe exploit from 2012 is likely to be effective in most environments.  But I digress.

So how much does using threat data to prioritize remediation really help the program in reducing risk?  In my research paper I noted that by limiting to sev 4 and sev 5 vulnerabilities and applying threat data, it is possible to reduce the number of systems that require remediation by ~60%, and the number of discrete patches that need to be applied by ~80%.  While one may argue that this may still leave a high number of patches and/or systems requiring treatment, I would counter-argue that I’d rather address 39,000 systems versus 100,000 and apply 180 discrete patches over 1,000 any day.  At least I’m making more manageable chunks of work, and the work that I am assigning results in a more meaningful reduction of risk.
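To make this concrete, here’s a rough Python sketch of the prioritization logic.  The scanner output, the field names, and the set of actively exploited CVEs are all invented for illustration, not any vendor’s actual schema:

```python
# Hypothetical scanner export and threat data feed contents.
vulns = [
    {"host": "srv01", "cve": "CVE-2012-0507", "severity": 5},
    {"host": "srv02", "cve": "CVE-2013-1347", "severity": 4},
    {"host": "srv03", "cve": "CVE-2011-3389", "severity": 3},
    {"host": "srv04", "cve": "CVE-2012-4681", "severity": 5},
]

# CVEs the (hypothetical) feed reports as actively used in attack kits.
actively_exploited = {"CVE-2012-0507", "CVE-2012-4681"}

def prioritize(vulns, exploited, min_severity=4):
    """Keep only high-severity findings, then rank those with known
    active exploitation ahead of the rest (highest severity first)."""
    candidates = [v for v in vulns if v["severity"] >= min_severity]
    return sorted(candidates,
                  key=lambda v: (v["cve"] not in exploited, -v["severity"]))

for v in prioritize(vulns, actively_exploited):
    flag = "ACTIVE EXPLOIT" if v["cve"] in actively_exploited else ""
    print(v["host"], v["cve"], v["severity"], flag)
```

The point isn’t the ten lines of code; it’s that the feed turns a flat severity list into a ranked work queue IT can actually act on.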

Integrating Your Very First Threat Feed – How Special

In addition to creating a reference model for a security monitoring, analysis and response program (which includes threat intel), I also built out a model for implementing the threat intel service, which follows a four-step flow: 1. Threat Risk Analysis, 2. Acquisition, 3. Centralization, and 4. Utilization.  I’ll detail this model in a future post, along with the fact that in a mature service there would be a level of automation, but for now I’d like to point out that it is perfectly acceptable to build a threat intel program as a series of iterative steps.  By simply performing a threat risk assessment and understanding or defining its data and intel needs, an organization should be able to choose a data or intel provider that is suitable to its goals.  Ironically, I’ve witnessed a few organizations that went out and procured a feed, or multiple feeds, without understanding how it was going to benefit them or how it would be operationalized…I’ll save those stories for another day.  And while I’m not going to cover the differences between finished intel and indicators/data in this post, it is possible for an organization to procure feeds (open source and commercial) and instrument its network to prevent activity or, at a minimum, detect its presence.

As an example, let’s say that we have a set of preventive controls in our environment – firewalls, web/email proxies, network-based intrusion prevention systems, and endpoint controls such as AV, app whitelisting, and host-based firewalls.  Let’s also say we have a set of detective controls that includes a log management system and/or security information and event management (SIEM) tool, which is being fed by various network infrastructure components, systems and applications, and the preventive controls mentioned above.  For the sake of continuing the example, let’s also say that I’m in an industry vertical that performs R&D and would likely be targeted by nation-state actors (i.e. this Panda or that Kitten) in addition to the standard crimeware groups and hacktivists.  With this understanding I should be able to evaluate and select a threat intel/data provider that could be used to instrument my network (preventive and detective controls) to highlight activity by these groups.  At this point you should ask yourself three questions: do I need a provider that covers all of these threat actor types/groups, do I need vertical-specific feeds, and do I have a process to take the feeds and instrument my environment?  The answer to all three is likely yes.

Continuing with the example, let’s say I selected a provider that offers analyst-derived/proprietary intel in addition to cultivating widely available open source information.  This information should be centralized so that an operator can assess the validity and applicability of the information being shared and determine the course of action for integrating it into the preventive and/or detective controls.  A simple example may be validating the list of known-bad IPs and updating the firewall (network and possibly host-based) with blocks/denies for these destinations.  Or updating the web proxy to block traffic to known-bad URLs or domains/sub-domains.  One thing that shouldn’t be overlooked here is triggering an alert on this activity for later reporting on the efficacy of our controls and/or the type of activity we are seeing on our network.  This type of data is often lacking, and many organizations struggle to create management-level intel reports that are specific to the organization and highlight the current and historical activity being observed.  In addition, we could take the indicators and implement detection rules in our log management/SIEM to detect and alert on this activity.  Again, keep in mind that for an organization just standing up a threat intel service these may be manual processes, with the possibility of partial automation in a later or more mature version of the service.
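As a rough illustration of that “validate, then push to the firewall” step, here’s a Python sketch.  The feed schema and the deny-rule syntax are both made up for the example; the real point is the validation and de-duplication you’d want before anything touches a production control:

```python
import ipaddress

# Hypothetical feed entries as they might arrive from a provider; the
# format and field names are illustrative, not any vendor's actual schema.
feed = [
    {"indicator": "198.51.100.23", "type": "ip", "context": "C2 node"},
    {"indicator": "203.0.113.0/24", "type": "ip", "context": "bulletproof hosting"},
    {"indicator": "not-an-ip", "type": "ip", "context": "malformed record"},
    {"indicator": "198.51.100.23", "type": "ip", "context": "duplicate"},
]

def to_block_rules(feed):
    """Validate and de-duplicate IP indicators, then emit one deny
    rule per network in a generic firewall-style syntax."""
    nets = set()
    for entry in feed:
        if entry["type"] != "ip":
            continue
        try:
            nets.add(ipaddress.ip_network(entry["indicator"], strict=False))
        except ValueError:
            continue  # malformed indicator; log and skip in practice
    return [f"deny ip any {net}" for net in sorted(nets, key=str)]

for rule in to_block_rules(feed):
    print(rule)
```

Early on, an operator would eyeball the generated rules before applying them; automation of the push itself is a later-maturity step.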

As a side note, one thing I’ve noticed from many of the SIEM vendors is how they try to sell everyone on the “intel feeds” their product has and how they are “already integrated”.  The problem I have with this “integration” is that you are receiving the vendor’s feed and not one of your choosing.  If SIEM vendors were smart they would not only offer their own feeds but also open up integration with customer-created feeds generated by the customer’s own intel program.  As it stands today this integration is not as straightforward as it should be; then again, we also aren’t doing a very good job of standardizing the format of our intel despite STIX/CybOX/TAXII, OpenIOC, IODEF, etc., and the transfer mechanisms (API, JSON, XML, etc.), being around for a while now.
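Until the standards shake out, even a dead-simple customer-created feed beats nothing.  A minimal Python sketch; the schema below is entirely made up (and far simpler than STIX), just to show shipping indicators with attribution in a machine-readable form a SIEM could ingest:

```python
import json

# Invented internal indicator records with attribution attached.
indicators = [
    {"value": "evil.example.com", "type": "domain",
     "actor": "Deep Panda", "category": "nation-state", "last_seen": "2014-08-01"},
    {"value": "198.51.100.23", "type": "ipv4",
     "actor": "unknown", "category": "crimeware", "last_seen": "2014-07-15"},
]

# Wrap the records in a versioned envelope so consumers can detect
# schema changes, then serialize deterministically.
feed_doc = {"feed": "internal-intel", "version": 1, "indicators": indicators}
blob = json.dumps(feed_doc, indent=2, sort_keys=True)
print(blob)

# Round trip: what the consuming SIEM side would parse back out.
parsed = json.loads(blob)
```

A feed like this is trivially easy to generate and, more importantly, it is *yours*: it reflects your organization’s own observations rather than a vendor’s.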

To round out this example, it is also important that as we instrument our environment we track the alerts generated from our indicators back to the category or type of threat (i.e. nation-state, crimeware, hacktivist, etc.) and, if possible, back to the specific origin of the threat (i.e. Ukrainian crimeware groups, Deep Panda, Anonymous, etc.).  This is going to be key in monitoring for and reporting on threat activity so we can track historical changes and better predict future activity.  We can also use this information to re-evaluate our control set as we map the attacks by kind/type/vector and their effectiveness (i.e. was the attack blocked at delivery) or ineffectiveness (i.e. was a system compromised and only detected through monitoring), and map these against the kill chain.  This type of information translates very well into both overall security architecture and funding requests.
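A sketch of the kind of roll-up this tagging enables, again in Python with invented alert records (the rule names, categories, and origins are illustrative):

```python
from collections import Counter

# Hypothetical alerts, each already tagged back to a threat category and,
# where known, a specific origin when the detection rule fired.
alerts = [
    {"rule": "c2-beacon", "category": "nation-state", "origin": "Deep Panda", "blocked": False},
    {"rule": "exploit-kit", "category": "crimeware", "origin": "unknown", "blocked": True},
    {"rule": "c2-beacon", "category": "nation-state", "origin": "Deep Panda", "blocked": True},
    {"rule": "defacement-scan", "category": "hacktivist", "origin": "Anonymous", "blocked": True},
]

# Alert volume per threat category, and the fraction stopped at delivery.
by_category = Counter(a["category"] for a in alerts)
blocked_rate = {
    cat: sum(a["blocked"] for a in alerts if a["category"] == cat) / by_category[cat]
    for cat in by_category
}

for cat in sorted(by_category):
    print(f"{cat}: {by_category[cat]} alerts, {blocked_rate[cat]:.0%} blocked at delivery")
```

Numbers like these are exactly what feeds the management-level intel report and the control-effectiveness discussion mentioned above.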

In Summary

While this is a new and seemingly complex area for information security professionals it really isn’t that difficult to get started.  This post highlighted only a few simple examples and there are many more that could be part of your phase 1 deployment of a threat intel service.  My only parting advice would be to make sure you have a strategy and mission statement for the service, determine your threat landscape, define what you need out of your feeds and acquire them, centralize the information and utilize it by instrumenting and monitoring your environment.  While some of the vendors in this space have great sales pitches and even really cool logos, you had better understand your requirements (and limitations) prior to scheduling a bunch of vendor demos.


JSPSpy demo

by on Mar.07, 2013, under General, Incident Response

JSPSpy is an interesting tool that, once uploaded to a server that supports JSP pages, gives you a user interface on the web server itself.  Its power comes from the ability to upload/download, zip, and delete files at will on the web server, as well as spawn a command prompt.  In addition, if you are able to gain credentials to a database server serving the web application (say, through an unencrypted database connection string), it also has a database connection component, which would allow one to crawl a backend database server for information.

There is one issue with the code, which I find odd given that it was created in 2009: the SQL driver and URL for the JDBC connection are outdated.  Well, not incorrect exactly; the issue is that it targets SQL Server 2000.  Starting with SQL Server 2005 the driver class changed from com.microsoft.jdbc.sqlserver.SQLServerDriver to com.microsoft.sqlserver.jdbc.SQLServerDriver, and the URL prefix changed from jdbc:microsoft:sqlserver:// to jdbc:sqlserver://…and the JSPSpy code that is easily accessible on the internet still carries the old connection string.

In addition, there are a few more UIs for crawling a SQL backend using JSP floating around; I’ve included one of those in this demo as well.

The video demonstrates the power of JSPSpy in my demo environment consisting of Java 1.6, Tomcat 6.0, SQL Server 2005, and Windows Server 2003.  UPDATE: I re-uploaded the video, as the original didn’t convert correctly and only displayed in SD, making the text very hard to read.  The new video below is in HD.



Tap Your Network BEFORE You Have an Incident

by on Jul.07, 2012, under General, Incident Response

In responding to incidents there is one thing that stands out that I felt deserved a post and that is the topic of network taps and visibility.  While some large companies often have the necessary resources (i.e. money, time, engineers, other tools which require visibility into network traffic, etc.) to install and maintain network traffic taps or link aggregators, the number of companies I run into without ANY sort of tap or aggregator infrastructure surprises me.  While it depends on the type of incident you’re dealing with, it is quite often the case that you’re going to want, better yet need, a very good view of your network traffic down to the packet level.

If you’re not convinced, imagine this scenario: during a routine review of some logs you see traffic leaving your US organization for an IP address located somewhere in Asia.  It appears to be TCP/80 traffic originating from a host on your network, so you assume it is standard HTTP traffic.  But then you remember that you have a web proxy installed and all users should be configured to send HTTP requests through the proxy…so what gives?  At this point your only hope is to view the firewall logs (hopefully you have logging enabled at the right level), or to image the host to see what sites it was hitting and why.  But if you had packet-level inspection available, a simple query for the destination and source address would confirm whether this is simply a misconfigured end-user system, a set of egress rules on the firewall that were left behind and allow users to circumvent the proxy, or C2 traffic to/from an infected host on your network.
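To make that “simple query” concrete, here’s a toy Python version of the proxy-bypass check.  Real traffic would come from a pcap or a firewall export; the connection records, proxy address, and suspicious destination below are all invented:

```python
# Invented environment: internal proxy address and a handful of
# simplified connection records (src, dst, destination port).
PROXY_IP = "10.0.0.5"
connections = [
    {"src": "10.0.1.17", "dst": "1.2.3.4", "dport": 80},        # direct to the suspect IP
    {"src": "10.0.0.5",  "dst": "93.184.216.34", "dport": 80},  # proxy fetching a page, fine
    {"src": "10.0.1.17", "dst": "10.0.0.5", "dport": 8080},     # client talking to proxy, fine
]

def proxy_bypass(conns, proxy_ip, web_ports=(80, 443)):
    """Flag outbound web connections that neither originate from nor
    terminate at the proxy, i.e. hosts going direct to the internet."""
    return [c for c in conns
            if c["dport"] in web_ports
            and c["src"] != proxy_ip
            and c["dst"] != proxy_ip]

for c in proxy_bypass(connections, PROXY_IP):
    print(f"{c['src']} -> {c['dst']}:{c['dport']} bypassed the proxy")
```

One query like this separates the misconfigured-client case from the left-behind-egress-rule case in minutes instead of a host imaging exercise.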

Having taps, SPAN/mirror ports, or link aggregators in place PRIOR to an incident is the key to gaining visibility into your network traffic, even if you do not possess the monitoring tools today.  It allows response organizations to “forklift” a crate of tools into your environment and gain access to the network traffic they need to begin the investigation.  The main benefit of tapping your infrastructure prior to an incident is that you don’t need to go through an emergency change control at the start of the incident just to get these taps, SPAN, or aggregators installed.  This is also technology that your network team may not be familiar with configuring, installing, or troubleshooting.  So setting up your tapping infrastructure up front and being able to test it under non-stressful conditions is preferred.  That being said, it is also important to remember that there are pros and cons on how you pre-deploy your solution, both in terms of technology and tap location.

A couple of questions should be answered up front when considering how to approach this topic:

  1. Can our current switching infrastructure handle the increased load due to the configuration of a SPAN or mirror port?
  2. Will we have multiple network tools (i.e. Fireeye, Damballa, Netwitness, Solera, DLP, etc.) that need the same level of visibility?
  3. If we tap at the egress point what is the best location for the tap, aggregator, or SPAN?
  4. Do we know what traffic we are not seeing and why?


Taps vs. Link Aggregators vs. SPAN/mirror

The simplest way to gain access to network traffic is to configure a switch, most likely one near your egress point, to SPAN or mirror traffic from one or more switch ports/VLANs to a monitor port/VLAN which can be connected to the network traffic monitoring tool(s).  The downside of SPAN ports is that you can overwhelm the port and/or the switch CPU depending on your hardware.  If you send three 1G ports to a 1G SPAN port, and the three 1G links are all 50% saturated at peak, you will drop packets on the SPAN port shortly after you surpass the bandwidth of the 1G port (oversubscription).  The safest way to use a SPAN in this case is to mirror a single 1G port to a 1G mirror port.  Also consider how many SPAN or mirror ports are supported by your switching hardware.  Some lower-end model switches will only support a single mirror port due to hardware limitations (switch fabric, CPU, etc.), while more expensive models will support many more SPAN ports.  I’m not going to get into local SPAN vs. RSPAN over L2 vs. ERSPAN over GRE…that is for the network engineers to figure out.
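The oversubscription arithmetic is worth making explicit.  A tiny Python sketch of the three-ports-at-50% example (this uses the simple aggregate math above; a full-duplex mirror can be even worse, since both rx and tx are copied to the SPAN port):

```python
def span_load(link_gbps, utilization, span_gbps=1.0):
    """Sum peak traffic mirrored to the SPAN port and compare it
    against the SPAN port's own capacity."""
    aggregate = sum(l * u for l, u in zip(link_gbps, utilization))
    return aggregate, aggregate > span_gbps

# Three 1G ports at 50% peak utilization mirrored to a single 1G SPAN:
agg, oversubscribed = span_load([1.0, 1.0, 1.0], [0.5, 0.5, 0.5])
print(f"{agg:.1f} Gbps toward a 1 Gbps SPAN port -> dropping packets: {oversubscribed}")
```

1.5 Gbps of peak traffic into a 1 Gbps monitor port means dropped packets at exactly the moment, peak load, when you most want complete capture.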

Passive and active taps can alleviate some of the issues with dropped packets on a switch SPAN as they sit in-line to the connection being tapped and operate at line speed.  The drawback is they may present a single point of failure as you now have an in-line connection bridging what is most likely your organization’s connection to the rest of the world.  Also, keep in mind that passive taps have two outputs, one for traffic in each direction so you’ll need to ensure the monitoring tools you have or plan to purchase can accept this dual input/half duplex arrangement.  Active taps on the other hand are powered so you’ll want to ensure you have redundancy on the power supply.

The last type of tap isn’t really a tap at all, but a link aggregator, which allows you to supply inputs from either active/passive taps or switch SPAN ports, which are then aggregated and sent to the monitoring tool(s).  The benefit of an aggregator is that it can accept multiple inputs and supply multiple monitoring tools.  Some of the more expensive models also have software or hardware filtering, so you can send specific types of traffic to specific monitoring tools if that is required.

Last but not least are the connection types you’ll be dealing with.  Most monitoring tools mentioned in this post accept 1G copper up to 10G fiber inputs, depending on the tool and model.  You also need to make sure your taps and/or aggregators have the correct types of inputs and outputs that will be required to monitor your network.  If you’re tapping the egress point chances are you’re dealing with a 1G copper connection, as most of us rarely have a need for more than 1G of internet bandwidth.  If you’re tapping somewhere inside your network you may be dealing with 1G, 10G, or fiber connections or a combination (i.e. 10/100/1000 Base-T, RJ-45 Ethernet,  1000 Base-Sx/Lx/Zx Ethernet multimode or singlemode), so keep this in mind as you specify your tapping equipment.

Location – Outside, Inside, DMZ, Pre or Post Proxy?  What About VM Switches?

Next is the issue of the location of the network tap, and the answer really depends on what level of visibility you require.  At a minimum I’d want to tap the ingress/egress points for my network, that is, any connection between my organization and the rest of the world.  But that doesn’t quite answer the question, as I still have options such as outside the firewall, directly inside the firewall (my internal segment), just after my web proxy or IPS (assuming it is in-line), or inside the proxy.

There are benefits and drawbacks to each of these options; however, I’m most often interested in traffic going between my systems and the outside world.  The answer mainly depends on your network setup and the tools you have (or will have) at your disposal.  If you tap outside the firewall you can see all traffic, both that which is allowed and that which may be filtered (inbound) by the firewall.  The drawback is both the noise and the fact that everything appears to originate from your public IP address space, as I’m assuming NAT, overload NAT, PAT, etc. is in use in 99% of configurations.  The next point to consider is just inside the firewall; however, that depends on where you consider the inside to be.  If we call it the inside interface (that which our end users connect through), then I gain visibility into traffic pre-NAT, which shows me the end user’s IP address, assuming an in-line (explicit) proxy is not being used, which would make all web or other traffic routed through the proxy appear to originate from the proxy itself.  Not forgetting the DMZ, we may also tap traffic as it leaves the DMZ segment through a tap or SPAN, which allows for monitoring of egress/ingress traffic but not inter-DMZ traffic.

Pre- or post-proxy taps need to be considered based on a few factors as well.  If it is relatively simple to track a session identified post-proxy back to the actual user or their system, and it is cheaper to tap post-proxy, then go for it.  If we really need to see who originated the traffic, and what that traffic looks like prior to being filtered by a proxy, then we should consider tapping inside the proxy.  In most situations I’d settle for a tap inside the proxy, just inside the firewall prior to NAT/PAT, and just prior to leaving the DMZ segment.  To achieve this you may be looking at deploying multiple SPANs/taps and using a link aggregator to combine the monitored traffic per egress point.

Finally, what about all the virtual networking?  Well, there are point solutions for this as well.  Gigamon’s GigaVue-VM is an example of newer software technology that allows integration with a virtual switch infrastructure.  While this remains important if we need to monitor inter-VM traffic, all of these connections out of a VM server (i.e. ESXi) must turn physical at some point and are then subject to the physical tapping technologies mentioned above.


A standard caveat for any post on network monitoring: encryption may blind your tools.  Some tools can deal with the fact that they “see” encrypted traffic on non-standard ports and report it as suspicious.  Some don’t really care, as they are looking at a set of C2 destinations and monitoring for traffic flows and volumes.  If you’re worried about encryption during a response you probably should be…and if you’re really concerned, consider looking into encryption-breaking solutions (i.e. Netronome).  Outside of the encryption limitation, after you deploy your tapping infrastructure your network diagrams should be updated (I don’t care who does this, just get it done) to identify the location, ports, and type of each component of your solution, along with any limitations on traffic visibility.  Knowing what you can’t see is in some cases almost as important as knowing what you can.

Final Thoughts

Find your egress points, understand the network architecture and traffic flow, decide where and how to tap, and deploy the tapping infrastructure prior to having a need to use it…even if you don’t plan on implementing the monitoring tools yourself.  This is immensely beneficial to the incident responders in terms of gaining network visibility as quickly as possible.  As time is of the essence in most responses, please don’t make them sit and wait for your network team to get an approval to implement a tap just to find out they put it in the wrong place or it needs to be reconfigured.

If this needs to be sold as an “operational” activity for the network team: tapping and monitoring the network has uncovered many misconfigured or sub-optimal network traffic flows, everything from firewall rules that are too permissive to cleartext traffic that was thought to be sent or received over encrypted channels.  Something to keep in mind…who knows, if you ever get around to installing network-based DLP you’re already on your way, as you’ll have tapped the network ahead of deployment.


RSA/EMC Webinar on Security Resilience

by on May.04, 2012, under General, Incident Response

I also presented on an RSA/EMC webinar on security threats and building the right controls back in January that I never posted.  The link to the event is here.


Cyber Threat Readiness Webinar – May 3rd, 2012

by on Apr.25, 2012, under General, Incident Response

I’m presenting on Cyber Threat Readiness in a webinar on May 3rd with Mike Rothman (President at Securosis) and Chris Petersen (Founder and CTO at LogRhythm).

Register Here

Most IT security professionals readily acknowledge that it is only a matter of time before their organizations experience a breach, if they haven’t already.  And, according to the recent Cyber Threat Readiness Survey, few are confident in their ability to detect a breach when it happens.

In this webcast, three industry experts will discuss the current state of cyber threats and what’s required to optimize an organization’s Cyber Threat Readiness.  Given that it’s “when” not “if” a breach will occur, would you know when it happens to you?  Attend this webcast and increase your confidence in answering “Yes”.

Featured Speakers

Mike Rothman, President, Securosis

Deron Grzetich, SIEM COE Leader, KPMG

Chris Petersen, Founder & CTO, LogRhythm



KPMG LogRhythm Webinar Replay Link

by on Sep.21, 2011, under General, Incident Response

The link here will take you to the LogRhythm webinars page where you can watch a recording of the webinar from 9/13/11.  Here is the excerpt from the webinar registration:

Detecting Advanced Persistent Threats (APTs) — Applying Continuous Monitoring via SIEM 2.0 for Maximum Visibility & Protection

KPMG’s Deron Grzetich and LogRhythm’s CTO, Chris Petersen share experiences working with clients to help detect and respond to sophisticated threats such as APTs and how continuous monitoring via SIEM 2.0 can play a meaningful role in thwarting the increasing number of high-profile data breaches occurring today.


KPMG LogRhythm Webinar

by on Sep.11, 2011, under General, Incident Response

Shameless self-promotion – I’m doing a webinar along with LogRhythm’s CTO where we’ll be talking about new malware drivers and the controls that most organizations should have in place today.


Security Management is Like an Ice Cream Sundae

by on Sep.01, 2010, under General, Incident Response

Building the foundations of a good security management program is much like building an ice cream sundae.  Not to imply that building a mature security management process is as easy, but quite honestly both have been around long enough that we have some good guidance to rely on.  Given that everybody’s tastes are different, and the same goes for their risk appetite, the programs that result from going through this process are generally similar but always slightly different.

My analogy is from the perspective of the ice cream shopkeeper, although this may work from the customer point of view as well.  I’m behind the counter and my job is to make the sundaes when a customer orders.  A customer comes in, reviews my menu, and places an order.  The key is that I have a menu.  Compare that to the “catalog” your security function has created.  If you don’t have a menu how do your customers (i.e. the business, IT, outsourced customers, etc.) know what you can provide and at what level?

1. Consider creating a security services catalog to outline the services your security organization can provide to its customers.

Next, after I have the order, I need to know where everything is in order to start making the sundae.  I need to first grab a bowl which will hold the contents of the sundae I’m about to create.  I compare the bowl to the asset management function.  I need something to base my sundae on, and having a nice solid bowl instead of one that is cracked or has holes in it will make my job easier, my customer’s happier, and probably result in fewer stains on clothing.  Now consider your “asset management” bowl.  How solid is it?  Do you know where your assets reside, who owns them, what data they hold or process, or their level of criticality to your organization?

2. Ensure asset management is mature to create a solid foundation upon which to build your security management process.

If you ordered a sundae the next logical ingredient is the ice cream…so let’s go with two scoops.  And let’s pretend one is your patch management process and the other is configuration management.  Once we have a comprehensive view of our portfolio of assets through asset management, we can ensure that these are configured securely and consistently and that they are up to date with patches.  This is my equivalent of making sure the ice cream is not spoiled by checking the expiration date on the containers, but also that my scoops are round and of the same size every time I make a sundae.  I like this from an owner’s standpoint because sick customers don’t buy much ice cream, and I also have a repeatable process to better understand pricing and profits…after all, I’m in this to make money.  So start asking yourself: is the ice cream spoiled, and if not, am I consistently scooping the right-size servings?  A better question may be, do all of my deployed (and future) technologies have a configuration standard?  Are my patching processes mature enough to ensure that all of my OSes, applications, and devices are patched in a timely manner?

3. Configuration management and patch management are key areas in a mature security management process.  Ensure that all systems are deployed following a secure and consistently applied baseline standard and that teams responsible for patching have the right processes and technologies.

To add to the above statement, many people believe configuration management is a function of security only.  For some organizations this may be true, but I’d contend that configuration management should be a function of IT.  From an operational standpoint, ensuring systems are consistently configured cuts down on change control testing, since we can test against a known configuration and our tests and back-out plans are now more accurate.  This has implications for patch testing as well.  How many times has your organization deployed a patch to 10 systems and had 2 go down because of the patch?  I’d venture to guess the 2 that went down, even after “successful” QA testing, were the result of some configuration inconsistency relative to the other 8 that had no issues.  This only hinders patch application and makes IT, whose job is mainly to keep systems available, less likely to deploy that off-cycle patch…which is ironic, since those tend to be the more critical vulnerabilities.  Finally, patch application is no walk in the park either.  We tend to lack agentless solutions that patch both the underlying OS and the application layer.

Back to my sundae.  Now it is time for the whipped cream and a cherry on top.  The whipped cream is comparable to the vulnerability management process.  It blankets the ice cream, ensuring that patch and configuration management are “covered” and that we haven’t missed anything.  My opinion is that vulnerability assessment (not management) is the check that configuration and patch management are effective.  If they aren’t, then I have holes in the whipped cream and I start to “see” the issues a layer deeper.  Think about this: how does your organization use vulnerability management, or even assessment?  As a way to make up for a lack of mature asset management?  Are you even checking configuration compliance at this point?  Do you have so many vulnerabilities “out for remediation” that you can’t keep track of the current state of vulnerabilities in your environment?  Think about how many of those are related to patch and configuration issues…what, almost all of them?

4. Vulnerability management and the assessment process should be used as a check to ensure that patch and configuration management are effective.  If you’re using this process as a gap-filler for poor asset management, patch, and configuration management then you’re doing it wrong.  You’ve probably created an unrepeatable and very heavy vulnerability management process that is ineffective (read: everyone outside of security despises you).
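One way to sketch the “check” idea: cross-reference scan findings against what the patch system claims is already deployed. Anything that shows up in both sets is a hole in the whipped cream. The host names and identifiers here are hypothetical:

```python
# Sketch of vulnerability assessment as a check on patch management: if a scan
# still finds an issue that the patch system claims was remediated, the patch
# process (not the scanner) has a gap. Hosts and IDs are hypothetical.
scan_findings = {"web01": {"MS14-021", "CVE-2014-0160"}}
patches_marked_deployed = {"web01": {"MS14-021"}}

def patch_process_gaps(findings, deployed):
    """Return, per host, findings whose fix was supposedly already deployed."""
    gaps = {}
    for host, vulns in findings.items():
        missed = vulns & deployed.get(host, set())
        if missed:
            gaps[host] = missed
    return gaps

print(patch_process_gaps(scan_findings, patches_marked_deployed))
# {'web01': {'MS14-021'}}
```

Used this way, the assessment measures the health of the upstream processes instead of becoming a remediation queue of its own.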

Let’s not forget the cherry…which is nice to have, but if you didn’t get one you wouldn’t be too upset…which in my analogy is pen testing.  I’m sure there are many “pen testers” who will disagree with that statement.  My opinion is that pen testing is a nice-to-have, and I’d challenge those who feel otherwise to explain the value of a pen test.  Given enough time, money, and effort everything is breakable: someone created it, therefore someone else will break it.  Where I do believe pen testing has value is in examining a critical application in more depth and detail than a vulnerability assessment would.  Keep in mind that VA tools only check for known vulnerable conditions, and it is possible, although rare, that new vulnerabilities are identified through pen testing.  There are also those who say “it shows the impact”…well, if you’re forced to show someone in management that there is an impact, then you haven’t done a very good job of translating technical vulnerabilities into business impacts and terms they understand.  I’m sorry to say this happens all too often, and in that case maybe a pen test is exactly what you need to bridge that gap.  I’d put it this way:

5. Pen testing is a nice-to-have.  If my asset, patch, configuration, and vulnerability management processes are mature and effective, the pen tester should be extremely bored during the assessment.  If you can’t explain the technical risk in terms of business risk (or you don’t have a risk management group) then hire a pen tester, but only as a last resort.

One caveat to all of my statements is that secure application development and strong network architecture exist.  In addition, I’m also assuming that you have defined remediation processes, the right technology and people, and some repeatable processes as well.  A stretch, I know, but I can’t cover everything in one post 🙂


Are Macs Taking Over The World?

by on Jun.05, 2009, under General, Incident Response

Kind of a misleading title, but sitting in a Caribou Coffee with 6 other people who are all using their laptops to watch movies, surf, or do work, it struck me that everyone outside the corporate walls seems to be using a Mac. Of those 6 people I see 4 MacBook Pros, 1 MacBook, and 1 lonely HP netbook. It could be that I’m on a college campus and that the Mac is the current “in” laptop to have, based on Apple’s genius marketing campaign.

The other side of that campaign is based on the misconception that OS X is more secure than Windows. In 2007 OS X had 243 total software flaws that required patching versus just 44 for XP and Vista combined (OK, mainly XP, since no one is actually using Vista). Also, the current release, OS X 10.5.7, fixes nearly 70 security flaws. One thing to keep in mind if you’re looking purely at the numbers is that OS X may ship many individual security patches where Windows ships a single one. As anyone on a Linux box who has run yum -y update recently can tell you, each component of the system requires an individual update or patch. Since OS X is built on many of these open source components, it is no wonder the numbers seem to be in Windows’ favor.

If we can assume that OS X is as flawed as, if not more flawed than, Windows, then why aren’t we seeing a barrage of attacks against OS X? I think the right question is whether this is even a viable platform to attack. If the motive of current attacks is money in the form of credit cards, bank accounts, identities, etc., then we can speculate as to why not. Ignoring the fact that OS X holds a low share of the market, if most Mac users are college students, would it even make sense to compromise a system to access a credit card with a $500 limit…one the student opened just to get a free $2 T-shirt? Or is it the fact that Windows is so easily compromised that it makes no sense to go after Macs? I’m going with the latter for now.

While I wrote this a while ago, I’m glad I held off on posting it. Since that time a study was conducted at the University of Virginia of incoming freshmen and which OS they picked for their laptops. It seems that Apple is starting to take a larger share of the higher ed. market...well, at least at Virginia. The issue around the total number of vulnerabilities in OS X was also covered today in an IBM ISS meeting I attended to discuss the findings from the X-Force 2008 Security Study.


Effective Vulnerability Management — A Preface

by on Nov.12, 2008, under Incident Response

While it’s fresh in my mind, I thought I would write a little bit tonight about implementing a sound vulnerability management program within medium- to large-sized businesses. The good folks here at Malos Ojos and I are only starting to improve upon the VM processes in our day jobs, so the following posts in this category will be a mixture of me thinking out loud: a splash of advice mixed with a dash of growing pains as we step down this well-traveled-but-not-quite-understood path.

Vulnerability management means many things to different people. Put simply, it is the act of obtaining a (semi?)-accurate depiction of risk levels in the environment, with the eventual goal of bringing those risks down to business-tolerable levels. At many companies, this translates into the following:

  1. A lone security analyst, sometime between the hours of 6pm-7am, will scan the internal/external networks with a vulnerability scanner of his/her choosing. The resulting output will be a 9MB XML or PDF file that is the equivalent of approximately 5,000 printed pages.
  2. This file is sent out via email to IT staff (print form would result in an arboreal holocaust which eventually leads to the Greenpeace reservists being called up to active duty…)
  3. IT staff member opens the attachment and notices a couple of things: many confusing words/numbers, and systems that don’t belong to them. The attachment is closed and never opened again.
  4. GOTO 1

If you look at the people, technology, and processes used to deploy the vulnerability management solution in the scenario described above, I wouldn’t rate any of the three pillars very high. Well, maybe people, since at least this company HAS a security analyst…

Some of you will also point out the laziness of the IT staff for closing the document prematurely and not asking the analyst for help making heads or tails of the report. While this is true, it is our job as security professionals to anticipate this laziness and create a report that will ease their digestion of the information. That’s why they pay us the big bucks *cough* 😀

Back to my point, I’ve sat down and mused upon what a good vulnerability management program should consist of. Here are my notes thus far:

  1. Inventory of assets
  2. Data/system classification scheme
  3. Recurring process for identification of vulnerabilities -> risk scoring
  4. LOLCats
  5. Remediation and validation
  6. Reporting and metrics
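Items 1 through 3 above eventually have to combine into a risk score. A deliberately simple sketch, where the weights and scales are invented for illustration rather than drawn from any standard:

```python
# Toy risk scoring: weight a finding's CVSS base score by the criticality of
# the asset it sits on (criticality comes from the inventory/classification
# steps above). The multipliers are invented for illustration.
CRITICALITY_WEIGHT = {"low": 1, "medium": 2, "high": 3}

def risk_score(asset_criticality: str, cvss_base: float) -> float:
    return round(CRITICALITY_WEIGHT[asset_criticality] * cvss_base, 1)

# The same vulnerability ranks very differently depending on the asset:
print(risk_score("high", 9.3))  # 27.9
print(risk_score("low", 9.3))   # 9.3
```

However crude, even a scheme like this beats a 5,000-page report, because it gives IT an ordering to work from rather than a wall of findings.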

The list above will be revised over time, almost assuredly as soon as I arrive at the office tomorrow morning to find that my coworkers have read this post. Because I’m a tad hungry, but more importantly because I don’t get paid by the word (Schneier, I’m looking at you), I will end the post here.

Fear not true believers, I have many more posts to come that expand on my vague aforementioned scribbles.
