

Cyber Threat Readiness Webinar – May 3rd, 2012
| 25. April, 2012

I’m presenting a webinar on Cyber Threat Readiness on May 3rd with Mike Rothman (President at Securosis) and Chris Petersen (Founder and CTO at LogRhythm).

Register Here

Most IT security professionals readily acknowledge that it is only a matter of time before their organizations experience a breach, if they haven’t already. And, according to the recent Cyber Threat Readiness Survey, few are confident in their ability to detect a breach when it happens.

In this webcast, three industry experts will discuss the current state of cyber threats and what’s required to optimize an organization’s Cyber Threat Readiness.  Given that it’s “when” not “if” a breach will occur, would you know when it happens to you?  Attend this webcast and increase your confidence in answering “Yes”.

Featured Speakers

Mike Rothman, President, Securosis

Deron Grzetich, SIEM COE Leader, KPMG

Chris Petersen, Founder & CTO, LogRhythm

 

Is Cloud-based SIEM Any Better?
| 20. April, 2012

In flipping through some articles from the various publications I read (wow, did I just sound like Sarah Palin?) I came across this comment in an article on SIEM in the cloud:

“Another problem with pushing SIEM into the cloud is that targeted attack detection requires in-depth knowledge of internal systems, the kind found in corporate security teams. Cloud-based SIEM services may have trouble with recognizing the low-and-slow attacks, said Mark Nicolett, vice president with research firm Gartner.” (http://searchcloudsecurity.techtarget.com/news/2240147704/More-companies-eyeing-SIEM-in-the-cloud)

To give some context, the article was more about leveraging “the cloud” to provide SIEM services for the SMB market, which doesn’t have the staff on hand to manage full-blown SIEM deployments, than it was about detecting attacks, but I digress…

I agree and disagree. I agree, and have said this before in my arguments for and against using an MSSP for monitoring. While the context of the article was using the cloud to host the data (and the usual data protection arguments came up), isn’t this just another case of outsourcing to a 3rd party provider and calling it cloud? Data security issues aside, it doesn’t matter whether the MSSP uses their own infrastructure or some cloud provider’s infrastructure; the monitoring service is what I’m paying for. I’ve said this before, and I’ll say it again: an MSSP is “a SOC”, not “your SOC”. They do a fair job of detecting events, but may fail to put those events into a business context that makes sense. Again, this is something you can try to get them to do, but personal experience has taught (more like biased) me to believe that it can’t be done.

But I’m also going to disagree and say that it isn’t only cloud-based SIEM providers who miss the low-and-slow attacks. I’d argue internal security teams are just as likely to miss them, based on the maturity of the monitoring and surrounding IR processes I see at organizations. I don’t mean to sound negative, but few organizations have built a solid detective capability that gets down to the level of very carefully crafted attacks which may not generate a lot of traffic and/or alerts. In addition, the alerts as defined by your SOC/IR team may not be suited to catch these attacks, and even if they are, we still need to ensure we have the right trigger sources and thresholds without overwhelming the analysts who deal with the output.

Either way, my point is that we aren’t very good at this…yet. What I often see lacking is the level of knowledge among the analysts who review the console, and even among some of the program architects who define the alerts for the “low and slow” attacks. In terms of maturity we are still struggling to get the highly visible alerts configured correctly for our environment, or to get the SIEM we purchased 2-3 years ago to do what we want it to do. Vendors are doing their part to make deployment and configuration simpler while still allowing flexibility in alert creation and correlation. But I don’t think that will get us to the level of maturity needed to identify the stealthy attacks…I do think it is going to come down to us providing “security intelligence” versus a monitoring service, but I’ll hold on to that for a future post.

To answer the question in the title: no, not yet. Again, I think what we are talking about here is just outsourcing using the “C” word, and I would argue the same points I would if I just said MSSP in place of cloud. Business context issues aside, it is better than doing nothing and may serve a purpose to fill a void, especially if the organization is small enough that they will never bring this function in house. One thing the attackers understand is that although the SMB market may not be as juicy a target as the large orgs, they still have some good data that is worth the effort…and going after it carries even less risk since they rarely have solid security programs.  So, is it better than nothing?  Sure.  Is it the correct answer today?  Maybe.  Will it detect a low and slow attack?  No, but your chances with an internal program aren’t much better today…and they need to be.

SIEM Deployments Don’t Fail…
| 10. April, 2012

Let me restate the title, SIEM deployments don’t fail.  The technology to accept logs, parse and correlate events has existed in a mature state for some time now.  The space is so mature that we even have a slight divide between SIEMs that are better suited for user tracking and compliance and some that are better at pure security events depending on how the technology “grew up”.  So the pure technical aspects of the deployment are generally not the reason your SIEM deployment fails to bring the value you, or your organization, envisioned (no pun intended).

Remember that old ISO thing about people, process, AND technology?  It seems we often forget the first two and focus too much on the technology itself.  While I’d like to say this is limited to smaller organizations, the fact is that it is not.  The technology simply supports the people who deal with the output (read: security or compliance alerts) and the process they follow to ensure that the response is consistent, repeatable, tracked, and reported.  That being said, we also seem to forget to plan out a few things before we start down the SIEM path in the first place.  This post aims to provide you with the “lessons learned” from both my own journey and what I see my clients go through, in a Q&A format.

Question 1. Why are we deploying SIEM or a log management/monitoring solution?

The answer to this is most likely going to drive the initial development of your overall strategy.  The drivers tend to vary but generally fall into the following categories or issues (can be one or more):

  1. The company is afraid of seeing their name in the paper as the latest “breached” company (i.e. is afraid of Anonymous due to their “ethicalness”, or possibly afraid of what is left of LulzSec)
  2. A knee-jerk reaction to being breached recently and the checkbook is open, time to spend some money…
  3. Had some failure of a compliance requirement (i.e. PCI, e-Banking) that monitoring solves (from a checkbox perspective)
  4. Have finally graduated from simply deploying “preventative” controls and realize they need to detect the failure (which happens more than we know) of those controls

Just as important are the goals of the overall program.  Are we more concerned with network or system security events?  Are we focused on user activity or compliance monitoring?  Is it both?  What do we need to get out of this program at a minimum, and what would be nice to have?  Where does this program need to be in the next 12 months?  The next 3 years?  Answering these questions helps answer the question of “why”.  The purpose and mission must be defined before we even think about looking at the technology to support the program.  While this seems like a logical first step, most people start by evaluating technology solutions and then back into the purpose and mission based on the tool they like the most.  Remember, technology is rarely the barrier.

Question 2. Now that we are moving forward with the program, how do we start?

The answer to this one will obviously depend on the answers to some of the questions above.  Let’s assume for a moment, and for simplicity of this post, that you have chosen security monitoring as the emphasis of the program.  Your first step is NOT to run out to every system, application, security control, and network device and point all of the logs, at the highest (i.e. debugging) level, at the SIEM.  Sure, during a response having every log imaginable to sort through may be of great benefit, however at this stage I’m more concerned that I have the “right” logs as opposed to “all” logs.  This “throw everything at the SIEM and see what sticks” idea may be partially driven by the vendors themselves or by an overzealous security guy.  I can imagine a sales rep saying “yes, point everything at us and we’ll tell you what is important as we have magical gnomes under the hood who correlate 10 times faster and better than our competition”.  Great, as long as what is important to you exactly lines up with what the vendor thinks, then go for it (joking, of course).

The step that seems most logical here is to define what events, if they occur, are most important given your organization, business, structure, and the type and criticality of data you store or is most valuable.  If we define our top 10, 20, 30, etc. and rank these events by criticality we have started to define a few things about our program without even knowing it.  First, with a list of events we can match these up to the log sources that we would need in order to trigger an alert in the system.  Do we need one event source and a threshold to trigger?  Or is it multiple sources that we can correlate?  Don’t be surprised if your list is a mixture of both types.  Vendors would love for us to believe that all events are the result of their correlation magic, but in reality that just isn’t true.  We can take that one step further and define the logs we would need to further investigate an alert as well.  Second, we started to define an order of criticality for both investigation and response.  Given the number of potential events per day and a lack of staff to investigate every one, we need to get to what matters which should be our critical or higher risk events first.
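To make this concrete, here is a minimal sketch (in Python) of what a top-“x” event catalog might look like; the event names, log sources, thresholds, and criticality values are hypothetical examples, not a prescribed list:

    # Hypothetical top-"x" event catalog: each entry links an event to the log
    # sources needed to trigger it and the sources needed to investigate it.
    TOP_EVENTS = [
        {
            "name": "Multiple failed admin logons followed by a success",
            "criticality": 1,  # 1 = highest risk, investigate first
            "trigger_sources": ["domain_controller_security_log"],
            "investigation_sources": ["vpn_logs", "proxy_logs", "host_av_logs"],
            "threshold": "5 failures then 1 success within 10 minutes",
        },
        {
            "name": "Outbound traffic to known C2 infrastructure",
            "criticality": 2,
            "trigger_sources": ["proxy_logs", "firewall_logs"],
            "investigation_sources": ["dns_logs", "endpoint_process_logs"],
            "threshold": "any match against a threat intel list",
        },
    ]

    # Sorting by criticality gives the SOC an investigation order for free, and the
    # union of trigger/investigation sources becomes the required log source list.
    for event in sorted(TOP_EVENTS, key=lambda e: e["criticality"]):
        print(event["criticality"], event["name"], "<-", event["trigger_sources"])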

One thing to keep in mind here as well is not to develop your top “x” list in a vacuum.  As part of good project planning you should have identified the necessary business units, lines, and resources that need to be involved in this process.  Security people are good at thinking about security, but maybe not so much about how someone could misuse a mainframe, SAP, our financial apps, and so on.  Those who are closer to the application, BU, or function may end up being a great resource during this phase.

And finally, events shouldn’t be confined to only perimeter systems.  If we look at security logging and are concerned about attacks we need to build signatures for the entire attack process, not just our perimeter defenses which fail us 50% of the time.  Ask yourself, if we missed the attack at the perimeter, how long would the attacker have access to our network and systems until we noticed?  If the Verizon DBIR report is any indication the answer may be weeks to months.

Question 3. I’ve defined my events, prioritized them, and linked them to both trigger log sources and investigation log requirements.  Now what?

Hate to say it, but this may be the hardest part of the process.  Hard because it assumes your company has asset management under control.  And I don’t mean being able to answer where a particular piece of hardware may be at a given moment.  I do mean being able to match an asset up to its business function, use, application, support, and ownership information from both the underlying services layer (i.e. OS, web server, etc.) as well as the application owner.  All of this is in addition to the standard tracking of a decent asset management program such as location, status, network addressing, etc.  If you lack this information you may be able to start gathering the necessary asset metadata from various sources that may (hopefully) already exist.  Most companies have some rudimentary asset tracking system, but you could also leverage output from a recent business impact analysis (BIA) or even the output from the vulnerability assessment process…assuming you perform periodic discovery of assets.  Tedious?  Yes.

Let’s assume we were able to cobble something together that is reasonable for asset management.  Using our top “x” list we can identify all of the log sources and match those up to the required assets.  Once we know all of the sources we need to:

  1. Ensure that all assets that are required to log, based on our events, have logging enabled at the correct level, and;
  2. Ensure that as new assets are added which match a log source type from our event list, they go through step 1 above, and;
  3. Ensure that the assets we do have logging to the SIEM continue to log until they are decommissioned.  If they stop logging, we can investigate why.

One client I had called this a Monitored Asset Management program, or something to that effect, which I thought was a fitting way to describe this process.  This isn’t as difficult as one may think, given that the systems logging into our SIEM tend to be noisy, so a system that goes dead quiet for a period of time is an indicator of a potential issue (i.e. it was decommissioned and we didn’t know, someone changed the logging configuration, or it is live yet has an issue sending (or us receiving) the logs).  One thing that does slip by this process is if someone changes the logging level to less than what is required for our event to trigger, thus blinding the SIEM until the level is changed back to the required setting.
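As a rough illustration of that “gone quiet” check, here is a minimal sketch; the source names, last-seen timestamps, and the quiet threshold are all assumptions for the example:

    from datetime import datetime, timedelta

    # Hypothetical "monitored asset" inventory: the last time the SIEM saw a log
    # from each source that is supposed to be reporting.
    LAST_SEEN = {
        "dc01.corp.example":    datetime(2012, 4, 10, 8, 15),
        "proxy01.corp.example": datetime(2012, 4, 10, 9, 2),
        "sap-prd.corp.example": datetime(2012, 4, 7, 23, 40),   # suspiciously quiet
    }

    QUIET_THRESHOLD = timedelta(hours=12)  # tune per source; noisy sources can be tighter

    def quiet_sources(now):
        """Return sources that have not logged anything within the threshold."""
        return [src for src, seen in LAST_SEEN.items() if now - seen > QUIET_THRESHOLD]

    # Anything returned here becomes a ticket: decommissioned without telling us,
    # a changed logging configuration, or a transport problem on the way to the SIEM.
    print(quiet_sources(datetime(2012, 4, 10, 12, 0)))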

In addition to the asset management we should test our events for correctness at this point.  We should be able to manually trigger each event type and watch as it comes into the SIEM or dashboard.  I can admit I have made this mistake in the past, believing that there is no way we could have screwed up a query or correlation so that the event would never trigger…but we did.  You should also have a plan to test these periodically, especially the low-volume, high-impact types of events, to ensure that nothing has changed and the system is working as designed.
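A lightweight way to keep those periodic tests honest is to script them; the send_test_event and alert_fired helpers below are hypothetical stand-ins for whatever injection and query mechanism your particular SIEM exposes:

    import time

    # Hypothetical helpers -- replace with your SIEM's actual injection/query hooks.
    def send_test_event(event_name):
        """Inject a synthetic log record crafted to satisfy the alert's trigger logic."""
        ...

    def alert_fired(event_name, since):
        """Return True if the SIEM raised the expected alert after 'since'."""
        ...

    def test_alert(event_name, wait_seconds=300):
        start = time.time()
        send_test_event(event_name)
        time.sleep(wait_seconds)  # give the correlation rules time to run
        return alert_fired(event_name, since=start)

    # Run this on a schedule, prioritizing low-volume / high-impact alerts, and
    # raise a ticket when a rule that should have fired stays silent.
    for name in ["failed_admin_logons_then_success", "outbound_c2_match"]:
        print(name, "OK" if test_alert(name) else "DID NOT FIRE")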

Question 4. To MSSP, or not to MSSP, that is the question.  Do you need an MSSP and if so what is their value?

This is also a tough question to answer as it always “depends”.  Most companies don’t have the necessary people, skills, or availability to monitor the environment in a way which accomplishes the mission we set for ourselves in step 1.  That tends to lead to the MSSP discussion of outsourcing this to a 3rd party who has the people and time (well, you’re paying for it so they better) to watch the events pop up on the console and then do “something”.

Let me start with the positive aspects of using an MSSP before I say anything negative.  First, they do offer a “staff on demand”, which may be a good way to get the program off the ground assuming you require a 24×7 capability.  That is a question that needs to be answered in step 1 as well: if we received an alert at 3am, do we have the capability to respond, or would it be taken care of by the first security analyst on our team in the morning?  24×7 monitoring is great, assuming you have the response capability as well.  Second, they do offer some level of comfort in “having someone to call” during an event or incident.  They tend to offer not only monitoring services but may also have response capabilities, threat intelligence information (I’ll leave the value of that one up to you), and investigation.

Now on to the negatives of using an MSSP.  First, they are “a SOC looking at a SIEM console”, and not “your SOC who cares about your business”.  The MSSP doesn’t view the events in the same business context as you unless you give them that context and then demand that they care.  Believe me, I’ve tried this route and it leads to frustrating phone calls with MSSP SOC managers and then the sales guy who offers some “money back” for your troubles.  Even if you provide the context of the system, network architecture, and all the necessary information, there is no guarantee they will use it.  To give you a personal example, we used an unnamed MSSP and would constantly receive alerts from them stating that a “system” was infected as it was seen browsing and then downloading something bad (i.e. JavaScript NOOP sled or infected PDF).  That “system” turned out to be the web proxy 99.9% of the time.  To show how ridiculous this issue was, all you had to do was look in the actual proxy log record, which was sent to them, to determine the network address (and host name) of the internal system that was involved in the event.  Side note: they had a copy of the network diagram and a system list which showed each system by name, network address, and function.  Any analyst who has ever worked in a corporate environment would understand the stupidity of telling us that the web proxy was potentially infected.  Second, MSSPs, unless contractually obligated, may not be storing all of the logs you need during an incident or investigation.  Think back to the answer to question 2 for a moment, where we defined our events, trigger logs, and the logs required to further investigate an event.  What happens if you receive an event from the MSSP and go back to the sources to pull the necessary logs to investigate, only to find they were overwritten?  As an example from my past (and this depends on traffic and log settings), Active Directory logs at my previous employer rolled over every 4 hours.  If I wasn’t storing those elsewhere I may have been missing a necessary piece of information.  There are ways around this issue which I plan on addressing in a follow-up post on SOC/SIEM/IR design.
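For what it’s worth, mapping that kind of alert back to the real internal client is usually trivial if you have the proxy log record; the log format below is invented for illustration, not any particular vendor’s:

    import re

    # Hypothetical proxy log line (format invented for this example):
    # timestamp client_ip client_host method url
    log_line = "2011-10-03T14:22:07Z 10.12.34.56 wkstn-4421.corp.example GET http://bad.example/evil.pdf"

    PROXY_RE = re.compile(
        r"^(?P<ts>\S+)\s+(?P<client_ip>\S+)\s+(?P<client_host>\S+)\s+\S+\s+(?P<url>\S+)$"
    )

    match = PROXY_RE.match(log_line)
    if match:
        # The "infected system" is the workstation behind the proxy, not the proxy itself.
        print("True source:", match.group("client_host"),
              "(" + match.group("client_ip") + ") ->", match.group("url"))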

Question 5. Anything else that I need to consider?  What do others miss the first time around, or even after deploying a SIEM?

To close this post I’d offer some additional suggestions beyond the (what I feel are obvious) suggestions above.  People are very important in this process, so regardless of the technology you’re going to need some solid security analysts with skills ranging from log management to forensics and investigations.  One of the initial barriers to launching this type of program tends to be a lack of qualified resources in this area.  It may be in your best interest to go the MSSP route and keep a 3rd party on retainer to scale your team during an actual verified incident.  Also, one other key aspect of the program must be a way to measure the success, or failure, of the program and its processes.  Most companies start with the obvious metric of “acknowledged” time…or the time between receiving the event and acknowledging that someone saw it and said “hey, I see that”.  While that is a start, I’d be more concerned that the resolution of the event was within the SLAs we defined as part of the program in the early stages.  There is a lot more I could, but won’t, go into here on metrics, which I’ll save for a follow-up post.  In my next post I’ll also talk about “tiering” the events so that events with a defined response can take an alternate workflow, while more interesting events which require analysis are routed to those best equipped to deal with them.  And finally, ensure that the development, or modification, of the overall incident response process is considered when implementing a SIEM program.  Questions such as how SIEM will differ from DLP monitoring, and how SIEM complements, or doesn’t, our existing investigative or forensics tool kit, will need to be answered.
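As a rough sketch of the difference between “acknowledged” time and resolution against SLA, assuming you can pull opened/acknowledged/resolved timestamps out of a ticketing system (the incident data and SLA values below are invented):

    from datetime import datetime, timedelta

    # Invented incident records: (opened, acknowledged, resolved, severity)
    incidents = [
        (datetime(2012, 4, 2, 9, 0),  datetime(2012, 4, 2, 9, 20), datetime(2012, 4, 2, 15, 0), "high"),
        (datetime(2012, 4, 3, 1, 0),  datetime(2012, 4, 3, 7, 45), datetime(2012, 4, 4, 11, 0), "high"),
        (datetime(2012, 4, 5, 13, 0), datetime(2012, 4, 5, 13, 5), datetime(2012, 4, 5, 14, 0), "low"),
    ]

    RESOLUTION_SLA = {"high": timedelta(hours=8), "low": timedelta(hours=24)}  # assumed SLAs

    ack_times = [ack - opened for opened, ack, _, _ in incidents]
    print("Mean time to acknowledge:", sum(ack_times, timedelta()) / len(ack_times))

    # The more interesting number: how often did resolution actually land inside the SLA?
    within = sum(1 for opened, _, resolved, sev in incidents
                 if resolved - opened <= RESOLUTION_SLA[sev])
    print("Resolved within SLA:", within, "of", len(incidents))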

Conclusion

To recap the simple steps presented here:

  1. Define your program with a focus on process and people as opposed to a “technology first” approach
  2. Define the events, risk ranked, that matter to your organization and link those to both the required trigger log sources as well as logs required to investigate the event
  3. Ensure that the required logs from the previous step are available, and continue to be available to the SIEM system
  4. Consider the use of an MSSP carefully, weighing the benefits and drawbacks of such an approach
  5. Lots of other items in terms of design, workflow, tracking and the like need to be considered (hopefully I’ll motivate myself to post again with thoughts on SOC/SIEM/IR design considerations)

While I think the list above and this post are quite rudimentary, I can admit that I made some of the mistakes I mentioned the first time I went through this process myself.  My excuse is that we tried this for ourselves back in 2007, but I find little excuse for larger organizations making these mistakes some 5 years later.  Hopefully you can take something from this post, even if it is to disagree with my views…I just hope it encourages some thought around these programs before starting on the deployment of one.

Ethically teaching Ethical Hacking?
| 21. November, 2011

So I’m starting to get a little concerned over our educational institutions’ paranoia around teaching ethical hacking skills to information security students.  To start, I ran a search to find universities or colleges that offer an ethical hacking course as part of a degree program and was quite surprised to see that USC, Johns Hopkins, and the University of Colorado at Boulder offer this, just to name a few.  But as I expanded my search I came across a presentation written by Gail Finley, who was a faculty member at Hampton University in 2009, titled “Just Say No to Teaching Ethical Hacking”, link is here.  Interested in the title and always willing to read something for a laugh, I opened the presentation.  Dispensing with the junk in the front of the presentation, I finally got to the meat of the argument of why we should not teach this class to students in a university or college setting.

Of the 3 reasons presented I could only partially agree with one of them.  I agree it is a liability if the university or college supplies the tools and systems to use in a lab environment and is unable to sufficiently lock these systems down so they couldn’t be used to attack other networks on the internet.  I agree as long as Gail could tell me that they prohibited the installation of 3rd party applications on all school systems with an internet connection.  If she couldn’t, then why would it matter…don’t want me running nmap in a lab environment?  Well, I’ll just go install it somewhere else and run it.  Or, Gail, did the university disallow students from “plugging in” their laptops and netbooks?  If not, then this point doesn’t hold up.

Now on to the 2 reasons I actually disagree with.  First, Gail mentioned a concern about teaching a “dangerous skill” to students who may be unable to make the correct ethical or moral decisions on how to use their newly acquired skill.  Isn’t that true at any age?  She mentioned that “some may consider hacking as a prank”…again, that is as true for a self-taught 12 year old as it is for a person in their 80s today.  I’m not sure why age matters given the range of ages of the students attending college today.  In fact, I’d say their moral compass is far more likely to be “developed” than, say, a high school student’s, just based on life experience.  Then again, I was an engineering major and received a B- in psych, so what do I know?

Second, and related to the “dangerous skill”, is a concern that “some students have a background that would make them unsuitable for such a class”.  Really, is the student population heavy on ex-con hackers trying to live a reformed life?  Could it be a comment related to the ethnic mix at an inner-city university?  Who knows?  Only Gail knows.  My sense is that Gail is trying to say some of the students, although good students, are predisposed to a life of crime and this would only act as an enabler.  To that I would answer the same as above…if you don’t teach them and they want to learn, they will teach themselves.  Some of the best people in the field of pen testing and ethical hacking never went to college or didn’t graduate.  Point being, if you want to use this skill to commit crimes you’d be better off skipping the high tuition of a university course and teaching yourself.  When I started in this game there was one book, “Hacking Exposed Volume One”, and a bunch of IRC channels where you could learn.  Add Google and 10 years and you can teach yourself anything, including ethical hacking or basket weaving if you so choose.

Now a few years have passed since Gail wrote and gave this presentation, and I’m wondering if she still feels the same.  She didn’t have the opportunity to witness the lulz of LulzSec…BTW Gail, how many of the people associated with LulzSec do you think learned their skills in a college course?  You could always answer “none, because we won’t teach them” which would make me laugh.

So to my question in the title, can we ethically teach ethical hacking?  Yes.  Part of teaching a course like this entails instilling a sense of ethics and responsibility in the students.  If you read any “ethical hacking” book flip to the first chapter…no the one after the one about the certification test…there.  It is probably something on ethics and a brief intro to the laws related to computer crimes.  I’m not saying this stops someone from committing crimes once they know how to use certain tools…but I can also tell you that there is no way one or two college courses could condense and convey the knowledge required to be a hacker of the skill level required to start your own underground cybercrime ring.  My view is that the student is going to use their skill for good, or evil, or something in between.  In the end that isn’t up to us…and all we can do is hope.  And I honestly do believe that we are doing a disservice to our industry if we can’t, and don’t, teach people this offensive skill.  Some of the most well defended networks I’ve come across were designed by folks who truly understand offense as much as defense.  And if I had one message to the institutions of higher education…get over it and start teaching your students the skills that make them valuable and worry less about teaching the “wrong” students.

MIRCon 2011 Wrap-Up
| 14. October, 2011

So I’m on a kick now of attending conferences again and happened to be close by at a client at the same time Mandiant was holding its annual incident response conference, MIRCon, in Alexandria, VA. This was only the second annual conference, however like DerbyCon you’d never know it was fairly new based on the speakers and the quality of the content. I did take notes on some of the more interesting topics and wanted to share those in a post to the site. As a side note, pony tails (or as I was corrected, pwnie tails) and suits seemed to be all the rage at this conference. Don’t think I’ll be able to grow my hair out in time for next year, so I guess I’ll just have to feel out of place.

The keynotes for the two day conference included Richard Clarke and Michael Chertoff. In the first keynote, Clarke used the CHEW acronym to describe the current set of threats, meaning crime, hacktivism, espionage, and warfare. While not groundbreaking, I tend to enjoy acronyms I find funny. A few snippets that were interesting include: “we tend to over classify in the government, and sometimes we use that to hide mistakes”, and “we had the software purchased that would have caught the private who accessed the cables (now known as WikiLeaks), but it was on a shelf and not installed”. Chertoff’s message was similar in terms of describing the threat types, but he also pushed a need for better information and intelligence sharing among responder and counterintelligence groups. There is a need to understand the motives and methods of the attackers. While I do think “some” sharing occurs, it is most likely limited to DIB (defense industrial base) type orgs and the government. In the commercial sector it would be tough to ask SecureWorks CTU to share with Mandiant or the other way around. Maybe the issue is they have too much data, but it is also this intelligence which differentiates (or doesn’t) between competitors in this space. But I think the overall message of his keynote was that the intelligence (shared or not) needs to be less about technology and more about the human element of the threat. When he was talking about sharing, though, I did get a quick flash of Joseph K. Black on his MegaCommunity soap box (which hasn’t come up recently, so I assume someone new is running his Twitter account)…I actually saw his Twitter profile pic in my head and now I’m scared.

Following the keynotes were various speakers, with the sessions broken into management and technical tracks. While I wasn’t able to attend all of the tracks due to calls and client commitments, I did attend a few that were interesting. Tony Sager from NSA talked about his experiences contracting with the Red Team to perform assessments. One comment that came out of this was when he asked the Red Team if a well managed network was a harder target, and the obvious answer was yes. But when thinking about client engagements and their lack of IT management/operational maturity I couldn’t help but be discouraged. The related comment was “defense-in-depth has become a crutch”, something we do because we don’t know what else to do or can’t afford better…but it doesn’t solve the management problems of IT. And the solution of good management and inclusion of security in what IT does, as we’ve said a million times, needs to come from the top of IT and not from the infosec level. Even in orgs where this is the case I don’t see that buy-in trickle down to the staff levels, which is discouraging.

On the topic of metrics, which is always enjoyable, there was a presentation by Grady Summers on how and what to measure to track your incident response metrics. I liked the intro of “what makes good metrics”, which used the Security Metrics book by Andrew Jaquith as the list of “good” measures. The unfortunate thing is that consistency, context, and automation seem to be the biggest issues. That aside, there is a lot that you could, and probably don’t, measure and report on. Most orgs start with the most obvious of the 8 or 9 measures you could take, and that is the “time to review”. That simply measures the response to opening and acknowledging a ticket or alert. Perhaps if incidents are tracked in a ticketing system this could be pulled and reported on, but in some cases the info just isn’t tracked at all and measurement becomes very difficult and time consuming (read: not a good metric). I think we need to get here, but my concern is we have orgs still working on getting monitoring off the ground and mature to a level which identifies the alerts or events that require investigation. This approach to metrics would be great if you had a very mature monitoring and response function…sorry, just not seeing too many of those today.

Finally, there was a panel discussion on in-sourcing or out-sourcing your CIRT. While the panelists came from different size orgs and industries, the message was quite similar. IR out-sourcing is not a solid option, however augmenting your team with a 3rd party is. Internal business knowledge and direct management over the responders are required to make it a successful response function. The topic of MSSPs and monitoring came up as well, with a similar message. Either you throw it to the MSSP because you have nothing and need something now, or you need to augment staff (i.e. 24×7 monitoring). However, the message that this should also be an internal function was pretty strong. Again, as you move into monitoring your internal environment, not just the perimeter, you’re going to need people who understand the business and IT environment. MSSPs serve a purpose, but keep in mind they are “a SOC”, not “your SOC”.

All in all a good conference, and I will definitely try to make it back next year.  As another side note, Apneet Jolly was not present at this conference…I’m surprised, since I assume that’s all he does for a living given that I see him at every one.

DerbyCon 2011 Wrap-Up
| 4. October, 2011

Since DerbyCon is brand new this year, and in case you weren’t aware of what it is, I thought I’d drop some of my notes on the conference and presentations overall.  First, it is at a decent time during the year given the spacing between the various cons.  It also runs over a weekend, so even those who don’t get “approval” to go to this can simply take a day off and hit the conference from Friday through Sunday.  Louisville as a location is also great for those of us heading in from Chicago or the Midwest as driving or a cheap SWA flight makes it fairly easy to get to.  There are also plenty of hotels around the area of the conference (held at the Hyatt) in case you have points at one of the competing chains you’d like to use for a free room.  It is also right down the street from 4th Street Live and pretty well located in terms of finding food and drinks after the talks.

All right, enough about where and when it is.  Let’s get on to the talks and conference itself.  The conference features some training tracks (evenings) as well as presentations throughout the day.  The nice thing is at the end of the first day the conference starts to split into tracks and continues this way until the conference ends.  While I like that format I didn’t see a theme to the tracks like you do at Defcon, which unfortunately means I’m torn between two different talks at the same time as the content is interesting and along the same path.  The talks I did attend were very good for the most part and I took at least one new thing away from each session (a new tool, technique, thought, etc.).

In addition to the talks there were quite a few training courses.  They range from physical security and social engineering topics to Metasploit and Windows exploit development.  While these did run in the evening they also overlapped with the end of the day presentations, which would make it difficult to do both unless you go into the con knowing you’re missing talks you may like to see.  Beyond training there was a movie theatre setup playing movies all hackers love, a lock pick and hardware village, and the usual CTF competition.  There were also vendors, but the space was somewhat limited so there weren’t too many…I did enjoy the book vendors as you generally don’t get to “see” the books covering security topics in a book store anymore.

All in all you’d never guess this was a 1st year conference given the content, speakers, AV, and attendance.  I didn’t see any issues (short of the lack of space in some rooms).  Things I think would be improvements for next year:

  1. Have some “theme” to the tracks…such as exploit development, social engineering/hardware hacks, new projects/tools…just some thoughts.
  2. Training should start a day early and continue in the evenings after the presentations are over, or start two days prior and not overlap the presentations.
  3. Location is good, but the rooms are a bit tight for some of the talks.  I’ve always guessed that if you spaced the chairs out slightly (or left more standing room at the back), more people would fit.  Those chairs are closer together than airline seats and people sit every other chair for the most part.
  4. Vendors?  I’m assuming that comes along with time as the conference is just starting.  It would be nice to see more vendors to help offset the costs (or possibly pay for a larger space like the conference center down the street).
  5. Stop giving out bags.  No one wants them and they end up being thrown away.  All I need is a conference schedule and a badge to get in.  Speaking of, posting the talks, rooms, and times in a central spot would be nice as well.  It was done on Saturday by each individual room but I didn’t see the sheets up near the rooms on Sunday.

Despite all of these improvement ideas I’m definitely going back next year.

KPMG LogRhythm Webinar Replay Link
| 21. September, 2011

The link here will take you to the LogRhythm webinars page where you can watch a recording of the webinar from 9/13/11. Here is the excerpt from the webinar registration:

Detecting Advanced Persistent Threats (APTs) — Applying Continuous Monitoring via SIEM 2.0 for Maximum Visibility & Protection

KPMG’s Deron Grzetich and LogRhythm’s CTO, Chris Petersen share experiences working with clients to help detect and respond to sophisticated threats such as APTs and how continuous monitoring via SIEM 2.0 can play a meaningful role in thwarting the increasing number of high-profile data breaches occurring today.

BitCasa Encryption?
| 18. September, 2011

Art’s post below got me thinking about BitCasa and the security of the data…and it seems BitCasa’s CEO mentioned something about how they plan to protect the data in a recent interview (http://techcrunch.com/2011/09/18/bitcasa-explains-encryption/). The obvious answer is encryption, but the question is how? Note, I’m not stating this is HOW BitCasa works, simply presenting an option for how this may work.

One issue with successfully de-duplicating data is data encryption itself. For example, if I have a file and you have a file but our encryption keys are different, then the files appear completely different to the de-duplication system. It fails to identify two identical files because they no longer match. However, there is another way in which we can secure the data using the same key: derive the encryption key from the data itself. So in a new example, let’s take the file mentioned above and split it into chunks of data. Now, if I hash a chunk and use the hash as the encryption key for that chunk, I have a “secure” chunk. If I transmit the chunk across the wire and it is intercepted by an adversary, it is still secure because the adversary doesn’t know the plaintext which generated the key for encryption. Sure, depending on the size of the chunk we could be subject to brute-force attacks…so care needs to be taken to make brute-force feasible only after the data has “expired” or lost all value (you choose: years, decades, millennia, etc.). Next, I upload the chunk to the server. Thinking about de-duplication for a second, since the hash and encryption algorithms are the same for everyone (SHA-256 and AES-256 in BitCasa’s case) and the key, which is derived from two identical chunks of data, is also the same, the resulting ciphertext will also be identical. And if I see two identical chunks on the server side, I know I have a duplicate chunk and only need to store one of the two.
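Here is a minimal sketch of that idea, often called convergent encryption; this is my guess at the general approach, not Bitcasa’s actual implementation, and the chunking, nonce derivation, and chunk-ID scheme are my own assumptions (it uses the third-party cryptography package for AES):

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def convergent_encrypt(chunk):
        # Key is derived from the plaintext chunk itself (SHA-256 -> AES-256 key).
        key = hashlib.sha256(chunk).digest()
        # Deterministic nonce derived from the key so identical chunks yield
        # identical ciphertext (which is what makes server-side de-dup possible).
        nonce = hashlib.sha256(b"nonce" + key).digest()[:16]
        ciphertext = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(chunk)
        # The server stores and de-dups ciphertext by its hash; the client keeps
        # the key (or re-derives it from the plaintext) to decrypt later.
        chunk_id = hashlib.sha256(ciphertext).hexdigest()
        return chunk_id, ciphertext

    # Two clients with the same chunk produce identical ciphertext, so the server
    # only needs to store it once -- without ever seeing the plaintext or the key.
    a_id, a_ct = convergent_encrypt(b"the same chunk of file data")
    b_id, b_ct = convergent_encrypt(b"the same chunk of file data")
    assert a_id == b_id and a_ct == b_ct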

Given that I’m talking about chunks there is another layer to this system which I’m still trying to understand…the metadata. Something has to map all of those chunks to a single file if we are indeed breaking it up into smaller pieces. But that’s for another post…hopefully after BitCasa tells us more on how the system works. Also, the secret-sauce that stores “something” on the local drive needs some explanation as well.

Mi Casa, Bitcasa?
| 13. September, 2011

Recently got wind of a new startup cloud service, Bitcasa,  pieced together from some ex-Mastercard and Verisign guys.  Essentially, it is a cloud service that offers its users UNLIMITED storage.  I’ve scoured the web for more details, but they’re pretty vague at this point.  From what I can gather, it is basically Dropbox without the local syncing.  The service uses your local hard drive as a temporary cache with some patent pending mumbo-jumbo where it attempts to guess what files you will use the most.   Yea, I don’t really understand it either.

A few thoughts come to mind:

1) With the advent of other streaming cloud services (Spotify, Netflix, etc.), I would argue that the routine of buying larger and larger hard drives is a thing of the past.  I’ve already begun deleting my music and movie “backups”, and am currently at pre-2003 hard drive space levels.  Look out, Moore’s Law!

2) For the things I actually do use my hard drive for (operating system, games, applications, etc.), aren’t hard drives cheap enough now that I don’t really need cloud storage?  I can get a 1TB 7200 RPM drive right now for 50 bucks.  Now that I think about it, I probably can’t even run applications off Bitcasa anyway.

3) What happens if I don’t have an Internet connection? How do I get files if their patented guessing algorithm is wrong?

Putting on my security hat for a second, this service poses an interesting issue should it take off.  In one of my earlier posts I had guessed that the ever increasing size of hard drives would be the end of forensics.  While this may still happen, it will be a gradual, slow death.  But what if the actual coup de grace is the shift from using traditional hard drives to cloud-based storage?  Don’t get me wrong, this idea isn’t novel or groundbreaking, but what I’m trying to highlight is that instead of cloud being a “down the road technology”, the train is already in the station and will only gain momentum.  Certain host-based forensics you could probably still do, like web history and security log analysis.  But from an e-discovery perspective, what would you do if a company had made the switch to store their data using a service such as Bitcasa?  Who knows if any trace of the files exists locally, and it’s not as if they can go to the cloud vendor with a subpoena to seize the data.  Looking 2-5 years down the road, I can see most companies migrating their email infrastructure to the cloud as well.  I know Microsoft’s cloud mail solution, BPOS, comes with a master account should mail need to be retrieved for a user.  But what if Bitcasa’s “no keys to your kingdom” security model were applied at other email vendors?  I suppose corporate email and personal storage operate on two very different premises, but hey, I’ve seen crazier trends come out of this industry.

 

KPMG LogRhythm Webinar
| 11. September, 2011

Shameless self-promotion – I’m doing a webinar along with LogRhythm’s CTO where we’ll be talking about new malware drivers and the controls that most organizations should have in place today.

https://www1.gotomeeting.com/register/659315160
