San Francisco Bay Area Data Loss Prevention User Group


The wrath of risk scoring 

Mar 09, 2010 12:38 AM

Over the past several months, I've had the opportunity to speak with a lot of people about the product direction we're heading in, and I've spent a fair chunk of time discussing risk and the prioritization of security events. A fair portion of those discussions have centered on the notion of a risk score or ranking to prioritize folders and incidents for remediation.  In light of this, the following threads are particularly interesting and amusing -- a mix of skepticism about risk modeling along with all-out bluster and bashing.

I think it's important to bear in mind that many of these discussions, while dismissive of risk quantification in many ways, are focused on assessing the probability of a threat being executed rather than on prioritizing cleanup or remediation. Still, they raise the question of how folks approach risk modeling and quantification, and how we can provide better, more informed risk context.  Or whether we should even try.

One thing is for sure: there's no shortage of opinions on the matter, and I'd like to better understand all the fuss.  What approaches to risk modeling work for you, aren't so appalling, and, more importantly, actually convey relevant information with an eye toward reducing said risk? We could abandon the effort around risk quantification (although I've heard it's been an enhancement request since v2), but it seems like we need to step in this direction while providing context and flexibility without giving away the entire store.  It's either that or we're left throwing a dart at a dartboard while blindfolded and off-kilter from downing 3 Racer 5's. Ok, that's just how I model risk... er, throw darts.

Hats off to the folks at Securosis for raising the thread and prompting this post, by the way.  They generally have some good material and an interesting view on trends.  Their latest post takes a swipe at the marketing of Advanced Persistent Threat (APT) and is worth a read.  I look forward to any comments/thoughts you may have on the above.

Harold

Comments

Apr 12, 2010 06:54 AM

My thought was that incidents are, in effect, vulnerabilities that could be scored using CVSS... so if Vontu could pull in certain values from sources we define, it could compute a CVSS score for a given incident.

CVSS Metric Groups

Base Metrics

  • Is the vulnerability exploitable remotely (as opposed to only locally)?
    Was it a DAR incident?  DIM outbound unencrypted?  DIM inbound?  Symantec DLP could determine this automatically, I would think.
  • How complex must an attack be to exploit the vulnerability?
    Complexity is a bit harder, but you can define it in another scope -- if it's easier to steal someone's identity with more information about them, then more data elements (first name + last name + SSN + DOB) make the attack less and less complex.
  • Is authentication required to attack?
    Perhaps define this as DIM vs. DAR again?  If records are on a device and encrypted, then more layers of authentication are necessary to attack...
  • Does the vulnerability expose confidential data?
    This one would be fun to define -- do you have one level for PII and a second for IP, and choose, at the policy level, which category an incident falls into?
  • Can attacking the vulnerability damage the integrity of the system?
    Integrity to me means integrity of the data -- if it's an EDM hit, then it's complete; if it's a regex, it's partial.
  • Does it impact availability of the system?
    Availability could also serve the function described for complexity above, or it could be defined in some other manner... apologies, I'm just thinking of these off the top of my head.  A rough sketch pulling these mappings together follows this list.
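
To make the above concrete, here's a minimal sketch (in Python) of how an incident's attributes might be mapped onto CVSS v2 base metrics and scored. The metric values and the base-score equation come from the CVSS v2 guide linked elsewhere in this thread; every mapping rule (channel to Access Vector, element count to complexity, and so on) is just my assumption, not product behavior.

# Sketch: mapping DLP incident attributes onto CVSS v2 base metrics.
# The mapping rules are illustrative assumptions, not product behavior.

# Metric values from the CVSS v2 guide (first.org/cvss/cvss-guide.html).
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
CIA_IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def base_score(av, ac, au, c, i, a):
    """CVSS v2 base score equation."""
    impact = 10.41 * (1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

def score_incident(inc):
    """Illustrative incident-to-metric mapping per the bullets above."""
    av = "network" if inc["channel"].startswith("DIM") else "local"  # DIM ~ remote, DAR ~ local
    # More identity elements (name + SSN + DOB ...) = less complex attack.
    ac = {0: "high", 1: "high", 2: "medium"}.get(inc["data_elements"], "low")
    au = "multiple" if inc.get("encrypted") else "none"  # encryption adds auth layers
    c = "complete" if inc["category"] == "PII" else "partial"  # PII vs. IP levels
    i = "complete" if inc["match_type"] == "EDM" else "partial"  # EDM full, regex partial
    a = "none"  # availability left unmapped in this sketch
    return base_score(av, ac, au, c, i, a)

print(score_incident({"channel": "DIM-outbound", "data_elements": 3,
                      "encrypted": False, "category": "PII", "match_type": "EDM"}))  # 9.4

Swap the hard-coded mapping rules for values fed in from policies or lookup attributes and you'd get a per-incident score without touching the equation itself.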

Temporal Metrics

  • How complex is it (or how long will it take) to exploit the vulnerability?
  • How hard (or how long) will it be to remediate the vulnerability?
    If it's a Prevent incident, then remediation is available firsthand.  If it's DAR, then remediation may be a click away.  If it's DIM outbound, there may be no hope of remediation.
  • How certain is the vulnerability's existence?
    We once thought about the likelihood of a record out in the open being used in identity theft, and we equated it to the overall risk of fraud due to identity theft in general (4.32%) and due to data breaches (19.5%) (Javelin Strategy and Research, "Data Breach Notifications: Victims Face Four Times Higher Risk of Fraud," October 2009; http://www.javelinstrategy.com/2009/10/27/javelin-likelihood-of-fraud-is-over-four-times-higher-for-consumers-who-receive-data-breach-notifications/).  Our guess was that the actual percentage sits somewhere in the 0-19.5% range, probably around 4.32%, if those numbers hold up.
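
Treating those figures as a crude certainty weight, a tiny sketch (the two rates come straight from the Javelin citation above; everything else is assumed):

# Sketch: bounding expected misuse of exposed records with the Javelin
# figures cited above (4.32% general fraud rate, 19.5% post-breach).
FRAUD_RATE_GENERAL = 0.0432
FRAUD_RATE_BREACH = 0.195

def expected_fraud_cases(exposed_records):
    """Return a (low, high) bound on records likely to be misused."""
    return (exposed_records * FRAUD_RATE_GENERAL,
            exposed_records * FRAUD_RATE_BREACH)

print(expected_fraud_cases(10000))  # (432.0, 1950.0)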

Environmental Metrics

  • Potential to cause collateral damage
    Could you calculate this as a dollar figure based on the Ponemon report estimates of cost per record in a data breach ($202, or whatever it is for your industry)?
  • How many systems (or how much of a system) does the vulnerability impact?
    You could think of this in many "how many" situations -- how many customers?  How many lines of business?  Etc.
  • Security Requirements (CR, IR, AR)
    This one hinges on being able to feed in variables (confidentiality | integrity | availability) related to the affected system... I'm not sure how to deal with this one offhand, but perhaps these numbers could be configured in the admin interface based on whichever variables are deemed important?  A quick sketch of both ideas follows this list.
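
For the collateral-damage and security-requirements pieces, a quick sketch. The $202 figure is the Ponemon per-record estimate mentioned above; the CR/IR/AR weights and the geometric-mean blend are hypothetical admin-configured choices, not anything the product defines:

# Sketch: environmental adjustments. COST_PER_RECORD is the Ponemon
# per-record breach-cost estimate cited above; substitute your industry's
# figure. The CR/IR/AR weights are hypothetical admin-configured values.
COST_PER_RECORD = 202  # dollars

def collateral_damage(exposed_records, cost_per_record=COST_PER_RECORD):
    """Dollar estimate for a breach involving these records."""
    return exposed_records * cost_per_record

def environment_weight(cr=1.0, ir=1.0, ar=1.0):
    """Geometric mean of the CIA requirement weights (one simple choice)."""
    return (cr * ir * ar) ** (1.0 / 3.0)

print(collateral_damage(10000))              # 2020000
print(round(environment_weight(cr=1.5), 2))  # 1.14 for a high-confidentiality asset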

The upside to this, in my opinion, is that you could even have new "incidents" that weren't possible before, driven by what actors and their devices do over time... "scenarios," if you will.  You could group all of the CVSS scores for a device or a sender by their department/LOB and watch how those values change over time, as in the sketch below.
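
A minimal sketch of that grouping: average per-incident scores by department and month and watch the trend. The field names are hypothetical.

# Sketch: trending per-actor/per-LOB risk over time by grouping incident
# scores. Field names are hypothetical stand-ins.
from collections import defaultdict

def trend_by_group(incidents, key="department"):
    """Average score per (group, month) bucket."""
    buckets = defaultdict(list)
    for inc in incidents:
        buckets[(inc[key], inc["month"])].append(inc["score"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

incidents = [
    {"department": "Finance", "month": "2010-02", "score": 7.1},
    {"department": "Finance", "month": "2010-03", "score": 8.8},
    {"department": "HR",      "month": "2010-03", "score": 4.2},
]
print(trend_by_group(incidents))
# {('Finance', '2010-02'): 7.1, ('Finance', '2010-03'): 8.8, ('HR', '2010-03'): 4.2}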

Mar 19, 2010 08:54 AM


We don't currently use CVSS for Vontu, but we do have an integration with Control Compliance Suite (CCS) that is due for release in May 2010.  This allows the automatic metadata tagging of assets based on information that is discovered on them.  The benefit is the ability to apply compliance, technical controls, and hardening policies based on the content profile of what lives on said asset. 

I think in this realm CVSS would absolutely work in terms of how these assets are handled and prioritized.  And I suppose pulling that value into Vontu based on targeting would also shift the profile of the incident in terms of remediation.  That's pretty cool.  It would likely be based on a lookup into a custom attribute, in my opinion (at least initially), but it could then be used in conjunction with the other parameters we're pulling in.  Hmmm... I will create an enhancement request on this one.  A rough sketch of the idea is below.
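
A minimal sketch of what that lookup could look like: an external asset score (a stand-in for a CCS value here) pulled into a custom attribute when the incident is created. The attribute name, the lookup table, and the default are all hypothetical -- this is not the actual CCS or Vontu integration API.

# Sketch: enriching an incident with an external asset score via a
# custom-attribute lookup. The attribute name, table, and default are
# hypothetical stand-ins, not the actual CCS or Vontu integration.
CCS_ASSET_SCORE = {"fileserver-01": 8.5, "laptop-jdoe": 3.0}

def enrich_incident(incident):
    asset = incident["target_asset"]
    incident["custom_attrs"]["ccs_asset_score"] = CCS_ASSET_SCORE.get(asset, 5.0)
    return incident

print(enrich_incident({"target_asset": "fileserver-01", "custom_attrs": {}}))
# {'target_asset': 'fileserver-01', 'custom_attrs': {'ccs_asset_score': 8.5}}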

We wrap up code on v11 in June, but it's definitely on the table for the next release.  Are there other inputs you foresee pulling in or wanting to pull in and weight around the score?


Mar 19, 2010 08:46 AM


What you state about the SAT makes sense from a standardization perspective, given that people understand that notion.  I think that level of standardization applies to Vontu if the set of policies is the same across two asset reports.  If the policies are different, then obviously there's differential risk (i.e., Policy A is more important than Policy B).

The other challenge we have around this notion of standardization is that we're incorporating external information that fundamentally differs between the two reports.  So it's not just that the test is different when you take it in April; it's as if the subject matter and the types of questions differ (the input queries are vastly different), so I still don't think you can compare Report A to Report B.

The net of the effort here is to achieve exactly what you state, a prioritization of the needles in the haystack.  The way we are approaching this is as follows:
  • Show me clusters of needles in folders (don't show me 1,000 incidents; show me that 700 of the 1,000 high-severity incidents are in one folder)
  • Add in ACL weakness (how wide open is the ACL)
  • Any spike in actual access to sensitive files that have violated policy
So, going back to the example comparing business unit X vs. Y: if unit Y has more scattered incidents and fewer people, it may have a top score of 80 due to normalization of the scores and lower access utilization (fewer people working = fewer accesses on sensitive files), whereas X scores 95 because of a high concentration of incidents and heavy access activity.  Perhaps the people in X just work harder and have access to more sensitive data.  In this sense, standardization loses its value, but relative to prioritizing remediation items, it works within each filter set.  A sketch of this kind of composite follows.
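
Here's a minimal sketch of such a composite folder score built from the three inputs above. The weights and the 0-to-1 normalization are made-up illustrations, not the shipping model:

# Sketch: a folder-level score from the three inputs above. The weights
# and normalization are illustrative assumptions, not the shipping model.
def folder_risk(high_sev, total_incidents, acl_openness, access_spike,
                w_conc=0.5, w_acl=0.3, w_access=0.2):
    """acl_openness and access_spike are assumed pre-normalized to 0..1."""
    concentration = high_sev / total_incidents if total_incidents else 0.0
    raw = w_conc * concentration + w_acl * acl_openness + w_access * access_spike
    return round(100 * raw, 1)

# BU X: 700 of 1,000 incidents in one folder, wide-open ACL, heavy access.
print(folder_risk(700, 1000, acl_openness=0.9, access_spike=0.9))  # 80.0
# BU Y: scattered incidents, tighter ACL, fewer accesses.
print(folder_risk(200, 1000, acl_openness=0.5, access_spike=0.3))  # 31.0

As the post notes, the two numbers only rank folders within the same filter set; they say nothing about X vs. Y in absolute terms.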

Of course, you could always expand the filter set, and that would achieve broader standardization as well.  We've also decided to offer an option to "hide" the score in the event that people just want a basic ranking/priority, but most customers have stated that they want to see a score or number.

Thanks for all this feedback btw, it's invaluable.


Mar 17, 2010 12:51 PM

As a follow up, speaking of a way to "standardize" the risk score across DIM/DAR/DIU, would CVSS work?

http://www.first.org/cvss/cvss-guide.html

We use this scoring methodology throughout our vulnerability operations, and standardizing on it would help us, but I'm not sure whether that holds across the Vontu customer base.

Mar 16, 2010 09:16 AM

Your example with AP Calculus/English is understood, but I'd frame it another way.  Two students take the SAT this year, one this month and one the next.  The tests will be different, but the scoring system has been standardized so that one student's score can be compared to the other's, apples to apples.  Everyone knows there are problems with doing things this way, but it's the best system that we can come up with right now.

Based on the data that the system collects, how can you use it to assign a standardized score to a given incident, a given file, a given device, or a given user?  Vontu is the SAT and we are the college admissions counselors.  Given all of the data inputs we have, how do we assemble them (in a SIEM or another correlation system) to make decisions that we'd stand by to the rest of the college?

I understand this is no easy task, but Vontu is getting to be so good at finding the needles in the haystack that not only do we need to prioritize the needles once we've found them, we also have to be able to describe where we tend to find them and the effects of time.

Mar 15, 2010 10:41 PM

"Quite obviously you can't abandon efforts to quantify risk" -- Oh, yes we can always abandon.  I don't think we would in this case (in fact, I know we won't because we're too far down this path), because I do believe that despite the challenges in arriving at a score that can be reasoned about, that "something" is better than nothing, and we are providing enough flexibility (i.e. weighting) to make the model adaptable to different customer situations and needs.


That said, you raise a very interesting point about external feeds into Vontu vs. external feeds out, and about where risk quantification should live.  On the one hand, I don't think Vontu is equipped to receive the number of inputs required to create the risk score/profile that you suggest.  That's definitely the role of a SIEM or another correlation tool that makes assessments based on multiple sources.

On the other hand, having Vontu aggregate information and pull other inputs in comes with its own caveats -- namely, what is the associated risk of the external event?  I suppose we answer this with the weights we place on specific events and sources, which are essentially a reliability rating of sorts (see the sketch below), but the piece we grapple with most internally is that, regardless of weights, one is still comparing apples and oranges across multiple risk sources and events.
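
One way to express that weighting idea, with hypothetical source names and reliability values:

# Sketch: reliability-weighted blend of per-event scores from multiple
# sources. Source names and weights are hypothetical.
SOURCE_RELIABILITY = {"vontu_discover": 1.0, "proxy_logs": 0.6, "db_monitor": 0.8}

def weighted_event_score(events):
    """Weighted average of event scores, weighted by source reliability."""
    num = sum(SOURCE_RELIABILITY[e["source"]] * e["score"] for e in events)
    den = sum(SOURCE_RELIABILITY[e["source"]] for e in events)
    return num / den if den else 0.0

print(weighted_event_score([
    {"source": "vontu_discover", "score": 9.0},
    {"source": "proxy_logs", "score": 4.0},
]))  # 7.125

The weights paper over the apples-and-oranges problem rather than solve it, which is exactly the caveat above.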


Here's one example: we were discussing the risk model that we have (centered on Discover incidents) and the fact that, given the report filters applied, a user cannot compare report A to report B.  If the report filters for A and B are different, then the underlying assets, information, and events are different, so the risk score for Report A has no relationship whatsoever to the risk score for Report B.  Most DLP users can make sense of this, but conveying it to the line of business may lead BU A to say, "hey, our highest risk score was a 73; we're way better than BU B, who had a risk score of 92."

It's the equivalent of saying that I got a 99 in English (probably never happened in my life) and you got a 91 in AP Calculus, and therefore I scored better than you.  The comparisons don't work, and the more variables we bring into play across the different DLP threat vectors (DIM, DAR, DAE), the more "wonky" and subject to scrutiny the scores will be.


Perhaps the answer is to punt to the SIEM, but I believe the converse challenges of weighting a DLP score will apply there, and I still think there's value in quantifying, as you state, "which one of the 1000 incidents I should deal with first."  I think we're at a first step, but we want to get to where we should be as fast as possible.  The net net is simplified remediation.

Mar 11, 2010 04:04 PM

Quite obviously you can't abandon efforts to quantify risk, but the approach you take is wholly dependent on the inputs you're planning to include.  It seems to me that you would need feeds from sources external to Vontu to adequately estimate the risk of something, because devices need to be connected to users, who are in turn connected to a history.  You'd have to revamp the severity system and be able to rank the value of policies and statuses.  Our correlation/risk quantification system takes inputs from proxies, agent-based solutions (DLP, database monitors), and network solutions (Vontu Network Discover/Monitor), and it still isn't adequate.  All of this is so interconnected with the workflow of these products that our system only works with our installations.

But then again, that's a perfect-world view.  Simply knowing which one of 1,000 incidents I should deal with first isn't a bad start -- nor is knowing which of my devices seems to have the most PII.  The system doesn't have to be perfect, just better than it is now.  In our world, "better" means more flexible.  In that regard, Vontu seems to be headed in the right direction...
