Over the past several months, I've had the opportunity to speak with a lot of people about the product direction we're heading in, and I've spent a fair chunk of time discussing risk and the prioritization of security events. A fair portion of those discussions have centered on the notion of a risk score or ranking to prioritize folders and incidents for remediation. In light of this, the following threads are particularly interesting and amusing -- a mix of skepticism about risk modeling along with all-out bluster and bashing.

I think it's important to bear in mind that many of these discussions, while dismissive of risk quantification in many ways, are focused on assessing the probability of a threat being executed rather than on prioritizing cleanup or remediation. Still, they raise the question of how folks approach risk modeling and quantification, and how we can provide better, more informed risk context -- or whether we should even try. One thing is for sure: there's no shortage of opinions on the matter, and I'd like to better understand all the fuss. What approaches to risk modeling work for you, aren't so appalling, and, more importantly, actually convey relevant information with an eye toward reducing said risk?

We could abandon the effort around risk quantification (although I've heard it's been an enhancement request since v2), but it seems like we need to step in this direction while providing context and flexibility without giving away the entire store. It's either that or we're left throwing a dart at a dartboard while blindfolded and off-kilter from downing 3 Racer 5's. OK, that's just how I model risk... er, throw darts.

Hats off to the folks at Securosis for raising the thread and invoking the post here, by the way. They generally have some good material and an interesting view on trends. Their latest post bashes the marketing of Advanced Persistent Threat (APT) and is worth a read.
Looking forward to any comments/thoughts you may have on the above.

Harold
It was my thought that incidents were in fact vulnerabilities that could be scored using CVSS... so if Vontu could pull in certain defined values from sources we define, it could compute the CVSS score for a given incident, using the standard metric groups:

- Base Metrics
- Temporal Metrics
- Environmental Metrics
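To make the proposal concrete, here is a minimal sketch of the CVSS v2 base-score arithmetic applied to an incident. The metric values in the example (network vector, low complexity, complete confidentiality impact) are illustrative assumptions, not values Vontu actually emits; the weights come from the CVSS v2 specification.

```python
# Sketch: CVSS v2 base score from the six base-metric weights.
# Metric values below are illustrative, not product output.

def cvss_v2_base(av, ac, au, c, i, a):
    """Compute a CVSS v2 base score from numeric metric weights."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# Hypothetical incident: AV:N (1.0), AC:L (0.71), Au:N (0.704),
# C:C (0.660), I:N (0.0), A:N (0.0) -- confidential data fully exposed.
score = cvss_v2_base(1.0, 0.71, 0.704, 0.660, 0.0, 0.0)  # -> 7.8
```

The same function could then be fed whatever mapped values we pull from the defined sources, with Temporal and Environmental adjustments layered on top.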
The upside to this, in my opinion, is that you could even surface new "incidents" that weren't possible before, when actors and their devices do things over time -- "scenarios," if you will. You could group all of the CVSS scores for a device or a sender by their department/LOB and watch how those values change over time.
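The grouping-over-time idea above can be sketched in a few lines. The incident tuples here are invented sample data; in practice the feed would come from the DLP system's incident export, and the department/month keys are assumptions for illustration.

```python
# Sketch: roll up per-incident scores by department and month,
# so drift in a group's average risk becomes visible over time.
from collections import defaultdict
from statistics import mean

# (department, month, score) -- hypothetical incidents
incidents = [
    ("Finance", "2010-01", 7.8), ("Finance", "2010-01", 5.0),
    ("Finance", "2010-02", 9.0), ("HR", "2010-01", 4.3),
    ("HR", "2010-02", 4.0),
]

trend = defaultdict(dict)
for dept, month, score in incidents:
    trend[dept].setdefault(month, []).append(score)

# Average score per department per month
summary = {d: {m: round(mean(s), 1) for m, s in months.items()}
           for d, months in trend.items()}
# Finance rises from 6.4 to 9.0; HR holds steady around 4.
```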
Your example with AP Calculus/English is understood, but I'd frame it another way. Two students take the SAT this year, one this month and one the next. The tests will be different, but the scoring system has been standardized so that one student's score can be compared to the other's, apples to apples. Everyone knows there are problems with doing things this way, but it's the best system that we can come up with right now.
Based on the data that the system collects, how can you use that data to assign a standardized score to a given incident, a given file, a given device, or a given user? Vontu is the SAT and we are the college admissions counselors. Given all of the data inputs we have, how do we assemble those together (in a SIEM or another correlation system) to make decisions that we'd stand by to the rest of the college?
I understand this is no easy task, but Vontu is getting to be so good at finding the needles in the haystack that not only do we need to prioritize the needles once we've found them, we also have to be able to describe where we tend to find them and the effects of time.
"Quite obviously you can't abandon efforts to quantify risk" -- Oh, yes we can always abandon. I don't think we would in this case (in fact, I know we won't because we're too far down this path), because I do believe that despite the challenges in arriving at a score that can be reasoned about, that "something" is better than nothing, and we are providing enough flexibility (i.e. weighting) to make the model adaptable to different customer situations and needs.
That said, you raise a very interesting point about external feeds into Vontu vs. external feeds out, and where risk quantification should live. On the one hand, I don't think Vontu is equipped to receive the number of inputs required to create the risk score/profile that you suggest. That's definitely the role of a SIEM or other correlation tool that makes assessments based on multiple sources.
On the other hand, having Vontu aggregate information and pull other inputs in comes with its own caveats -- namely, what is the associated risk of the external event? I suppose we answer this with the weights we place on specific events and sources, which are essentially a reliability rating of sorts. But the piece we grapple with most internally is that, regardless of weights, one is still comparing apples and oranges when it comes to multiple risk sources and events.
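A weighting scheme like the one described might look something like this. The source names and weight values are assumptions for illustration only, not product behavior; the point is that the weight acts as a crude reliability rating on each source's contribution.

```python
# Sketch: fold weighted external signals into one incident score.
# Source names and weights are assumed, not actual product sources.
SOURCE_WEIGHTS = {"dlp": 1.0, "siem": 0.7, "endpoint_av": 0.4}

def weighted_score(events):
    """events: list of (source, raw_score on a 0-10 scale).

    Returns the reliability-weighted mean of the raw scores.
    """
    total = sum(SOURCE_WEIGHTS[src] * s for src, s in events)
    norm = sum(SOURCE_WEIGHTS[src] for src, _ in events)
    return round(total / norm, 1) if norm else 0.0

# A strong DLP hit corroborated weakly by other sources:
score = weighted_score([("dlp", 9.0), ("siem", 6.0), ("endpoint_av", 3.0)])
```

Even with such weights, the apples-and-oranges problem remains: a 6.0 from a SIEM and a 6.0 from DLP are measuring different things, and the weighted mean papers over that rather than resolving it.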
Here's one example: we were discussing the risk model we have (centered on Discover incidents) and the fact that, given the report filters applied, a user could not compare report A to report B. What this means is that if the report filters for A and B are different, then the underlying assets, information, and events are different, so the risk score for report A has no relationship whatsoever to the risk score for report B. Most DLP users can make sense of this, but conveying it to the line of business may lead BU A to say, "hey, our highest risk score was a 73, we're way better than BU B, who had a risk score of 92."
It's the equivalent of saying that I got a 99 in English (probably never happened in my life) and you got a 91 in AP Calculus and therefore, I scored better than you. The comparisons don't work and the more variables we bring into play across the different DLP threat vectors (DIM, DAR, DAE), the more "wonky" and subject to scrutiny the scores will be.
Perhaps the answer is to punt to the SIEM, but I believe the converse challenges of weighting a DLP score will apply there, and I still think there's value in answering, as you state, "which one of the 1000 incidents I should deal with first." I think we're at a first step, but want to get to where we should be as fast as possible. The net net is simplified remediation.