
Yes, thanks for coming out to our talk, Moneysec Vol. 2, wherein not everything has a tidy baseball analogy. This is a riff off of the book Moneyball; we did a presentation on this last year at a couple of venues, so this is the sequel. My name is Brian Keefer, I'm a security architect with a leading SaaS security vendor, and this is Jared Pfost, who builds tools to help manage security teams.

Just a quick word to the audience: help me look out for Chris Nickerson. He said he was going to wind up and throw dollar bills at me, which is fine, seeing as gas prices have gone up 50 cents in the last two weeks, so I could use the spending money. Just give me a heads up if he comes in here and starts winding up. And here's our requisite cat picture; we were told we had to have a cat picture, and since this is baseball themed, we figured we'd go with this cute little guy. All right, my hands are washed. He hates me for putting that in there.

So, a recap from last year: we applied baseball sabermetrics to information security. That was our talk from last year, and then we spent some time out in the real world. I went and tried to implement some things in our organization, and Jared went and worked with some mature organizations.
And then there's some guy named Brad who was in some movie; somebody actually mentioned that already today, and I must have heard it a dozen times last summer. No, this isn't from the movie, it's from the book.

In case you missed it, this is how analytics changed the sport of baseball. The case study from the book Moneyball is the Oakland A's. If you're not familiar with how baseball works, it works this way: as in most sports, teams bid in a free-agent market for talent. They bid for players, and the better a player performs, supposedly the more they're worth, so they ask for more money; it's a competitive market. At the start of the 2002 season the Oakland A's had a payroll of about forty million dollars for the year. Conversely, the New York Yankees (one of their big rivals at the time, and also the richest team in baseball) had about a hundred and twenty-six million dollars to spend. So the budgets were asymmetric, and you would think from this that poor teams have no shot at winning.

But if we actually look at the data and break it down, which is what the book Moneyball did, you can see over this three-year period that the New York Yankees had 280 wins, which is a lot of wins; that's how baseball success is measured, in wins. Then you look and see the Oakland A's also had 280 wins, and you look at what their payroll was. I calculated this excluding bonuses and some other things, but it's about two hundred and fifty-seven million dollars spent over that period by the New York Yankees, and about seventy million dollars spent by the A's. So the A's spent less than thirty percent of what the Yankees did, and yet they got the same results. So why did this happen?
This is why it's a really interesting case study, and why a book was written about it. Most of the success is attributed to the general manager, Billy Beane. How did he manage it? For one thing, he didn't follow best practices. He didn't go out and do what every other baseball team did, because he figured: if I have X and everybody I'm competing with has 3X, and I do the same thing they're doing, how am I going to get better results? That doesn't make sense. So he made data-driven decisions. He hired analysts and mathematicians; one of them was Paul DePodesta, who was actually an economist from Harvard.

In traditional baseball, talent is evaluated by scouts; this is what all the other teams do, or did. Scouts are players who either didn't make it to the big leagues or got injured, but they've been around baseball for a long time. They were treated as industry experts because they knew a lot about baseball and had been to a lot of baseball games, and people thought, well, if you spend that much time around something, you must know something about it. But the value statements from scouts are largely subjective: some guy "looks really smooth in his delivery" or "really crushes the ball." Highly subjective statements.

Contrast that with what I call next-gen baseball (in retrospect a poor choice of name, because you go over to RSA and there's next-gen everything over there, and it's just the same stuff). It started in 1977, when a guy named Bill James wanted to see what really influenced the outcome of games, and what over the course of a baseball season separated successful teams from unsuccessful ones. When he started trying to analyze this, he realized that the stats being collected had been created in 1859 (that's not a typo, 1859), and that they had come over from cricket, which was a completely different game. The stats really didn't capture the events happening on the field, so there was no way to analyze the available data and tell what created success.
So, the key lessons from the book, just summarizing here (we did a whole presentation on this; I still have the slides online, so you can contact me and go look at the original). The key takeaways from the Moneyball philosophy: first, don't make emotional decisions. If you're too close to something, you'll make emotional decisions based on a small sample set: how were you feeling that day, how was this person performing that day. Decisions made that way are not likely to be accurate. Second, collect the right data: data that's actually representative of what's going on, and once you've collected it, look for correlations. The other key point is to set reasonable criteria for success. One thing the Oakland A's did differently from every other team at the time was figure out how many wins they would need per season in order to go to the playoffs, because their goal was to go to the playoffs.
Then they only spent as much as they had to in order to get to the number of wins they projected they would need; they didn't overspend.

Our premise is that this applies to information security. The problem statement: every organization is competing with attackers. Every company, anyone with any kind of digital presence, is competing with attackers, and the problem for everybody who's not in the Fortune 50 is that we don't have gigantic security budgets. How can we be effective with limited budgets?

Let's look at some information security conventional wisdom. Everybody knows you need a firewall, right? And you need antivirus, and you need to change your passwords frequently, and you've got to block that social networking: no Facebook, no Twitter while you're at work, that's all bad. You've heard all these things; they're information security "best practices." But if we get a little critical: wait a second, port 80 web traffic goes right through a firewall when you have an "allow 80 any" ACL. Antivirus misses custom malware; anybody can download Metasploit, run their payload through an encoder, and it will fly right through antivirus. Stolen passwords are used really quickly; people don't sit on them for long. And social networking is key to marketing at current companies, and to employee satisfaction. So this is a problem, and maybe we should take a different approach.

But before we even start, do we actually want a new strategy? Does management want a new strategy? Are we on board with taking a new approach? And if we are, what does winning actually look like? In baseball it's pretty clear: there are wins, there are nine-inning games, and we can figure out who's the winner and loser at the end. But how do we get started?
What does winning look like? We need to figure that out first, and once that's settled, then: how do we get started?

All right, thanks Brian. Time to roll up our sleeves, and thanks for joining me in the jumpy house here today. Before we can get into how to determine and build a data-driven security program, I want to answer the questions Brian raised. The first one: are we really ready to win? Every organization I've been with over 17 years or so fits somewhere on this spectrum. On the left, you're building up your security debt. Then there's a motivating event, whether it's an incident, a nasty audit, or a leadership change. The next stage is where you have all the backing you need, and there's more time and resources than you can possibly execute on. Then the next year, when you come back wanting more resources, that's where management says, "Hmm, all right, how much do you really need?" The key takeaway is to recognize where you sit on this spectrum, to determine how aggressive you should be in building the data-driven program we're going to walk through.

The second question Brian asked: what does winning look like? I'm sure everybody saw Moneysec version 1.0. The premise was very clear: what are candidate metrics that directly correlate with reducing incidents? It's outcome-driven. But we don't have the luxury of nine innings and highest-score-wins, so what does winning look like for us? We took a different approach and said: sometimes winning for infosec is not losing, where not losing is defined as having no unacceptable risk realized. And by the way, you live like a cost center, so you have to do this on the cheap. So if you'll forgive the image: if it feels like you're pulling teeth, fighting for time and money,
that's the reality of building a data-driven security program. Want to take this?

Yeah, so, about that. We did the 1.0 version, with these twelve candidate metrics that we said would be good to track, and I was all fired up. I thought, oh yeah, this will be a piece of cake: everybody's been doing it wrong, I'm going to implement this stuff and it's going to solve all the problems. So I started collecting the information at the organization I work at, and I realized really quickly that the information available was far from complete. I was trying to get visibility across the whole organization and measure things, and I just didn't have the data in a lot of the areas I wanted to look at. The other aspect: if I wanted to measure whether changing a control changes the outcome, I needed our incident data to establish what our historical outcomes had been, so we could compare against the future. We just didn't have it. IT hadn't been collecting incident data; they had just been reformatting laptops. We had no idea what the historical incidents looked like, so we didn't really have the ability to measure what was helping in my organization.

Given that reality, we're now one year smarter, and clip art tells us we may need to take a crawl-walk-run approach, which is really how this talk is organized. Crawl: we cherry-picked the original Moneysec metrics to find the ones that are easiest to collect. Walk: pilot the process of setting a target and measuring actual performance against it, then use that to demonstrate value so you can get more time and resources, uncover more rocks, and get more data. Then the third phase, our run icon:
once we have some history, how do we optimize those targets? Are we spending too much or too little, based on our historical data?

So let's get going, starting with the easy, and I put "easy" in quotes; even Brian's anecdote shows why. It starts with this: you have to be able to mine the data you already have. I'm still amazed at how many of the organizations I work with don't take advantage of the gold they're sitting on top of. It starts with the incident data, in two classifications. The first is from a business-impact standpoint: use tiers, from incidents that hit executive management, to middle management, down to all the things that stop at the help desk, and quantify those occurrences; you'll need that data in the future. The second classification is the threat-action type: take the VERIS approach from the Verizon Data Breach Investigations Report and understand what's going on in your environment from a technical standpoint as well as an impact standpoint. Once we have that baseline, we can start tracking progress against it.
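To make that two-way categorization concrete, here's a minimal sketch of the tally in Python; the tier and action labels are invented placeholders, not a prescribed taxonomy:

```python
from collections import Counter

# Hypothetical incident records: (business impact tier, threat action type).
# Substitute your own tiers and a VERIS-style action taxonomy.
incidents = [
    ("help_desk", "malware"),
    ("middle_mgmt", "phishing"),
    ("executive", "stolen_creds"),
    ("help_desk", "malware"),
]

by_impact = Counter(tier for tier, _ in incidents)
by_action = Counter(action for _, action in incidents)

print("Occurrences by business impact:", dict(by_impact))
print("Occurrences by threat action:", dict(by_action))
```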
Our first Moneysec metric was around applications. If you do in-house development, what's the easiest thing to track progress with? It doesn't get much easier than production application vulns: either an incident is going to find them or, hopefully, your assessment team is, and it's a great revelation. You've demonstrated the complexity of exploiting the vulnerability; you just need to complete the risk statement: what is the asset, the action, the agent, and the impact? Then you've got a well-formed risk statement and you're ready to effect change in the organization, especially with targets for how often these should occur.

Now, the passwords area really needs to come with an asterisk: it has to pass the "so what?" test. Whether you're worried about consumer creds or back-office creds, make sure that when you find and measure things in this area, they relate to something that's important. Take the Verizon DBIR's top threat actions: number four was default and blank passwords, and number seven ("let me guess," pun intended) was easily guessed and brute-forced passwords. Again, this is something the security team can control, and it's easy to audit; you don't have to rely on somebody else to go get the information. Just make sure the story resonates with management.

And my favorite in the easy category is scan vulnerabilities. We're starting to see more of this as a metric; it came up yesterday with the Department of State and their continuous monitoring process.
They've caught on: here's something we can automate, track over time, aggregate, and use to show performance. They've taken it a couple of steps further, and it's wonderful progress. I'm not just interested in counts of vulnerabilities identified, though; let's take it another step and show how old those vulnerabilities are, based on severity and the assets they're associated with. We'll get into that a bit more.

Let me keep talking about targets, because I want to get a solid foundation laid right away. How many here have at least one metric in their organization? Maybe a quarter. Of those who measure, how many have a defined target you compare your actual against? Nobody, you told me. So, in my humble opinion, a metric without a target is a waste of time, for two reasons. First, the process of defining the target is what drives acceptable risk. The target is defined by the security group together with the control owner or the business owner, depending on the metric: where do we expect to be for the four metrics I just talked about and the eight we're going to cover? By negotiating what that expectation should be, you're implicitly defining acceptable risk for that control area. The second benefit of a target is that it simplifies reporting; it gives you the eye candy. Take this little rainbow chart; I even took the labels off. For the first six months we were performing above expectations, then something happened and we were below, then we got back up. It doesn't matter what it is; there's a story there to communicate, and the visual helps tell it, because all we care about is whether we're performing above or below the target. That riffs off what Martin was saying. And the other thread is that going for a hundred percent secure, totally secure, is a fallacy, a flawed assumption, and I think
that's exactly what this illustrates. It's very, very powerful.

All right, back to the easy. I want to expand on scan vulnerabilities a bit. What we're talking about isn't just counts of vulns but time to live: how long do they live in your environment, and can you set policy based on severity and the environment? Here we have a histogram: so many vulns, and this is how old they are. It really changes the conversation when you can talk with your business owners and say, hey, let's understand this: why are these vulnerabilities on this asset class past the SLA we pre-negotiated with our operations team? The goal is to take the emotion out of it and have a data-driven conversation about these vulns. Is there a process breakdown? Was there a testing glitch? Do we just suck? What's the underlying issue, and why are we not operating at the acceptable risk we agreed on? Maybe we need to change the target. It doesn't matter what the answer is; what matters is that it's a data-driven conversation. And in case you didn't catch it on this slide (I didn't even catch it the first time), what it's illustrating is the vulnerabilities that are present but were never accepted: the ones we've measured that are beyond our risk tolerance, over the target we set.
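As a minimal sketch of how that "beyond our risk tolerance" list falls out of the data (the SLAs and scanner rows below are assumed, not from the talk):

```python
from datetime import date

# Assumed per-severity remediation SLAs in days, negotiated with operations.
SLA_DAYS = {"high": 30, "medium": 90}

# Stand-ins for rows from a scanner export: (severity, date first detected).
findings = [
    ("high", date(2012, 1, 10)),
    ("medium", date(2011, 11, 2)),
    ("high", date(2012, 3, 25)),
]

today = date(2012, 4, 1)
# Keep only the findings older than their severity's SLA allows.
past_sla = [(sev, (today - seen).days) for sev, seen in findings
            if (today - seen).days > SLA_DAYS[sev]]
print("Findings beyond SLA (severity, age in days):", past_sla)
```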
Cool, and we've got an example here. Right, so this is an example of making it very simple. One thing I've noticed is that there have been a couple of metrics presentations, all good; there was one by Mike Lloyd yesterday, I think, with an attack map that looked great. But then it had a whole bunch of math, and I don't know about you, but I'm not that great with math, so I was thinking: okay, I can see how that was done, but I don't know how I would go and do that
with the data that I have. So I did something really simple, the training-wheels version of metrics, to illustrate this. (Just to reiterate: you don't need fancy tools.) What's this, Excel? Actually this is the Mac OS X spreadsheet program. I don't know if you can read it, but basically what I'm doing is pulling in the information I have available; I'll try to walk through it, and the camera guys can follow. Here I'm tracking how many servers are in my environment over a period of time. Hopefully you all have an inventory; if you don't have an inventory of your assets, you have to start there. Sorry, it sucks, it's hard, but you have to start there. Then we have scan vulnerabilities of medium or higher severity. Then user devices, again from your inventory, and how many missing patches are on those user devices. This goes back to what Ron Gula was talking about earlier with authenticated scans; he kind of breezed past that, so in case you didn't catch what it means: when you're using a vulnerability assessment tool, you can plug in credentials so the tool can actually log into the operating system and query which patches are installed on that system, and then it can compute the delta and figure out which ones are missing. Very important.
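In case it helps, the delta a credentialed scan computes boils down to a set difference; the patch identifiers below are made up:

```python
# "required" would come from the vendor baseline for the OS build;
# "installed" comes from logging into the host with credentials.
required = {"patch-001", "patch-002", "patch-003"}
installed = {"patch-001"}

missing = required - installed
print(len(missing), "missing patches:", sorted(missing))
```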
And then terminations: how many of your terminations happened within the time window you agreed on for your process? From there we just start computing ratios. The number of high-severity vulnerabilities divided by the number of servers in the environment gives you high-severity vulnerabilities per deployed server. It's fairly simple math; you can do it in a spreadsheet without even touching the function wizard. Then we extend that to everything we're tracking: we can do it for terminations, we can do it for devices, and so on.
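Here's the spreadsheet math written out; every count below is a placeholder for whatever your inventory and scans actually report:

```python
# Raw counts in, per-asset rates out: the same ratios as the spreadsheet.
servers = 120
high_sev_vulns = 36        # high-severity findings across those servers
user_devices = 450
missing_patches = 900      # missing patches across all user devices

print("High-severity vulns per server:", high_sev_vulns / servers)
print("Missing patches per user device:", missing_patches / user_devices)
```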
Then comes the important part, the leap: we try to figure out (at first this is just a subject-matter-expert estimation) what we think a reasonable level is for each of these, given our current resources: the current budget, the current technology, the current people. How many high-severity vulnerabilities do we think we can reasonably get down to with what we have? Whatever that is, we set it as the target for each category. This drives the discussion, like Jared was talking about, with the business units: how much risk are they willing to tolerate, what are they going to accept? And then we can draw shiny pictures. Graph it; this really isn't hard, it's the graph function in your spreadsheet, and I've never been to spreadsheet training in my life, obviously. This line here is your target, and then you can see your actual measured performance against it. Once you have the information available to collect, this very simple analysis is really not that hard.
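If you'd rather script the chart than use the spreadsheet's graph function, a sketch along these lines (assuming matplotlib, with an invented target and invented monthly measurements) draws the same target-versus-actual picture:

```python
import matplotlib.pyplot as plt

months = list(range(1, 13))
target = [0.5] * 12            # negotiated target: 0.5 high-sev vulns/server
actual = [0.6, 0.55, 0.5, 0.45, 0.4, 0.5,
          0.7, 0.65, 0.5, 0.45, 0.4, 0.35]   # monthly measurements (invented)

plt.plot(months, target, "k--", label="target")
plt.plot(months, actual, label="actual")
plt.xlabel("month")
plt.ylabel("high-severity vulns per server")
plt.legend()
plt.show()
```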
And just to add on: don't underestimate the power of this very basic visual. If somebody ever asks, "Hey, what do you guys do?", this demonstrates it. It doesn't matter what the numbers are; it demonstrates that you're working with the operations team or the control owner, you've negotiated what this control should achieve, you're verifying actual performance, and you're determining whether it's acceptable based on the business objective. I mean, wow: this little simple visual communicates a lot.

All right, ready to jump into the rest of the Moneysec metrics? You're excited? Okay. The goal, again, is data that's easy to find, from processes you control, covering as much of the spread as we could and relating it to reducing incidents. The first category is access management, and here's kind of a clever one: percentage of employees terminated within policy. The data to go get, depending on your HR system: when the employee tells the manager they're leaving, when the manager tells HR and it lands in the HR system, and when the creds are revoked. Those deltas are great data points. At every large organization I've been with, I was really surprised how long it took; the process failure was usually the manager telling HR that somebody had left, and I may or may not have been part of that problem. It's a really good metric for getting insight into the process.
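A minimal sketch of that delta math, with hypothetical dates standing in for the HR and directory timestamps:

```python
from datetime import date

POLICY_DAYS = 1  # assumed SLA: creds revoked within one day of termination

# Hypothetical pairs: (date HR recorded the termination, date creds revoked).
terminations = [
    (date(2012, 1, 3), date(2012, 1, 3)),
    (date(2012, 1, 20), date(2012, 2, 6)),    # the slow manager-to-HR case
    (date(2012, 2, 14), date(2012, 2, 15)),
]

within = sum(1 for recorded, revoked in terminations
             if (revoked - recorded).days <= POLICY_DAYS)
print("%.0f%% of terminations within policy"
      % (100.0 * within / len(terminations)))
```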
Related is percent of role access verified. This says nothing about how effective those roles are, just that for some percentage of your user base, somebody is going through the process and verifying the role; that's a great early indicator. It could be 5%, but is that the right amount, and is it being executed consistently?

On the network side (and I fully understand this probably oversimplifies): percent of critical systems monitored. We'll let your standard define which events to monitor and what the workflow behind them is; the goal here is to communicate coverage. How much coverage do we think we should have, and are we executing on that? A really cool thing I'm seeing is moving this to full packet capture and inspection: some organizations take key ingress and egress points, look for protocol anomalies or egress communication, and use that as a proof of concept to justify expanding. They use this target-based metric to say we think we need to get to 10, 20, 30 percent, as they demonstrate value by actually reducing the length and extent of incidents.

On the vendor side, if you leverage a lot of service providers, here's a good way to keep a pulse: percent assessed per policy. Of your critical service providers, how many are we assessing at the agreed frequency? And as an efficiency measure, of the findings that come out of those assessments, how many are overdue? It's similar to what we look at for scanner-based vulns; it's a governance function, making sure things get cleaned up and aren't left lying around.

On the employee side, it's very difficult to have a target and a measurement for awareness, so the approach I really like is just to identify duplicates and trends. Everybody's going to get spammed; somebody's going to click on something, maybe do some tailgating. What you want to capture is when those things happen in clumps.
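One way to flag those clumps is a simple sliding-window count; the phishing-click dates below are hypothetical:

```python
from datetime import date, timedelta

# Flag any 7-day span containing 3 or more reported events.
events = sorted([date(2012, 3, 1), date(2012, 3, 2),
                 date(2012, 3, 3), date(2012, 3, 20)])
WINDOW = timedelta(days=7)
THRESHOLD = 3

for start in events:
    count = sum(1 for e in events if start <= e < start + WINDOW)
    if count >= THRESHOLD:
        print("Clump of", count, "events in the week starting", start)
```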
Nothing invigorates a user awareness and education program like real data: "Hey, population, we saw this thing occur this many times." It makes it real for them. And then, tearing a page out of DevOps: number of emergency or unplanned changes, and percentage of changes with a regression. This one definitely requires taking your change manager out for a few pints, because you probably won't have this data yourself; you need to get access to it by hook or by crook. Again, we didn't have time to generate visuals for all of these, but it's a waste of time if you don't negotiate that target and track against it.
All right, we're on to the third phase, the run, as the rabbit icon tells us. Once you have a minimum of six, preferably twelve, months of data, go back to the folks you negotiated that target with and challenge it. Was it too low? You had an incident; that's pretty instant feedback. Or is it too high, and how much cost was associated with that? Challenge all your assertions, and then you can do exercises like this: take our server patching, and ask what happens if we go from 98 to 80. What does that mean to the business? People are actually doing this today (I'm helping somebody do it), so it's not pie in the sky; it really is the conclusion of this data-driven security process.

So here's the question; now let's look at the process of driving the answer. It's just 101 cost-benefit analysis, and one key to success in this exercise is choosing a metric with a variable cost attached. Patching is a really good one: how many people, and how long, does it take to go through a test-and-deploy cycle after Patch Tuesday, or Oracle's quarterly release, or whatever it is? Say
it takes 10 folks 5 days. We can get a rough estimate of what it would mean to the business if we took out two or three of those cycles for the year; that might be interesting from a cost perspective.

Then here's our benefit side, and this is where we use all that evidence I've been preaching about to construct a story. What does our history tell us from an incident perspective? Among our peer group, what's the performance in the detect-and-respond area that's going to contribute? And what intelligence do we have about how often these attacks occur, and whether there are specific agents we need to worry about relative to this control? Your evidence has to tell the story, not your opinion.

Then there's how you communicate that story; this is risk, right? Here are two different ways to communicate the risk story (I'd love to go into each of these risk models, but we don't have time). The point is: your evidence tells the story, and you need to choose the communication mechanism that resonates with your leadership. If your management wants the delta in risk as a probability distribution: hey, if we lower our patch threshold, here's the increase in attack frequency we'll see. On the left, for 90% of occurrences we were going to have a $13,000 event, and now we're moving to a $20,000 event; some folks like that story. On the right, we could say that lowering the patch threshold increases our frequency, using user-defined ordinal buckets: we move from the 0.3-occurrences bucket to the 0.5-occurrences bucket. If something was going to happen once every three years, maybe now it happens once every two years. The communication mechanism is not as important as the evidence that supports it; that's really the takeaway.
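Both sides of that story fit in a back-of-the-envelope sketch; every figure below is an assumption standing in for your own evidence:

```python
# Cost side: what we save by cutting patch cycles.
DAY_RATE = 600                    # loaded cost per person-day, assumed
cycle_cost = 10 * 5 * DAY_RATE    # "10 folks, 5 days" per patch cycle
savings = 3 * cycle_cost          # cut three cycles a year

# Benefit side: the evidence-driven shift in incident frequency and impact.
old_freq, new_freq = 1.0 / 3, 1.0 / 2   # occurrences per year: 0.33 -> 0.5
impact = 20000                          # dollars per event, from history/peers
added_expected_loss = (new_freq - old_freq) * impact

print("Annual savings: $%d vs. added expected loss: $%d"
      % (savings, added_expected_loss))
```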
And this is really the culmination of the Moneysec effort Brian kicked off: we start with these tiny little metrics over here, understanding the information, and we end up with an evidence-driven risk story. I think that's very cool.

So, one of the lessons, just wrapping up here, that I learned from trying to implement a more metrics-driven program in my environment was that our incident response really had to improve. I mentioned earlier that our incident response basically consisted of reformatting the machine, so we got no information from it. What I'm proposing is that incident response needs to be moved out of IT; it shouldn't be an IT function, or if it is, there should be a dedicated team or a dedicated person in IT performing it. You need to make sure you're collecting all the relevant data, because virus infections are incidents. Gone are the days when you could map out, "here are the holes in my firewall and here are the servers in my DMZ; we'll just monitor everything in the DMZ, and when the attacker attacks it'll be a buffer overflow and we'll see it." No. The way a lot of attackers get in now is through desktop clients: phishing, drive-by downloads, things like that. Your desktops are the front line in a lot of this. If you get infections on your desktops, you need to treat them as incidents and investigate them, and you need that data to evaluate your controls. If most of your incidents are on end-user systems or workstations, then you need to collect all the information related to those incidents and do root-cause analysis to figure out what caused each infection, so you can see whether it was a failed control, a non-existent control, or a control not operating at its target. That's going to guide your future controls and target-setting.
Brent Hardin had something fairly similar yesterday, saying basically the same thing: we need to use the data to support our decisions. That means looking at incidents and asking, when we do the root-cause analysis: was there a related metric, something we were tracking that relates to the cause? And if there was, was the target appropriate? Maybe we got hit by a client-side zero-day, and our target says we'll be susceptible to a client-side zero-day once every three months, and it turns out this was the first occurrence in five months. In that case the target seems appropriate, and we're operating as expected or better. There you go: that was the risk we accepted. If there wasn't a related metric, then ask: is this a total black-swan event, or is it something we really should have been tracking? If we see similar things keep happening, that's information we should be collecting and defining a target for, and then we use it to find leading indicators for types of attacks and drive our controls.
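That check is simple enough to write down; a sketch using the rates from the example above:

```python
# Compare the occurrence rate the risk statement accepted against the rate
# actually observed. Both rates here are assumed for illustration.
accepted_rate = 1.0 / 3   # accepted: one client-side zero-day per 3 months
observed_rate = 1.0 / 5   # measured: first occurrence in 5 months

if observed_rate <= accepted_rate:
    print("At or better than the accepted risk; the target holds.")
else:
    print("Worse than accepted; revisit the control or the target.")
```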
Does anybody do this today? Sorry, I could go through a whole rant on incident response, believe me. I know, I've just been playing Catholic mother up here.

So, a couple of parting thoughts. One is that people are implicitly deciding, all the time, what not to measure. Show of hands: how many of you have had a project where you thought, "Well, I could do this extra little bit where we collect a bit more information, or I could rush off and start this other project, because my manager's business objective is how many projects I finish this year"? Everybody's done that. Everybody has said, "It'd be nice if I had time for that, but I've got to go start this other project." But you're making an implicit decision there: you're not going to collect that available information, and you're not going to completely implement that control. Without realizing it, you're saying it's more valuable to complete a lot of projects than to complete them well or with thorough coverage. So be cognizant of that trade-off and make an explicit decision: "Okay, we're not going to 100% finish this project, because there's this other project we need to start; it's related to this program, and its risk manager has decided they have a very low tolerance for risk, so we need to solve that one first," et cetera. Make data-driven decisions about it.

Yeah, it really is hard not to sound like a Catholic mother up here, but that's the whole purpose of starting with data that's easy to get, and the vuln scan is my favorite. We're actually ripping through this really fast; I could do a whole rant on incident response. But enough Catholic mother; now for the Protestants. This was really great timing: just last week there was a blog post on Liquidmatrix talking about very much the same topic, that the stuff we're doing right now isn't working, and
what would a real solution look like, what should we be doing to make a dent in this problem? To me, especially given the sheer quantity of candidate metrics the post suggests, it was reminiscent of Martin Luther and the Reformation. We're just nailing something to the door here and saying: what you've been doing so far isn't working, it's not serving anyone's interest except your own, it's not helping the general populace, and here are some things I think we should be doing that would really help. So by all means (this slide isn't meant to be readable) go read that Liquidmatrix blog post; I think it was by Ben, I forget his last name, I'm sorry. Read the whole thing; the meat of it is at the end, in the things he recommends you start tracking in your environment.

How many folks have read it? Yes, that's awesome. Would somebody mind sharing what you thought of the ending? What's that? Better than the beginning? Yes, thank goodness I survived the beginning of that article. But did it inspire you to do anything? Is anybody actually tracking any of these, even without a target: are you tracking any of the statistics mentioned? Yeah, a couple of people, a nice three or so. Okay, progress. That's a good segue, actually. We've got a couple more anecdotes, but they just expand on previous topics, so we wanted to pause and see if folks want to share. Just as we have our favorite metrics, where it's fairly easy to get the information and drive an acceptable target, now's the opportunity, while we're on camera, for somebody to show off: a metric you like, or a war story you want to share. I saw some hands go up after that Liquidmatrix question.
Yeah, and in particular: is there anything you've noticed when collecting data where you then made a change to a control somewhere, and all of a sudden you see a different output in your data, different results, different outcomes?
Right, yeah. So the comment was that there was a case where you were collecting information, and you were able to show that one of the implemented controls was costing a lot without giving a huge return for the cost, so you could justify ratcheting it back. Did he give one more layer of specifics?

Right.

[inaudible audience response]

Awesome, so to summarize, and I'll just paraphrase here: there was a DLP project. They were afraid a lot of data was being exfiltrated, so they decided to implement some controls and measure how much data actually was leaving. When they looked at the hard numbers after running it for a period of time, they discovered there really wasn't much data being exfiltrated, so they didn't need to make a huge investment in solving that problem; they could deal with the risk in other ways, and you still get the checkbox. So that's a DLP
win-win
So basically, if I can summarize: making the security controls optional and seeing how many people voluntarily elect the more stringent controls. Hmm, that would be fun; I've never heard of that. Wouldn't it be fun to toy with people that way? It's like a Stanford experiment. Do you know of any?
Okay, interesting. Yeah, come on, one more before we move on.
So basically, if I can summarize there: you have two controls, one of them is essentially watching the watcher, you're monitoring its output, and you determined that the second control isn't performing at a rate that justifies its expense. Love it.
Right, yeah, that's great data. So, do you want to hit the coverage from the DBIR really quickly? I will. At the risk of belaboring the point: we've got more process anecdotes, but we thought it'd be interesting to ask, when you select your Moneysec metrics, how do they overlay onto the top threat actions from the Verizon DBIR? I'm sure you all have these tattooed somewhere. We took our high-level metrics, the vulnerability scanner outputs and (at the risk of oversimplifying again) the areas where monitoring is going to assist, threw in some application security, and you've pretty much covered the top 15 or 16 threat actions, which I believe were 86% of the records throughout the study. Again, our goal was to find the smallest set of metrics, with the easiest-to-collect data, that directly relates to incidents, and I think this visual shows that even though the Verizon report is really point-of-sale heavy, driving improvement in these areas is going to lead to a reduction in incidents. Yeah, now we just need to prove it. No pressure, guys.

So, we actually made much better time than we expected. Thank you all for sitting through what could have been a very boring academic subject on metrics; hopefully we made it interesting and, more hopefully, motivated you to go out and start collecting some of this information and setting targets for where you should be performing. Here's our contact information; feel free to follow us on Twitter or get in touch after the talk.

[Applause]