
The Fault in Our Metrics: Rethinking How We Measure Detection & Response

BSides Seattle · 47:41 · Published 2024-10
Category: Career
Style: Talk
About this talk
Allyn Stott

Your metrics are boring and dangerous. Recycled slides with meaningless counts of alerts, incidents, true and false positives… SNOOZE. Even worse, it's motivating your team to distort the truth and subvert progress. This talk is your wake-up call to rethink your detection and response metrics.

Metrics tell a story. But before we can describe the effectiveness of our capabilities, our audience first needs to grasp what modern detection and response is and its value. So, how do we tell that story, especially to leadership with a limited amount of time?

Measurements help us get results. But if you're advocating for faster response times, you might be encouraging your team to make hasty decisions that lead to increased risk. So, how do we find a set of measurements, both qualitative and quantitative, that incentivizes progress and serves as a north star to modern detection and response?

Metrics help shape decisions. But legacy methods of evaluating and reporting are preventing you from getting the support and funding you need to succeed.

At the end of this talk, you'll walk away with a practical framework for developing your own metrics, a new maturity model for measuring detection and response capabilities, data gathering techniques that tell a convincing story using micro-purple testing, and lots of visual examples of metrics that won't put your audience to sleep.
Transcript [en]

All right, well, hey y'all, thanks so much for coming to my talk. I have worked in detection and response for the last 10 years, and I've made a lot of mistakes, especially when it comes to metrics, and this is the talk I wish I had seen. Today you'll get three things: you'll get a new maturity model that'll help you describe and measure your detection and response capabilities, you'll get a framework that'll help you build new metrics, and you'll get lots of examples.

My story with metrics starts on a Monday morning. I'm only a few months into a new job and I get a message from my boss. He's like, the board of directors meeting's coming up, looking for those updated program metrics. You can see I'm new to senior leadership based on this text thread. I don't ask any questions, I'm eager to please: got it, boss. So I send the message to my new team and I ask them, hey, what have we presented in the past? And what's the response? Oh no, bad news: the last guy just made those up. But good news: I'm going to do so much better. And how many of you have had this happen, where these are the metrics that you start with? It's our starting place. Generally they're metrics that haven't been well thought out, or they've been fudged so that we can avoid new questions.

So I did what you probably did: I Googled it, or I Binged it, or, whatever's appropriate for this room, DuckDuckGo'd it, yeah. And then I just ended up copying the metrics I used at my last job, and that's led me to using a lot of bad metrics. But so what, why should we care about metrics? I know, you tell me. It's 9:00 a.m., you came to a talk about metrics, why do you care about metrics? I want them to tell me a story. Okay, you want me to tell you a story? That's how you get budgeted for more staff. Yeah, it's how you get budgeted for new

staff. Know-how? How's the business doing? What's that? Trends, find trends, yeah. Effectiveness of your controls, measure effectiveness of controls, yeah, yeah, yeah. Metrics help drive improvement. Karl Pearson, he's a late-1800s, early-1900s guy, founder of modern statistics, and he's got this quote he's famous for; if you ever write a talk about metrics, it'll be in your Google search: that which is measured improves. And when I first read it, I was like, what a great plug for metrics. But there's an implied warning in that message: what if you're measuring the wrong thing? There's also a paper that these two guys out of MIT, Hauser and Katz, wrote. It's called Metrics: You Are What You Measure, and they talk about how the more you pay attention to your metrics, the more you start to make decisions and take actions to improve those metrics. The metrics you choose, those are the ones you'll improve, and then you'll become what you measure.

Metrics also help us communicate what we do and why people should care. Edward Tufte teaches this really great course on presenting data. If you ever want to take a course about visualizing data, it's not a security class, it's not even a technology class, it's just about visualizing data, and it's one of the best courses. Edward Tufte, look him up. He's got a quote that says metrics reveal data; metrics are a tool that enable us to present the greatest number of ideas in the shortest time with the least ink in the smallest space. He's much more well spoken than I. And why? Well, let's be honest: because we need budget, we need headcount, and metrics are usually the tool we need to communicate that.

But why are security metrics hard? Why are security metrics hard? The landscape changes. People aren't attacking 100% of the time. Security is hard. Security is hard. In my personal experience, security metrics are hard because I'm a security person and I don't care that much about metrics. Here's a much less famous quote: metrics are an annoying PowerPoint I

need to update every month. That one's from me.

A bit about me: I'm a senior staff engineer at Airbnb. I work on fun things, not metrics, like enterprise security, threat detection, and incident response, and I really love my job. I live in Austin, Texas with my wife and my three-year-old son Liam, and I really love being a dad and a husband. And there's one thing that I'm really good at, as a husband, as a dad, and as a security engineer: I'm really good at making mistakes. And this is the point of the talk when I'm supposed to gain credibility with all of you, tell you about my accolades, my years of experience, but really I've just been making mistakes. So let me tell you about five of them.

The first mistake I've made is losing sight of the goal. How many of you work the alert queue, some kind of triage? Who here is on call? Who's on call right now? Yeah, all right, those are the tired people in the room. This is my 10-year anniversary of being on call. It's a short amount of time, I suppose. But for those of us that spend our days triaging alerts and responding to fires, it can be really easy to lose sight of the goal, and so we end up describing our frontline operational work with metrics like this one. Here's a metric that shows the number of security alerts per month. You've seen this metric; you probably have this metric. And if we take a closer look, we see that in the past year it looks like March and April had the most alerts. My boss will ask a question about that. And if we keep looking at it, the alerts are generally trending down. Did we do that? Or did we stop logging something in February? Did the new IPS rules come out, and I was like, nope, and I just turned them off? Alert count has become the heartbeat metric for security operations. Instead of rooting back to our goal of detecting threats and responding quickly, we've reduced ourselves to cries for help. I've come to call this metric the operational burden we've inflicted on ourselves. Another title might be: we're doing things, it's crazy out there. Maybe it's fear-driven: scare leadership with a bunch of alert volume. And sometimes we try to make it a bit better: we break it down by true and false positives. I've been proud of myself for doing this. But if I'm honest, I'm not really sure what I was trying to say with this metric either. That we have a lot of false positives? So what's a good true-to-false-positive ratio? Is it the same for every alert type? Would reducing the false positives mean I have less visibility into the threats? Or is

having too many false positives saying I'm missing true positives?

And the first problem I'm running into is that I don't know where to start with metrics. Detection and response has matured as a field, but I'm stuck here making metrics about alert volume, so I need a starting place. To give you a starting place, I thought about what in detection and response we could measure to help us make decisions and see if we're improving, and you can remember these by the acronym SAVER. We want to show that we're Streamlining our operations, improving our efficiency and accuracy through automation, better tooling, or processes. We want to raise Awareness about what we're learning from our threat intel, sharing things like threats and trends that we should be prepared for. We want to measure our Vigilance: how are we preparing for the top threats, can we detect them, and as we learn about new threats and trends, how is that guiding our threat hunts? As we Explore networks, what are we finding? And when our detections fire and our threat hunts turn into incidents, what's our Readiness: how quickly are we able to organize and respond to incidents, how complete are our playbooks? So when you're thinking about your own metrics, specifically in detection and response, think about these different categories that your metric may fall under, and this can help you tie it back to the outcome. What's the goal?

And then, to figure out what category a metric should fall under, ask yourself: what question does this metric answer? So what question were we trying to answer with this metric? Are false positives taking up too much of our time? Do we have enough time to investigate the true positives completely? Another question you can ask is: how do I control this metric? How do I reduce false positives? How do I reduce false positives? But like me, a security engineer: how do I decrease false positives? Tuning. Turning them off. So how's that going for y'all? Yeah, that's right. So if I map this into my SAVER categories, I think this should be a streamline metric, and streamline

metrics usually answer questions about efficiency, accuracy, and automation; it's about time. And I have two big problems with this metric. First, it doesn't tell me where I'm spending most of my time. You might think it does, but it doesn't. And second, the only control I have to make this metric better is what? Tuning, turning things off. That's it, that's all I've got.

So let's make it a little better. Here's a graph of time spent on false positives, and I've completely removed true positives, because for right now I'm okay to say true positives take as much time as we need. But instead of tracking how many false positives there are, I'm tracking how much time we're spending on them. Now, how much time you spend on an alert manually: that could be as simple as measuring the time it's assigned and the time that it's closed as a false positive. Now, if your team is anything like mine, we have this really amazing habit where, when the alerts are coming in, we select them all and assign them to ourselves, whether we're going to work on them right then or not. And why do we do that? What metric is making us do that? Time to assigned, time to acknowledged. When you think about your metrics, think about what human behavior you're encouraging and how your people, how you, are going to hack that metric, because that's what we do: we're going to find a way to hack the metric. And the way that we're hacking the metric of time to acknowledge, or time to assign, is not helping us, it's not making things better. So stop measuring it, at least temporarily, and then this metric suddenly becomes a lot more accurate.

So how do we control this metric? What can we do to improve how much time we're manually spending on our analysis, on our response for false positives? What about automation? As we get more automation tools, the number of events no longer equates to how much time we're spending on false positives. You could have an alert that is firing like crazy, but no one ever has to look at it, because automation's doing all the work. So do you care? I don't. If I don't have to do anything, I don't care. And as you automate, you can carry the time that you were spending on those alerts and put it into your automation, and this lets you do something really cool: you can actually speak to the amount of human hours your automation efforts are saving you. So now maybe you'll actually prioritize automation work.
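A minimal sketch of that time-based view, assuming a hypothetical alert export with assigned/closed timestamps, a disposition, and a flag for alerts that automation closed without a human touching them (all field names and records here are made up):

```python
from datetime import datetime

# Hypothetical alert records: rule name, disposition, whether automation
# closed it, and when a human picked it up / closed it.
alerts = [
    {"rule": "impossible-travel", "disposition": "false_positive",
     "automated": False, "assigned": "2024-03-01T09:00", "closed": "2024-03-01T09:45"},
    {"rule": "impossible-travel", "disposition": "false_positive",
     "automated": True,  "assigned": "2024-03-02T10:00", "closed": "2024-03-02T10:00"},
    {"rule": "new-admin-user", "disposition": "false_positive",
     "automated": False, "assigned": "2024-03-03T14:00", "closed": "2024-03-03T16:00"},
]

def hours(a):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(a["closed"], fmt)
            - datetime.strptime(a["assigned"], fmt)).total_seconds() / 3600

# Manual hours spent per rule on false positives: the thing we can control.
# Auto-resolved alerts are excluded, since no human time was spent on them.
manual = {}
for a in alerts:
    if a["disposition"] == "false_positive" and not a["automated"]:
        manual[a["rule"]] = manual.get(a["rule"], 0.0) + hours(a)

# Rank rules by manual time burned; the top of this list is what to automate next.
for rule, h in sorted(manual.items(), key=lambda kv: -kv[1]):
    print(f"{rule}: {h:.2f} manual hours on false positives")
```

Note the design choice: alert volume disappears entirely; a noisy rule that automation fully handles contributes zero, which matches the "do you care? I don't" point above.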

We often don't end up doing that automation; we don't take the time for it that we should. Now, somebody might look at this and go, your automation sucks, it's taking that long to run. No: you're carrying the time you were spending manually into the automated bucket. So now you're not just incentivized to tune alerts, you're incentivized to find where the most manual time is being spent, so that you can automate it.

Second mistake. The second mistake is using quantities that lack controls, or, more simply said, measuring things you can't change. Mean time to recover is a classic incident response metric; it'll be in your DuckDuckGo search. In this example you can see that recovery was lower in September and October, but then it grew in November and December, and then, man, the team pulled together, we got things in gear, and we got our response time down. Or maybe in December there were holidays and people weren't acknowledging their pages, yeah. And it's funny, I've spent the last year researching metrics for detection and response, and I've learned something: we're really obsessed with speed in incident response. The vast majority of metrics, when I search for detection and response, are about time: time to detect, time to respond, time to contain, time to recover. I'm certainly not going to argue that speed is not important, but using time as our sole measurement across all incident phases completely ignores quality and effectiveness. Yeah, that's right. But my big problem with this metric is that security incidents have a lot of variability, especially the further downstream you get in that response process; a lot of dependencies, a lot of teams come into the picture, and you can't control all those things. So a graph like this doesn't help me make decisions, because I don't know what's controllable here. And what happens when you have a metric that you can't control? You stop caring about it, because you can't affect it.

So instead, break out your response times across all your different phases, and here I've filtered out all of the built-in time that I know I'll need for

quality. You have an idea, from your playbooks, of how long a playbook will take for specific types of incidents; every response playbook you have has some expected built-in time. Sure, as you mature your capabilities that built-in time will come down, but that's not what this focus is for. In this graph we're looking at: what can I control today? Eric Brandwine at AWS gives a really good talk called The Tension Between Absolutes and Ambiguity in Security, and in it he says that when you look at a metric, it should immediately answer: what do you want from me, what do you want me to do? And one of the easiest ways to do that is by making the answer zero if there's nothing to do. So here I filtered out all the time we can't reduce right now. There's nothing for me to do right now, I can't do anything about it, so I've made the answer zero. Now when I look at these metrics, I know exactly what they want from me: go look at the incidents in December, figure out what happened in the remediation phase.
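That make-the-answer-zero idea can be sketched roughly like this; the phase names, time budgets, and incident numbers below are invented for illustration, not taken from the talk's actual playbooks:

```python
# "Make the answer zero": subtract the built-in time you expect each phase
# to take (from your playbooks) and clip at zero, so any non-zero number
# means "go look at this". All values here are hypothetical.
expected_minutes = {"triage": 30, "analysis": 120, "containment": 60, "remediation": 240}

incidents = [
    {"id": "INC-101", "phase_minutes": {"triage": 25, "analysis": 110,
                                        "containment": 55, "remediation": 230}},
    {"id": "INC-102", "phase_minutes": {"triage": 40, "analysis": 100,
                                        "containment": 50, "remediation": 400}},
]

for inc in incidents:
    # Overage per phase: time beyond the playbook's expected built-in time.
    overage = {
        phase: max(0, spent - expected_minutes[phase])
        for phase, spent in inc["phase_minutes"].items()
    }
    # Keep only the non-zero phases; an empty dict means nothing to do.
    actionable = {p: m for p, m in overage.items() if m > 0}
    print(inc["id"], actionable or "nothing to do")
```

With these made-up numbers, the first incident reports nothing to do, while the second points straight at its triage and remediation overruns, which is exactly the "what do you want from me" property described above.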

Mistake number three: thinking proxy metrics are bad. Or, more simply, taking your brain and dreaming up amazing metrics you could build that are insanely expensive to create, when all you really needed was a correlating metric that was good enough. Here's a great example. Eight years ago, my team and I decided that we wanted to know what our MITRE ATT&CK coverage was, all the different techniques across the entire ATT&CK framework. I see some nodding heads, like, yep, oh yeah, uh-huh, I know where this is going. And this was before MITRE ATT&CK coverage was the really cool thing to do. So we figured, all right, we're going to have to write tests across the entire framework, some kind of simulation for each one. Then we got going and we figured, okay, one test per technique, that's not going to tell us much, we're going to need a lot. And then we've got Windows, Mac, and Linux; yeah, we're going to need a lot for all of those. And so, after years of developing tests and investing in tooling, we finally had the data we needed to visualize our ATT&CK detection coverage. Side note: I saw this tweet the other day, and it said we need to do a better job of mocking vendors that claim 100% MITRE ATT&CK coverage, for many reasons, but most importantly, I've seen the carnage of quote-unquote 100% coverage. It's fatigue, alert fatigue like you wouldn't believe. So anyway, we spent years putting this data together, and it is cool, don't get me wrong, but at the end of the day all we really wanted to know was: what detections should I be building next?

So do this instead. Rather than trying to measure your detection coverage across the entire ATT&CK matrix, start by finding the top five threats that you care about the most. Don't overthink it. Look at your external threat intel: what kind of industry are you in, what kind of environment do you have? Then, what are your incident trends, what kinds of incidents do you have that just keep recurring? And then link those back to your organization's security risks. What would be a really, really bad day for your company? If data was exfiltrated, what would make the Chief Privacy Officer cry the most? That's a good metric, by the way. Once you have those top five, prioritize your detection development from there. We like to workshop these as a team, where everyone takes one of those top five threats, or you break into groups per threat, and then use ATT&CK to derive the different techniques and sub-techniques from there. And then, as you write tests and detections, you will over time slowly end up building yourself a prioritized MITRE ATT&CK coverage map, but without all the alert fatigue and a super costly metric. And then you might also become best friends with your Chief Privacy Officer. It can happen.
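The top-five workshop above can be sketched as a small prioritization exercise. The threat names and threat-to-technique mappings below are illustrative assumptions (the technique IDs are real ATT&CK identifiers, but don't treat these mappings as authoritative):

```python
from collections import Counter

# Hypothetical top-five threats, each mapped (via an ATT&CK workshop) to the
# techniques an attacker would likely use. Mappings are illustrative only.
top_threats = {
    "ransomware":         ["T1486", "T1059", "T1021"],
    "credential_phish":   ["T1566", "T1078", "T1059"],
    "insider_data_theft": ["T1078", "T1567", "T1048"],
    "cloud_key_leak":     ["T1552", "T1078", "T1567"],
    "supply_chain":       ["T1195", "T1059", "T1021"],
}

already_covered = {"T1566"}  # detections we have today (hypothetical)

# Count how many of the top threats each technique appears in, then build
# detections for the highest-leverage uncovered techniques first.
counts = Counter(t for techs in top_threats.values() for t in techs)
backlog = sorted(
    (t for t in counts if t not in already_covered),
    key=lambda t: -counts[t],
)
print(backlog[:3])  # the next detections to build
```

The point is the shape of the exercise, not the numbers: techniques shared by several of your top threats float to the front of the detection backlog, and coverage grows outward from the threats that matter instead of across the whole matrix.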

Mistake number four: not adjusting to the altitude. As someone who has moved back and forth between management and IC, I'm really guilty of this one. Who here has ever tried to explain all the different phases of the MITRE ATT&CK framework to a board of directors? Oh yeah, yes. And I say, sure, let's do it, why not? Detection coverage, I think, is actually one of our better new metrics, but wow, we have done a bad job at explaining it at the leadership level. I have seen one of those MITRE ATT&CK heat maps, probably generated by a certain vendor, just slapped into a board of directors deck, as if it meant anything to them or told them anything. So we need metrics at every altitude, and the higher the altitude, the less it'll be about technology specifics and the more it'll be about how this impacts the business.

So it's helpful to think about it like a pyramid. For the business, the impact we make is reducing the cost of an incident or a breach, or how difficult and costly we make it to cause an incident or breach. So our metrics at the top of the pyramid, our North Star metrics, those are the ones that tell the business what and how we contribute. So, mean time to detect: how long does it take for us to find out that there's a threat? And then, how long does it take for us to get things contained? Maybe it's mean time to respond, maybe it's just mean time to contained, just that first part, especially because that's maybe all you can control. Under that top layer is our coverage and effectiveness: can we detect the top threats to the business, do we have playbooks for the attacks that are most likely to happen, do we have the visibility that we need? And then under that layer: how well do our tools perform, how much time do we spend trying to figure out what logs we need, and how long does it take to search? Organizing your metrics in a pyramid can help you connect the lower layers, those operational metrics, back up to your North Star metric, and then allow you to speak at the altitude that's appropriate to your audience.

But organizing your metrics in a pyramid can also help you connect your metrics with the rest of the security organization, because it turns out detection and response is not the best strategy all the time. If your metrics are showing that your mean time to respond is trending up because of a repeating type of incident, sometimes the best way to reduce the cost isn't by improving your streamline metrics or your readiness metrics; it's telling other parts of the security org that, hey, maybe we should think about putting a control in place, and maybe you should drop what you're doing right now and do it. And now you can use your metrics to influence the rest of the security org

and tell them what to do, because we're the most important.

Mistake number five: asking why instead of how. My natural inclination is to ask why. Why didn't we detect that malware sooner? Why were we missing the firewall logs again? And as a dad, I have a lot of why questions. Why did we bring the car seat when we only took one taxi ride the entire trip? Why do we need four suitcases? Why didn't we bring the stroller? And why can't Liam walk by himself? But in all of those examples, why is not helping. So instead, I've learned, move straight to the how and start figuring out what actually needs to be done. Often, answering how helps you identify the underlying problem much faster, and with a much more positive perspective, especially from your spouse, I mean, coworker. How can I carry Liam, a car seat, and suitcases through the airport? How can I detect these types of threats sooner? How can we respond faster? When I interviewed with my current VP, she asked me: how do we build a modern detection and response program, how do we get there? That was like one question, and not a simple one to answer. How do we describe where we are today, where we're going, and how we're going to get there? It made me think about maturity models. My first exposure to maturity models was the hunting maturity model, and the hunting maturity model, HMM, was really helpful when I needed to build a threat hunting program, when I just wanted to threat hunt, because it told me how to figure out where you are, and it told me how to get to the next level of maturity, what things need to be true. Maturity models give us as security practitioners a common language, so when I go and interview somewhere and we're talking about threat hunting, I can be like, where do you think you all are on the hunting maturity model? And they usually know what I'm talking about, and they go, oh, we're here, and that's because of this. Maturity models

let us ask: where are we now, what tools and processes do we have, what's the current situation, what are the challenges? Where are we going, where do we want to be by next year? And then how: how will we get there, what are our objectives, how are we going to achieve them? So I created a maturity model. It's called the Threat Detection and Response Maturity Model, TDR, and it builds off of HMM and expands it across the different areas of detection and response. Now, there's a lot to it, so at the end I'll give you a link to the full maturity model that you can use. The first pillar, when I thought about it, was observability: having the tools and logs that we need to get visibility into our entities and user activities, and enriching it so we can contextualize that data and search it quickly. Then proactive threat detection, where we focus on collecting our threat intel and prioritizing the detections and the hunts we perform. And then finally rapid response, where we prepare playbooks and automations so we can move from triage to analysis, and then respond with the forensic capabilities that we need. And then we can use these 14 capabilities to describe and measure where we are today, where we want to go next, and how we get there.

For each of the 14 capabilities in the framework, you'll score yourself in four different areas: process, tools, documentation, and testing, and you'll rate them from Initial all the way up to Leading. Within the maturity model there's very specific guidance for each of the capabilities, but there's some general guidance provided here as well. So, for example, if we were to rate our detection engineering capabilities, we can think about the processes we have: do I have a process that tells me how to create a detection that looks for first-time occurrences, do I have a process that defines the most optimal way for me to find my thresholds? Then we rate our tools: are detections managed from a central location? Then documentation, or, what's been the case for most of my career, the lack of it. And then finally testing: how do I validate that our logic to determine first-time occurrences is actually working? As you go through each of the capabilities and rate them, I like to rate them individually first, to sit down and really mull it over, and then we'll do it as a team and break out into groups, because that'll help you either rethink the answers you gave, see if they're in line, or consider things you maybe hadn't. Doing it as a group exercise is a lot of fun. And make sure you write down why you rated things certain ways; that way, when you come back to it, you're like

oh, that's right, that's why that was true. And then you can visualize it. Here's an example of how you can take your ratings and show, at a high level, where you are across those three pillars (and you could do specific metrics for each of those pillars too) and where you plan to be by some target date, say the end of the year, based on the projects and initiatives you've planned. I like to use this metric at the leadership level because it tells a story of where we are today and where we're going, and there's lots of underlying detail there if you need it, and you can also simplify it.
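A rough sketch of how those ratings could roll up into a pillar-level view, assuming the Initial-to-Leading scale maps to numbers 1 through 5; the capability names and scores below are invented placeholders, not the actual TDR guidance:

```python
# Roll hypothetical TDR ratings up to a per-pillar score. Each capability is
# rated 1 (Initial) to 5 (Leading) in four areas: process, tools,
# documentation, testing. All numbers here are made up for illustration.
ratings = {
    "observability": {
        "log_collection": {"process": 3, "tools": 4, "documentation": 3, "testing": 2},
        "enrichment":     {"process": 2, "tools": 3, "documentation": 2, "testing": 1},
    },
    "proactive_threat_detection": {
        "detection_engineering": {"process": 3, "tools": 3, "documentation": 1, "testing": 2},
    },
    "rapid_response": {
        "playbooks":  {"process": 4, "tools": 3, "documentation": 3, "testing": 2},
        "automation": {"process": 2, "tools": 2, "documentation": 2, "testing": 2},
    },
}

def pillar_score(capabilities):
    # Average every area rating across all capabilities in the pillar.
    scores = [s for areas in capabilities.values() for s in areas.values()]
    return round(sum(scores) / len(scores), 2)

summary = {pillar: pillar_score(caps) for pillar, caps in ratings.items()}
print(summary)
```

Plotting the current summary next to the same roll-up computed from your target ratings gives exactly the today-versus-end-of-year picture described above.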

But I also like using it because it shows whether the work you're doing is having an impact on your maturity. So if you planned a whole bunch of projects and you look at this and nothing's moving, maybe rethink the projects you're planning and the work you're doing. But also, as an engineer, there's nothing better than being like: see how that's moving, that rapid response? That's work I'm working on, that's me, I'm doing that.

So once you have a way of saying, hey, here's how we're going to mature, then you need some way to say: are we getting better, though? Our capabilities are improving, we're maturing, but are we getting better? And that's where the SAVER framework can come back in again. For each metric you create, you'll put it into this structure here. Warning: when you do this with your current metrics, you may end up deleting all your metrics, so do it slowly. Avoid mistake number one, losing sight of the goal, and ask: what question does this metric answer, what's the outcome you're looking to achieve? Use the SAVER categories to help you tie it back to your outcome and your North Star metrics. Then avoid mistake number two, using quantities that lack controls. Make sure that you have metrics you can actually control, and don't forget to make it zero: filter out what you can't control today, so that when you

look at a metric, you know exactly when it's telling you to go do something. And then, if you do have control of a metric, what risky behavior could that measurement reward? I was talking to a friend of mine who runs one of those really big SOCs, the ones with the big room and the monitors all over the wall and the pew-pew maps. I haven't been in one of those in a little bit, but they still have the pew-pew maps, and that makes me very happy. We were talking about metrics, and he was telling me about the time-to-analyze metric, which was a really big pain point in his SOC. He told me analysis was taking a really long time, and they couldn't understand it, so they brought it up to the team. They're like, hey guys, the time-to-analyze metric is really going up, we've got to bring it down. So guess what happened: the metric went down, they brought it down. And guess what else went down, guess what other metric also went down? Quality went down. And then guess what went up? True positives missed. So when you introduce a new metric, think: hmm, what potentially risky behavior may I be rewarding? I have a bunch of smart engineers on this team, including myself; how will I try to, you know, make this metric go the way I want it to? Think about those things. We're smart people, you work with smart people, they're going to think of ways of improving the metric; think of all the ways. It might not be a bad metric, right, but you might want to create some companion metrics to go alongside it, because, remember, you will become what you measure.

And then there's metric expiration: when will this metric no longer be needed? When our only lever was alert tuning, sure, maybe it made sense to track alert volume, but now that a lot of the ways we're reducing the time we spend are through automation, maybe it's time we expire

alert count metrics, or at least remove them from the leadership decks. Then you have your data requirements: how much data will this metric require? And how much new effort will I need to improve this metric? You can make metrics all day long; you will be asked to make metrics. Can we make a metric for this? Your first question should be: do you want me to improve that metric? Because when you make a new metric, you don't automatically get people to help you improve it. Is that how that works? And then, how much time will it take to collect that metric? Don't come away from this talk telling people that Allyn said to spend all your time making metrics; you won't be popular, and I don't want that coming back to me. Remember my mistake number three, when I thought testing across 100% of the MITRE ATT&CK framework would give us the answer? It was really cool, but think about the amount of data needed to make that a great metric, and how long it took us to get there. You might not need to do all that; think about the simpler metric that'll give you value today.

Anytime I talk about metrics, I always get asked: but how do I change the bad metrics I'm already presenting today, what approach would I recommend? And I get it, change is hard. Leadership does not like surprises, and they often have expectations that you'll be updating last month's slide deck. But I have one tip for you that's worked really well for me. Here I have convinced my friend Dexter (he's still my friend) to get into near-freezing water. Dexter's first reaction was shock: his heart rate spiked when his body hit the water, he gasped (Liam thought this was great), and he had to work to not hyperventilate. But then, suddenly, clarity. And it's the same when you change your metrics. It's not going to be fun immediately; people are going to go into a state of shock, especially if those bad metrics have been around for a

long time they've gotten used to them but my tip is embrace it push through the change and soon you'll have Clarity too so let's bring all these metrics together and upfront and Center is our maturity using the TDR maturity model and then we can use the sa categories to tell the story of our program we're streamlining our operations by looking at what's taking the most time and automating it we're looking at our threat Intel and incident Trends and we're raising awareness about these top five threats we've focused our time this quarter to build detections for these track these threats here's where we're tracking we've been exploring gaps in security controls relevant to those top five threats we found three new gaps and

from a Readiness perspective we have one type of reoccurring incident really long recovery time so we're working with our Enterprise security team to implement new controls that will prevent those from happening you'll notice too that I've only chosen one metric from each category that's enough for now I think when we make metrics we think more is better and it is definitely not better because you can't improve all those at one time so choose the thing under each category that you would say hey you know what if we were improving this right now this would have the most impact and keep that there until after a while you go you know what I think it's time we think about

improving something else we're doing pretty good here so now instead of making wild guesses about whether you're improving and if the tools you're buying are making any difference you have the TDR maturity model to measure your capabilities instead of using volume counts fear tactics and tired emojis you can use saver to get to the core of your metric ask better questions and map that to something you can control and instead of focusing on 100% miter attack coverage much to your vendor sugrin you've focused on the threats that matter the most found your top five and are working on detection coverage that has real impact so hopefully this is your wake call to take a cold plunge and

rethink your detection and response metrics. Thank you very much.

And real quick: this is my Linktree. It's got my contact info, a copy of the slide deck, and the complete TDR maturity model. I also write a severely infrequent newsletter called meoward; it has an adorable cat that people love, and the security info is decent. I have cat stickers if you come see me at the end. We have time for some questions, so if you have one, just shoot your hand up and we'll get to it.

[Audience question about sentiment-of-analysts metrics]

Ooh, sentiment. I like sentiment metrics, actually. You know what's funny, my team and I were just trying to choose

what tool we wanted to use to solve a particular problem. We had some metrics about two tools, and they were really close; everything kind of looked the same, and both were performing really well. So we said, let's do a sentiment check, or what you might even call a joy check: how much do we love or hate this thing? We did the sentiment check, and they were not equal. It was like: this tool is great, it's doing all the right things; we hate this tool, it's terrible, we hate going into it, we hate interacting with it, we hate configuring it. Yes, it does all the things, but we hate it. I think sentiment is actually a really important metric, and you can use it many different ways: sentiment on a playbook, sentiment on an alert. How do we feel about this alert? We hate it? All right, maybe that's the most hated alert and we should fix it.

Not purely sentiment, but slightly related: one of the best metrics I ever introduced, a response metric, is time spent arguing. It's awesome. We have somebody who is basically the note taker for our incidents, and they keep timestamps and all that, and we told them: when an argument starts, start the clock. How long did we spend arguing about it? I'll tell you what, one way to improve your response time is putting that metric right up there, because it's bold to put "time spent arguing during an incident" on a leadership deck. The team that's the problem will slowly stop being the problem, because we're going to figure out which team it is; somebody's going to ask, "can we break that down by team?" and I'll go, oh, we sure can. Any other questions?
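As a rough illustration, the bookkeeping the note taker does could be totaled per team something like this (a minimal sketch with made-up timestamps, team names, and event format; not code from the talk):

```python
from datetime import datetime

# Hypothetical sketch: the incident note taker logs "start"/"end" events
# for each argument, tagged with the team involved. We total the minutes
# per team so the metric can be broken down on a slide.

def argument_minutes_by_team(events):
    """events: list of (iso_timestamp, team, 'start'|'end'), in time order."""
    open_starts = {}   # team -> datetime of the argument currently running
    totals = {}        # team -> accumulated minutes
    for ts, team, kind in events:
        t = datetime.fromisoformat(ts)
        if kind == "start":
            open_starts[team] = t
        elif kind == "end" and team in open_starts:
            minutes = (t - open_starts.pop(team)).total_seconds() / 60
            totals[team] = totals.get(team, 0.0) + minutes
    return totals

notes = [
    ("2024-10-01T10:00:00", "network", "start"),
    ("2024-10-01T10:12:00", "network", "end"),
    ("2024-10-01T10:30:00", "platform", "start"),
    ("2024-10-01T10:33:00", "platform", "end"),
    ("2024-10-01T11:05:00", "network", "start"),
    ("2024-10-01T11:20:00", "network", "end"),
]
print(argument_minutes_by_team(notes))  # {'network': 27.0, 'platform': 3.0}
```

The whole-incident total falls out of the same log; the per-team breakdown is just what gets asked for once the metric lands on a leadership deck.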

Yes, yeah, I am a big fan of no surprises, even though I just showed you a photo of a cold plunge. When we're working on a new metric, we get it through the whole team first: we all look at it, we all agree on it. Then we take it to our sister teams and say, hey, we're working on this new metric, this is what we're thinking, this is what our new streamlined metric is going to be, and we're going to focus on it for the quarter. Then I show my boss, and he shows his boss, so people have seen it before. The reality is, when somebody has already seen it and asked their questions, using that metric is a lot easier. So I like to shop it around and get the feedback in, and that way, when it shows up on the deck and somebody hasn't seen it, a lot of other people already have, and they're like, no, no, it's great, don't worry about it.

[Audience question about financial impact]

Ah, financial impact, I love this. All right, so this is, like, V2 of the talk that I'm working on. I think that for every incident we have, we can actually calculate what the cost could have been and what the cost was. There's a decent amount of data out there that tells us what incidents different companies have experienced and what those cost. The numbers are all over the place, but they give you an idea, and if you take a lot of them, you can say, all right, this is based on some data. Then you can look at the incident you had and think about: at what point did we stop it? How much time did it take us to stop it? What's our budget cost per X? Then factor that in and say: here's how much it costs to have all these tools, this is how much it costs for this amount of time, this is how much it costs to have these engineers, and here's where we stopped it. So maybe there's no actual business cost to that incident, or maybe the business cost is much smaller, or maybe the business cost is really bad, and now you have to rethink where your investments go. But I do think you can actually say: here are the incidents we had, here's how much it costs the business for us to stop them and keep that cost where it is, and here's the difference. Every time I run those numbers, by the way, all you have to do is have, like, one incident that you stopped, and your budget pays for itself. It's a reality
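The back-of-the-envelope calculation described here could be sketched as follows (all figures, parameter names, and the function itself are hypothetical illustrations, not numbers from the talk):

```python
# Hypothetical sketch of the incident cost math: compare what an incident
# could have cost (from industry benchmark data) with what stopping it
# actually cost. Plug in your own loaded rates and benchmarks.

def incident_roi(potential_cost, actual_business_cost,
                 response_hours, hourly_engineer_cost, tooling_cost_share):
    """Estimate the net value of having stopped one incident."""
    response_cost = response_hours * hourly_engineer_cost + tooling_cost_share
    avoided = potential_cost - actual_business_cost
    return {
        "response_cost": response_cost,
        "avoided_cost": avoided,
        "net_value": avoided - response_cost,
    }

# One incident: benchmarks suggest ~$500k if it had run its course; we
# stopped it at ~$20k of actual business impact, spending 40 engineer-hours
# plus an allocated slice of tooling spend.
summary = incident_roi(
    potential_cost=500_000,
    actual_business_cost=20_000,
    response_hours=40,
    hourly_engineer_cost=150,
    tooling_cost_share=30_000,
)
print(summary)  # {'response_cost': 36000, 'avoided_cost': 480000, 'net_value': 444000}
```

Running this for even a couple of incidents that went well shows the point made above: the avoided cost from one well-handled incident can exceed the program's spend.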

Like, when you actually do the exercise, you'll find out: oh, actually, I don't need to do this for all of my incidents. I need to do it for, like, three of my incidents that went well, and I've paid for it all. And then you go ask for more money, and you start adding more incidents to the analysis. Yes? "Don't give away all the secrets, though." All right, there are no CISOs or budget people in here, right? There are? Uh oh. Any other questions?

Yeah, so for the chief privacy officers: is it better to measure that in volume or time spent crying? I actually think that chief privacy officers cry a lot, so you probably don't want to go by volume. What I think's helpful is maybe the, uh, salt content of the tears. Yes, well, thanks so much. I'll hang around in the back for a little bit and chat, but thanks so much for attending. [Applause]