
Hi, I'm Chris from the Paranoids project team. If you like pretty pictures and data analysis, then you're in for a treat.
If you like AI-generated art, then you're in for the same pretty pictures of data analysis plots too; I brought my fans with me. So, over the course of my career I've observed that the status quo is often based on happy thoughts and assumptions. I like to mix things up a bit with some data analysis and dumb questions, and the dumb question today is: what's risk? I put this deck together for myself to compose my thoughts, but I'm more interested in your thoughts, so I'll be asking for them via show of hands and Q&A, and I'd like you all to take part. So, pop quiz to warm up; shout if you know the answer: CVE-2021-44228 is better known as...?
Log4Shell, yay! Point to the blue team. Any red team here? All right. Setting context: about 80% of the software in our applications and products is open source, which is in line with industry trends. With that open source come vulnerabilities, CVEs and zero days, into our DevOps pipeline, which we'd like to keep out of our products so our customers are not impacted and attackers can't exploit them. We have several tools in our DevOps pipeline that detect these vulnerabilities, but remediation at scale is a hard problem. So we're going to do some data analysis and see if we can make life easier for our users by telling them which vulnerabilities they need to fix first.
A lot of what we talk about here applies to infrastructure and third-party products, but our focus is DevOps. We'll also take a tour of the vulnerability landscape to better understand the building blocks we have to work with as part of our risk remediation. So here's the generic DevOps pipeline, based on the Cloud Security Alliance's. I added some additions at the bottom: a source code package repository, e.g. for storing and scanning npm packages; an intake, i.e. our new-vulnerability program that triages and makes decisions about new vulnerabilities; and our bug bounty program. So here we have labeled seven tools and services that detect vulnerabilities, or CVEs, in our pipeline. Pop quiz time: according to the DevOps Handbook there are
three ways. What's the first way?
The first way is flow, or systems thinking. So what's our system? Well, as an organization, the value we deliver to our customers is delivered via software, which flows through our pipeline, which is developed by developers and, more recently, GPTs. And I, as a security person, exist to enable the flow of that software through the pipeline, balanced with security risk. So I want to know risk and remediation per asset and for the organization. And as a developer, I don't care about your security tool or team; I care about delivering software of high assurance, quickly. And as a developer or leader, I want a unified, prioritized, personalized, achievable view of what to fix first.

I was at the OWASP AppSec conference here earlier in the year, and there was a keynote talk about winning the war in cyber. The speaker asked what it is that people wish for, and very existential topics were thrown out, and one person said, "Well, I wish developers would fix the security issues," and there was a murmur of support. And I thought to myself: what is it I'm doing to help developers fix the security issues?

So now we're going to take a brisk walk through the vulnerability landscape to understand the building blocks we have to work with. We have CVEs, which are a list of records for publicly known cybersecurity vulnerabilities. They are enriched by the National Vulnerability Database (NVD), which uses the CVSS (Common Vulnerability Scoring System) standard to add scores. That CVSS data is used by EPSS (Exploit Prediction Scoring System) to determine the probability of exploit for a given CVE; it's from the same group that maintains the CVSS standard. Linked to the NVD is CISA KEV, Known Exploited Vulnerabilities, and bottom left we have CISA SSVC, Stakeholder-Specific Vulnerability Categorization. The more observant of you will have noticed that SSVC is CVSS spelled backwards, which is interesting if you're proposing an alternative to a standard. So what are these CVEs? Well, we can see there's a lot of them, and the rate is increasing, trending towards 30k per year.
Looking in more detail, we see that the base score for our friend Log4Shell, the gift that keeps on giving, is 10. We can see the parameters that make up that base score, and bottom right we can see the permutations, or options, for those parameters. At the top we can see the link to CISA KEV. OK, pop quiz: we can see that Log4Shell has a score of 10, and we know it's widely exploited. So the question is: is the CVSS score a good predictor of exploitability? In other words, if I gave you a random selection of scores and asked you to tell me which ones are exploited or not, would you be able to tell me better
than guessing? Show of hands if you think yes. Show of hands if you think no. Smart crew, yeah. So the CVSS score is not a good predictor of exploitability, so don't use it alone to prioritize. So what do you look at? Different academic studies, industry analyst studies and vendor studies all say the same thing. Moreover, Tenable goes on to say that if you do use it to prioritize, then you'll end up fixing the wrong things and not fixing the things that you should be fixing.

OK, looking at CISA KEV. Linked to the NVD, CISA maintains a source of vulnerabilities that have been exploited in the wild, known as KEV. And why do they do that? Well, what they're saying is that many vulnerabilities classified as critical are highly complex and have never been seen exploited in the wild, and they give a figure of less than four percent. That's in stark contrast to the previous guidance, down below, from 2019, that says if the CVSS is greater than whatever, fix it by whenever. So the guidance is: if it's in the catalog, fix it. Here's what it looks like; here's our friend Log4Shell, and at the top we can see a related entry, which is for the vulnerability in the fix associated with Log4Shell, and interestingly it was only entered in the catalog this month. OK, looking at EPSS. From the same group that brings us the
CVSS standard, EPSS estimates the likelihood, or probability, that a software vulnerability will be exploited in the wild. The goal is to assist network defenders in better prioritizing vulnerability remediation efforts, in conjunction with an existing CVSS score. So CVSS is about the severity of a vulnerability; EPSS is more related to the threat aspect. You can see what it looks like in terms of the score, and you can download it per CVE or download the snapshot for a day. It's temporal, as in it changes per day, unlike the CVSS score which, as we saw for Log4Shell, is static. It's an ML model, and it takes ground truth from various intrusion detection systems. The definition is: probability of observing
exploitation activity in the next 30 days. The user guide shows how to use it: you can use it in conjunction with CVSS, as in, if you're using CVSS then you can use the EPSS score as well, and you can start from the top scores, as in high EPSS score and high CVSS score, and work your way downwards. What's not clear from the guide, and what wasn't clear to me originally until I did the data analysis and submitted it to the EPSS special interest group, is that the EPSS score is only useful for a small fraction, or percentage, of CVEs, and we'll talk more about that. So how to use it: if it's got a high
EPSS score, definitely be worried about it. If it's got a low EPSS score, you don't know, and most CVEs will have that low score, so it's not relevant there. As a second pass you could say, well, if a CVE in my DevOps pipeline shows up in CISA KEV, I probably want to prioritize that too.

Looking at SSVC: the goal of SSVC is to assist in prioritizing the remediation of a vulnerability based on the impact exploitation would have on a particular organization. And CISA, the same group that publishes CISA KEV, encourages every organization to use a vulnerability management framework that considers a vulnerability's exploitation status. Pop quiz survey: hands up, whose vulnerability management framework
considers a vulnerability's exploitation status? Show of hands: yes, I'd say two percent of the audience. OK, good to know. We'll talk more about decision trees in detail later. So that's a very quick tour through the vulnerability landscape; you now know more about the vulnerability landscape than many of your peers.

Time for some tasty treats that I promised at the beginning, jumping into some data analysis. I like to understand things by back-of-a-napkin orders of magnitude, so I generated this Venn diagram on the right. If we look at the population of all CVEs, there are approximately 200k, and approximately half of them have known exploits available, e.g. entries
in Metasploit, Exploit-DB, etc. And of those, approximately 10% are known actively exploited. The problem is there isn't a defined list of those CVEs that are known actively exploited; the percentages work out at approximately 5% of total CVEs known actively exploited. However, we do have a well-defined list in CISA KEV, which is about 10% of that again; there are approximately one thousand entries in CISA KEV. Drilling down into CISA KEV, we see that Google Project Zero, which is about 0-day exploits, is largely contained within CISA KEV. So while "known exploit available" is a good indicator of risk, as in better than a CVSS score, knowing that the CVE is being actively exploited is a whole lot
better.

Some other graphs, then, based on those populations. We looked at all CVEs, known exploit available, and the entries in CISA KEV: three different populations. In general we can see the number of CVEs per year increasing, and the counts of known-exploit-available and KEV follow along; no real surprise there. If we look at CVEs by product, we see it's mainly operating systems and browsers. One note, though: if you're looking at the CVEs in your DevOps pipeline and you're comparing against CISA KEV, you might see something like, for example, Oracle WebLogic Server associated with a CVE and think to yourself, well, we're not using that, and think happy thoughts and say, well, we're not affected. But you could be affected if
you're using the underlying open source dependency that has that vulnerability. So it's important to check, from a DevOps point of view, what piece of software or open source dependency is associated with that CVE; don't dismiss it immediately.

So, looking across those three populations again at the distribution of EPSS scores: EPSS is a machine learning model, and ML is opaque, so it's important to understand how it works. We don't have the data associated with the model, but we do have the outputs. Looking across the three populations, we can see that approximately six to seven percent of all CVEs have a high, or useful, EPSS score; across known exploit available it's slightly higher; and across KEV it's about 50:50.
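That coverage figure is easy to reproduce as a back-of-the-napkin check. Here's a minimal sketch, assuming a toy EPSS snapshot and an illustrative 0.1 cut-off for a "high/useful" score (both the scores and the threshold are made up for illustration):

```python
# Toy EPSS snapshot: CVE id -> probability of exploitation in the next 30 days.
epss_snapshot = {
    "CVE-A": 0.00042,
    "CVE-B": 0.00091,
    "CVE-C": 0.12,
    "CVE-D": 0.00055,
    "CVE-E": 0.97,
}

THRESHOLD = 0.1  # assumed cut-off for a "high/useful" EPSS score

# Which CVEs clear the threshold, and what fraction of the population is that?
high = [cve for cve, p in epss_snapshot.items() if p >= THRESHOLD]
coverage = len(high) / len(epss_snapshot)
print(high, f"{coverage:.0%}")  # most CVEs sit below the threshold
```

On a real daily snapshot you'd run the same comprehension over the downloaded CSV; the point is that the "useful" slice is a small minority of the population.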
Looking in more detail: if we wanted to benchmark EPSS, it's all about the probability of exploitation in the next 30 days, so we need a data set that we know has been exploited, and we can use CISA KEV here. So here are histograms of the counts across the different axes. I created this plot and then posted it to the EPSS special interest group, and there was some surprise and immediate questions: OK, we expected the top band for the known exploited vulnerabilities; we'd expect that EPSS detects them, that's its purpose. But even on the right, where I took all the CVEs from the alerts they typically issue, about 10 or
15 per year, which are the most known exploited vulnerabilities, even for that population there's a small tail at the bottom where the EPSS score is low. The explanation gets to the heart of how EPSS works and what its sweet spot is: KEV is probably a mix of vulnerabilities that EPSS has good visibility into, like widespread exploitation of network-facing vulnerabilities that can be detected by the intrusion detection systems EPSS uses, and others that it doesn't. So an EPSS score near zero should not be taken as a low probability of exploitation. When I looked at the beginning, naively thinking happy thoughts, I thought, wow, all my CVEs have suddenly been
de-prioritized, but that's not the case.

So now, moving into our DevOps data analysis. One of the questions was: do the tools, the seven tools and services, find the same CVEs? The general short answer is no; most CVEs are found by one tool only. That's relevant because when we look at the distribution of counts of CVEs across the CVE IDs, we see this Pareto-type distribution, or what's called a hockey stick. Normally, for products used in a business context, there's the 80/20 rule, as in you can get 80% of the way there with 20% of the effort, and it's the same idea here: if we were to address just the CVE IDs with the high counts, we would
get 80% of the way there quickly.
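The 80/20 idea above can be sketched as a cumulative-count calculation; the counts below are hypothetical, not our pipeline data:

```python
# Hypothetical counts of findings per CVE ID across the pipeline.
counts = {"CVE-1": 500, "CVE-2": 250, "CVE-3": 120,
          "CVE-4": 80, "CVE-5": 30, "CVE-6": 20}

total = sum(counts.values())
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

# Walk down the ranked list until the running total covers 80% of findings.
running = 0
head = []
for cve, n in ranked:
    running += n
    head.append(cve)
    if running / total >= 0.8:
        break

print(head)  # the few CVE IDs that account for 80% of findings
```

With a hockey-stick distribution, `head` stays short: fixing a handful of CVE IDs clears most of the findings.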
From further data analysis, we see that we're in line with industry averages in terms of the proportion of our CVSS scores that are high, which is interesting, and the most frequent score is 7.5. We see Paretos everywhere, whether it's one specific library and its versions accounting for most CVEs, or the distributions we talked about, and we see correlations. We see two and a half times more language-B libraries than language-A, but language A has the most CVEs by far, and it turns out that was due to that specific language's dependencies being stale. We obviously ran the correlation and we can see that too: high correlation between count of CVEs
and stale libraries. So the root cause, and the root fix, is to keep libraries updated. OK, so when we're understanding our DevOps data, it's important to understand these root causes. Pop quiz survey: how many people have this level of understanding of their DevOps pipelines and the associated data related to CVEs? Show of hands: zero. So how do you get it? Most vendors do a crap job in this area; I know that, and I tell vendors, because I'm on several advisory boards. But here's how you do it: use data profiling. Just export the files from your tools, run the profiling, and it generates all that data that we saw at the beginning. In fact, I gave an internal
technical conference presentation which covered the tools and data, and the data analysis involved exporting four files and profiling them; the hard effort was copy-pasting into a deck and making it look pretty. If somebody at a conference had told me about this, I'd buy them a pint. This is very powerful, and it's a significant gap in the vendor market.

OK, risk remediation. We asked the dumb question earlier: what's risk? So now we get to answer it. According to this, risk is per asset and depends on the impact of a vulnerability being exploited by a threat. OK, that's good to know, but if I'm trying to figure out the risk associated with a CVE, that doesn't help me much; I can't relate my
data sources to that, so we need to go a step further. But before we do: as DevOps folks, we're interested in fixing things, so remediation is the other part of the picture; the full picture is risk remediation. Looking at threat, and doing a first-principles analysis, breaking it down into sub-components: on the right is what I wish had existed when I started this effort but didn't. It looks at the various data sources that we have and puts them into a taxonomy for threat. To explain, looking at the likelihood of exploit: the biggest predictor of exploitation for a CVE would be that it was previously exploited in the organization, e.g. it
showed up in our incident response or our bug bounty; that's a good indicator of it being exploited again in the future. Or it was known actively exploited in the wild, and we can use an entry in CISA KEV for that; various vendor DBs provide additional data on that. And then there's the EPSS score, which is a probability of exploitation; we saw earlier that it covers about seven percent of CVEs. Or we can use "known exploit available", and so on. So this is a prioritized list of data sources for threat. If you look at impact, breaking it down, impact is very much business-context dependent, and we can determine impact based on asset value or various other parameters,
so it's very much organization specific. Looking at vulnerability, it depends on runtime context. As an example, Spring Shell depended on certain JDK versions, and if you weren't using them, you weren't vulnerable. Reachability analysis: sometimes you may have the vulnerable method in a dependency, but it's not being called; that's where something like reachability analysis comes in, and so on and so forth. These are all the parameters that contribute to the vulnerability. For remediation, we want to know: is there a fix available, is it a quick fix, is it likely to break something? Typically we upgrade by package, and back when I started developing software in embedded, we were patching at the byte
level; now it's at the package level, much easier. But remediation depends on your development context. OK, so that's good: we have a risk taxonomy to inform, for a CVE, what the risk associated with it is. But ultimately what we want to decide is: do I need to fix it now, or can we fix it later? This is where decision trees come in. So why decision trees? They focus on what matters, risk and its constituent components, and on what action or decision needs to be taken. They're very understandable, as in, you can see on the right: we look at exploitation and we decide, is there active exploitation, is there a proof of concept, or is there no exploitation, and
we do our sort of Choose Your Own Adventure and walk down the tree. They're modular: SSVC is very much nation-state and critical-infrastructure oriented, not suited to enterprise organizations, so we can swap out that node, put in our own, and come up with a high/medium/low. So many benefits to decision trees. Also, they're debuggable: we were back-testing against our new-vulnerability and bug bounty data, and it's very easy to look at the decisions that were made in the tree versus, let's say, what had previously been ranked in our other programs. So: very debuggable and easy to understand what happened. In one pager, this is SSVC. Don't worry about all the details, but the key
point is that for each node there's an explanation of what to do. For example, "active" means shared, observable, reliable evidence that the exploit is being used in the wild by real attackers, etc. And then you have decisions, which are either "act", or, let's say, "track": keep an eye on it but don't do anything. I restructured the decision tree into something I wanted. Had I wanted to change it completely, I would have put the risk on top; the most important stuff should be on top. But I kept fidelity with the existing SSVC tree. What I did do is split out the highest-risk items; that's the two red items
at the bottom. In SSVC we would have four pink ones, so I added an extra level, because I'm most interested in the highest-risk stuff, and I changed mission and well-being to two levels. But otherwise, this is our decision tree, and so we get to see, for a given CVE, depending on whether there's an exploit available or not, which decision we would land on: do we act ASAP, act, etc.

OK, so that's cool: we have a risk taxonomy, and we have a decision tree that, given the parameters, tells us do we fix it now or later. Now we connect the two things together. For "exploitation: active" we can look at: is there a bug bounty entry, open or in incident response; is it
in CISA KEV; is the EPSS score above the threshold; etc. We can see that here on our decision node, and we can see the values we're referencing from the taxonomy. Similarly with "automatable": if no user interaction is required for the attack, and it's low complexity, and no or low privileges are required, and it's a network attack vector, then it's automatable. We use a vendor DB, which gives us more refined info, but it's similar. Technical impact: if there's high confidentiality or integrity impact, then it's total; else it's partial, e.g. denial of service. Mission and well-being, as I said, is very much a blank slate for your organization, but you come up with a high, medium and
low.

OK, so that's the risk taxonomy from first principles: given a CVE, what are the measurements, parameters and data we can use to determine the risk for that CVE? The decision tree is about: OK, we know the risk, but do we need to fix it now or later? When we get those decisions, we connect the two together, and now we're ready to test it. Before we do, here's what it looks like together; if you can read that from down the back, fair play to you. This is a zoomed-out view of our risk taxonomy, our prioritization decision tree, and our node inputs. It's available as open source; I like PlantUML, so the diagrams are available as
source code, so feel free to contribute and make it better. All right, OK: "so we just built a risk-based prioritization decision tree," said no one ever. "Time to test our talents in the real world, do you reckon?" said Fred Weasley. So we need some test data; we've talked about some of it before. This is our set of test data, whether we look at all CVEs, known exploit available, or known exploited vulnerabilities; we're going to look at CISA KEV. Are there any Harry Potter fans in the audience? OK, so shout out the Harry Potter spell, the incantation. And just to avoid prompt engineering, or prompt attacks, make sure you don't turn this into
frogs, right? That wouldn't end well. All right, OK, shout out the prompt... right, boom. If we look at our decision tree applied to the CISA KEV top most-exploited known vulnerabilities: on the left is what would have been the old way of doing it, following the user guide, i.e. prioritizing by bands; on the right, our decisions are color-coded. We can see that even within those most known exploited vulnerabilities we're getting targeting here. We're targeting, as in, the red dots are spread out across the CVSS score, which we know is not a good indicator or predictor of exploitability, and across the EPSS score, which we know only gives coverage for a small percentage of CVEs; yet we're getting targeting for all the
CVEs in that population. We're seeing the reds, the oranges and the pinks, and they're not arranged diagonally, but we're targeting them all. Right, another spell.
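The tree being walked in these plots can be sketched as a small function. This is a minimal sketch in the spirit of the SSVC-style nodes named in this talk (exploitation, automatable, technical impact, mission and well-being); the exact branch conditions and decision names are illustrative assumptions, not our production tree:

```python
def decide(exploitation: str, automatable: bool, technical_impact: str,
           mission_wellbeing: str) -> str:
    """Walk a small SSVC-style tree to a remediation decision.

    exploitation:      "active" | "poc" | "none"
    technical_impact:  "total" | "partial"
    mission_wellbeing: "high" | "medium" | "low"
    """
    if exploitation == "active":
        # Highest-risk split: active exploitation plus wormable or total impact.
        if mission_wellbeing == "high" and (automatable or technical_impact == "total"):
            return "Act ASAP"
        return "Act"
    if exploitation == "poc":
        return "Attend" if mission_wellbeing != "low" else "Track closely"
    return "Track"

# Log4Shell-like inputs: actively exploited, automatable, total impact.
print(decide("active", True, "total", "high"))  # -> Act ASAP
```

Because the whole tree is plain code, it's debuggable in exactly the sense described: you can replay any CVE's inputs and see which branch fired.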
All right, we'll discuss that one in the pub later. If we look at the full population again, what we see is that we're targeting the CVEs that have been known exploited, across the bands, across the diagram. If we were to use the traditional method, the diagonal method, we'd end up fixing a lot of pink and orange stuff before we got to the right stuff; that's the short and simple version of it. What we want to do is fix the most important things first, and we can see we get this targeting of the most important things this way. OK, let's look at all CVEs. Another one...
and what does that one do? The killing curse. Priceless; if I drop dead, you'll know why. OK, so this is all CVEs. On the left I've shown whether there's a known exploit for those or not, and you can see it's largely spread across the population; on the right it's color-coded by the decision: do we fix it now, later, whatever. We see the bands, a similar pattern; we see that the reds, the most important things, are spread out across both the CVSS score and the EPSS score. Let's zoom in. What we'd also like to see is a lot of green, as in "track closely", and lots of blues,
which we worry about later; that's nice. If we chop off the bottom bit and look at EPSS greater than 0.1, the targeting becomes clearer again: we see the red, a lot of it in that top corner, but not all of it, and we want to get to the right stuff, the highest-risk things, first, and we wouldn't if we used the other method of tracking by the diagonal or using bands. OK, final spell, I think; make it a good one, make it count. Instant death and the Cruciatus curse? I'm loving the happy vibes here. Please fill out a positive first sentiment on the feedback form, and don't use those on me.
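The "decision band first, EPSS within the band" ordering described in this talk can be sketched as a two-key sort; all CVE IDs, bands, and scores below are hypothetical:

```python
# Lower band index = more urgent decision.
BAND_ORDER = {"Act ASAP": 0, "Act": 1, "Attend": 2, "Track": 3}

# (cve, decision_band, epss_probability) - hypothetical findings.
findings = [
    ("CVE-W", "Act", 0.10),
    ("CVE-X", "Act ASAP", 0.02),
    ("CVE-Y", "Act ASAP", 0.90),
    ("CVE-Z", "Track", 0.95),
]

# Sort by decision band first, then by descending EPSS within each band.
ordered = sorted(findings, key=lambda f: (BAND_ORDER[f[1]], -f[2]))
print([cve for cve, _, _ in ordered])
```

Note how CVE-Z keeps its high EPSS score but stays last: the static decision dominates, and the temporal EPSS signal only orders work within a band.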
OK, so this is real data from our DevOps pipeline. It doesn't matter if it's one tool or an aggregation of tools, at one point in time or in aggregate: the pattern looks the same. Here we're looking at the CVSS score versus the EPSS score, by counts, and we can see the CVSS score is spread out across the range, and there's a concentration of low EPSS scores, also meaning that EPSS is not helpful for the vast majority of our CVEs, but it is for some, and that's where it's useful. Doing the traditional prioritization by bands or diagonals, we'd get something like this. But if we do it using our
decision tree, what we get is the CVEs divided up into decision bands: do we act ASAP, act, etc. And we still retain our EPSS score for where it's useful. So we get that static decision, associated with the various parameters that we use from our risk taxonomy, where, using our decision tree, we made these decisions about should we act ASAP, act, attend, etc., across all our decisions, and we get to retain the EPSS score. This is good, but what's better is that we get a gentle on-ramp. At the end of the day it's still people fixing these CVEs, and we want to have, for our user story,
an achievable amount of things to fix. And when we do this, we get this gentle on-ramp of things to fix, which gives that early, easy, valuable win, which is psychologically important for teams. That comes courtesy of splitting out those top two decision nodes; I didn't know that at the time when I did it, but that's the way the data came out. So this is nice: we've taken a large population of CVEs and we've provided a targeted, risk-based prioritization. How do we run all this? Well, the threat and vulnerability aspects are automatable; that's generic data, taking data from our tools and from the various vulnerability landscape databases, updating it daily, and we
get the decisions. Then we can layer in the asset impact and the business and runtime context, and finally there's the remediation context: you know, is it a quick fix, is it a major refactor; all these things play in.

OK, we've covered a lot here. We started at top-level context, we drilled down to cover the vulnerability landscape (you now know more about the vulnerability landscape than the vast majority of your peers), we built a risk-remediation decision tree prioritization scheme from first principles, and we tested it with our test data. Takeaways: know what matters most to you in your DevOps pipeline; know your tools' sweet spots and blind spots. We saw from the survey, where
the result was zero, that people generally don't have a good idea of the data in their pipelines. Do the EDA for that; please, do yourself a favor. Vendors aren't going to help you here, but it's super easy to do. Know where your Paretos are, and know your risk taxonomy and the problems that matter to you. Takeaways for our decision trees: they give more targeted prioritization, which is important, because we need to know what to fix first. And they have benefits over the CVSS score, which, as we showed, is not a good predictor of exploitability, and over the EPSS score, which has relatively low coverage, as in, no signal for most CVEs.
We get the best of both worlds by retaining EPSS with our decisions: we get the static component, and we get the temporal component that EPSS gives us, so if something changes in real time, EPSS will be updated. And we can use EPSS across our decision bands to prioritize; even for our "act ASAP" band we could say, well, we'll prioritize within that band by EPSS score. And we can go to risk-based SLAs with sufficiently granular and understandable decisions. So now we have a unified, prioritized, achievable view, across tools and teams, of what to fix first, and we can optimize our flow of value and software versus risk. The personalized part is a separate project I'm working on; perhaps another
talk. Thank you.
I have to say, B-Sides Dublin crowd, you've been great; thanks for the interaction, and we'll discuss the exact pronunciation of certain Harry Potter spells later in the hallway. Any questions? How are we doing on time? Yep, I'm all yours.

[Question: have you adopted this already, and where?]

So, where have we adopted it: for one tool that I was rolling out. Earlier on we did a rough cut just to see what the data looks like, and now we're back-testing it. It turns out that a lot of decisions, as in how we hooked up the decision tree nodes,
a lot of it was consistent with what we were doing in our new-vulnerability program, where those are the folks in the trenches, you know, who manually curate the CVEs. Lisa is, you know, a veteran and an expert in this space, but a lot of the decision shapes, or decisions, were similar to what we were doing in that program, so it's not that significantly different ultimately. So yes, in that sense. We've also back-tested against our own historical data and our bug bounty data, and what we will typically do is run it in parallel with our existing method and see how it comes out. The big difference is: if you give
developers, or any team in the company, a thousand CVEs and say go fix that, nothing happens. You need an achievable amount: you say "fix these 10 things" and it gets done. Give them a thousand, you might as well give them ten thousand; it doesn't matter. The big difference is you have this targeted prioritization that gives an achievable amount of things to fix; that's the key difference. Yeah, please.
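Mechanically, that "achievable amount" point is just a head-slice of the prioritized list; a trivial sketch (the function name and batch size are made up):

```python
def achievable_batch(prioritized, batch_size=10):
    """Hand a team a fixable slice of the prioritized backlog, not all of it."""
    return prioritized[:batch_size]

# A thousand prioritized CVEs -> ten things a team can actually finish.
backlog = [f"CVE-{i}" for i in range(1000)]
batch = achievable_batch(backlog)
print(len(batch), batch[0])
```

The prioritization upstream is what makes the slice safe: the top ten of a risk-ordered list is the right ten.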
Yes, the major inputs; I probably have a slide on that.
So there is a bonus pack. Yeah, these are the major inputs.
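A sketch of how major inputs like these might combine into a single exploitation signal; the function, field names, and threshold are illustrative assumptions, not our actual implementation. Note the deliberate "unknown" rather than "safe" when only a low EPSS score is available:

```python
from typing import Optional

def exploitation_status(in_incident_response: bool, in_bug_bounty: bool,
                        in_kev: bool, epss: Optional[float],
                        epss_threshold: float = 0.1) -> str:
    """Combine static exploitation indicators with the temporal EPSS signal."""
    # Any static indicator of real-world exploitation wins outright.
    if in_incident_response or in_bug_bounty or in_kev:
        return "active"
    # EPSS only adds signal when it is high; a low score is NOT evidence of safety.
    if epss is not None and epss >= epss_threshold:
        return "active"
    return "unknown"

print(exploitation_status(False, False, True, None))    # KEV entry -> active
print(exploitation_status(False, False, False, 0.001))  # low EPSS -> unknown
```

Missing EPSS data is simply ignored here, matching the point that losing one parameter is not a big deal when the other indicators remain.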
[Question: when do you personally use EPSS?]

So we only use it as one of the factors in deciding if there's active exploitation; that's what we use it for. And then also, having made that decision, we use it for prioritization within those decision bands. The other parameters that went into knowing whether it's known exploited are things like, as we said, whether it showed up in our bug bounty or incident response, or whether it was in CISA KEV; all those factors are an indication of exploitation. So there are many parameters for exploitation; a lot of them are static, and one of the unique benefits of EPSS is that temporal aspect: it changes
over time. So if we don't have it, and we don't have it for a lot of them, it's not a big deal; we just don't end up using that parameter, we use the other ones. Yes, the slides will be shared; the thing is open source, so yes. And I'd like feedback, folks. This is relatively new, and I know from the EPSS special interest group there's a lot of interest in all these pieces. We're kind of blazing a trail, and I'd like input from folks to shape this; it's a good time and place to be. So yeah, I'll share stuff to get feedback.
Super, yeah, super. And maybe get a list, Paul, of the Harry Potter spells we used as well. No, no frogs were harmed in this; it's already gone.
[Question: when you patch a CVE, you also get a long tail of all the things attached to it?]

Yeah, so this is where the remediation part comes in, and the short answer is no, I haven't done that data analysis. I'd like to do a separate presentation on that, because we upgrade at package level, you know, and it fixes lots of things, and ultimately that's a whole presentation itself. So maybe I'll be back next year to cover that, but that's my next area of interest. It's a hard one; I guess there's a lot of research that went
into this, and a similar amount of research to come up with that remediation part, so watch this space. How are we doing for time, folks? Five minutes? Cool, yep.

[Question: if you're going to use the decision tree instead of traditional CVSS, how does that work with audit, certification and SLAs when your auditor comes in?]

Sure, yeah. So in my previous existence, three years ago, I worked in the payment card industry for 20-plus years, and PCI DSS, the Data Security Standard, says if it's CVSS 4 or greater, fix it. All right, so you've got to fix it. I was involved in various, let's say, PCI
boards and all that, but in general PCI moves slowly, let's just say that. And the other important aspect is that if you look at the Venn diagram of compliance and security, the overlap is a lot less than most people think. So you can still use all this to prioritize the stuff you fix within that compliance band: CVSS 4 and above is going to contain a lot of stuff, so you need to prioritize it. You can use it there, but for compliance you may still need to fix more than the security-driven set. What I found, and I used to do certifications for a separate company on separate devices, working with certification bodies, PCI boards and all that, is that depending on the auditor, if you can show that this makes sense, they may be amenable to it. I know what's written in the standard, but there may be some wiggle room, depending on how convinced they are by this and as it matures. So within the EPSS SIG we're having discussions about whether we should engage with CISA, PCI, etc. to introduce this new way, because the flip side is a lot of that stuff doesn't necessarily get fixed. It might get fixed for the audit, but then it doesn't get fixed, because there's too much of it. So do you think in the next five to ten years
it might go more in this direction? Yeah. If you look at what's out there: CVSS doesn't work in terms of prioritization, and "fix everything above four" is silly, but it's what exists. Things are going to change. You saw a US government agency come up with a much more pragmatic approach; in other words, "we think you should focus on the known-exploited things, here's the list, fix them by this date, and here's the decision tree," so that you can focus on exploitation status, which most of us don't do today. So it's moving that way, yes, but it'll take time. It's talks like this, and folks getting interested, that move it along. Yep.
So what would you say [inaudible]?
Yeah, so in practical terms: if all you have is CVSS, then follow the user guide for EPSS and start with that, knowing its sweet spots and blind spots, if that's all you can do. And then CISA KEV would be the other one. So focus on the security aspect. Compliance says if you fix everything above CVSS 4 you're good, but you should be more worried about the security aspects. So I would use EPSS, and whether your CVE shows up in CISA KEV, to prioritize. You get a lot of value from that first-step approach, and you don't need to go off making decision trees just for that. So understand the limits of it, but it's much better than using CVSS alone. A good first step, basically.
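That first-step approach can be sketched in a few lines. This is an illustrative assumption about how one might combine the three signals, not a recommendation from the talk: within the compliance band (CVSS >= 4.0), KEV-listed CVEs come first, and the rest are ordered by EPSS score descending. The data shapes (`id`, `cvss`, `epss`) are hypothetical.

```python
# Minimal first-step prioritization sketch: CVSS for the compliance band,
# CISA KEV membership first, then EPSS as the tie-breaker within the band.
def prioritize(cves: list[dict], kev_ids: set[str]) -> list[dict]:
    """cves: dicts with 'id', 'cvss', 'epss'; kev_ids: set of CVE IDs in KEV."""
    band = [c for c in cves if c["cvss"] >= 4.0]          # compliance band
    # Sort key: KEV members first (False sorts before True), then high EPSS.
    return sorted(band, key=lambda c: (c["id"] not in kev_ids, -c["epss"]))
```

In practice the EPSS scores would come from the FIRST EPSS data feed and the KEV IDs from CISA's published catalog; both are refreshed over time, so the ordering is temporal, as the talk emphasizes.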
So in terms of remediation, possibly. The decision tree might say "act ASAP," but reality factors in: for a given product, maybe you can't fix it immediately, and all of that plays into it, but that's a later stage. In terms of this, not really: what you're looking at is, given a CVE, what's the decision, do we need to fix it, and that's generic across products. Then you get into the more product-specific aspects, like what's the asset value, and that gets a bit more gray. It's tweakable in the sense that you're going from, let's say, binary decisions to "what's the value of this thing," but somebody somewhere needs to know the value of those things. That stuff would likely get tweaked, but using those things as input parameters probably wouldn't change. As for the decision tree itself, like we saw in the graphs and the data, running it on billions of data inputs, the automated parts run very quickly. We're out of time? Okay, last one then.
Yeah, so that's not part of the decision tree, the risk part. The remediation, as I said, is: is a fix available, is it a quick fix, what's the merge confidence? All those factors play into actually fixing it. And if a fix is not available, then you can look at compensating controls. So you go down the list: you can't fix it, can we come up with compensating controls, or do we pull the plug, whatever that list is. Okay. You've been a wonderful audience, folks; thank you very much.
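The fallback chain just described (fix, else compensating controls, else pull the plug) can be written as a tiny dispatch. This is a hedged sketch; the inputs and the returned action labels are illustrative, not the team's actual tooling, and real-world factors like quick-fix effort and merge confidence would feed into the first branch.

```python
# Sketch of the remediation fallback chain from the answer above.
def remediation_action(fix_available: bool, controls_available: bool) -> str:
    if fix_available:
        # Quick-fix effort and merge confidence would be weighed here.
        return "apply fix"
    if controls_available:
        return "compensating controls"
    return "pull the plug"
```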