
Beyond Whack-a-Mole: Scaling Vulnerability Management by Embracing Automation

BSides Las Vegas · 2024 · 44:34 · 135 views · Published 2024-09 · Watch on YouTube ↗
About this talk
Organizations struggle to keep pace with exponential vulnerability growth while remediation capabilities lag behind attack speed. This talk explores a proactive shift toward scalable, automated vulnerability management through standards like SBOM, CSAF, VEX, and SSVC, demonstrating how to filter noise, prioritize by genuine risk impact, and sustainably manage vulnerabilities in dynamic cloud-native environments.
Original YouTube description
Common Ground · Wed, Aug 7, 19:00-19:45 CDT. In the current cybersecurity landscape, organizations are engaged in a never-ending game of whack-a-mole, struggling to keep pace with the rapid increase in vulnerabilities stemming from unprecedented volumes of code combined with an increased reliance on third-party software. Such a reactive approach to vulnerability management is inefficient and unsustainable, as the gap between the discovery and remediation of vulnerabilities continues to widen while the time it takes attackers to exploit known vulnerabilities decreases. This talk proposes a pivotal shift towards a proactive, scalable, automated, and risk-oriented vulnerability management strategy. We'll explore the transformative potential of standards and frameworks like SBOM (Software Bill of Materials), CSAF (Common Security Advisory Framework), and VEX (Vulnerability Exploitability eXchange) to automate, streamline, and enhance the vulnerability management process while aligning remediation efforts with genuine risk impact. Attendees will gain insights into how automation can adapt to the evolving threat landscape, ensuring that vulnerability management is both effective and sustainable in an increasingly complex cybersecurity environment. Speaker: Yotam Perkal
Transcript [en]

So hi everyone, thanks for coming, great to be here again. Today we're basically going to talk about automating the vulnerability management process. Before we dive in, a little bit about myself: for the last five years I was at Rezilion, where in my last role I led the research team. Before that I worked at PayPal in various security roles. I've been in and out of vulnerability management from several perspectives: the practitioner side, building vulnerability management automation at PayPal, and then the research side at Rezilion over the last couple of years. Currently I'm hopefully starting a new

venture. So that's me in a nutshell. Since we're in a spooky mood here at BSides, I thought it would be appropriate to frame this from a three-ghosts-of-Christmas perspective: we're going to get a visit from the ghost of vulnerability management past, the ghost of vulnerability management present, and the ghost of vulnerability management future. In cybersecurity we usually fight invisible enemies, so today we're going to summon these ghosts to help us understand where we're at with vulnerability management and where we can hopefully get to. So let's dive in; let's travel back

in time a bit, if you will, to the early 2000s. The vulnerability landscape back then: we had very limited visibility, and the scope was mainly network and OS vulnerabilities. We lived in a much simpler time, when most of the code we ran was code we had written ourselves or knew intimately. And it was the pre-DevOps era, so change management processes came with a lot of fear and repercussions, and remediation was a struggle. I actually took the time to use the Wayback Machine to look at some screenshots from 20

years ago of how today's top players looked at the time. This is Qualys from 2002; as you can see, not much has changed, and unfortunately the UI is about the same. They had 1,926 different vulnerabilities in their database, which is nice; today we're at 260,000. And as I've highlighted, back then there was an average of 25 new vulnerabilities each week; we'll see where we're at today. This is Rapid7: if you dared to install it past this scary-looking guy, this is the UI you would have seen. As you can

see, a lot of the vulnerabilities are network-oriented, and they advertised testing for over 980 different vulnerabilities, which is nice. And this is Tenable from 2003. So that's how we looked 20 years ago. Let's snap back to the present and discuss how things look today. A lot has changed. We dress better, that's one. But software is now embedded in every aspect of our lives, code is released much faster, we have DevOps and DevSecOps, and our software isn't just ours anymore; it's mostly not ours, actually. We use a lot of third-

party dependencies, whether commercial or open source. The move to the cloud has expanded the attack surface, and it has also changed the paradigm: things that were very static in the past are constantly changing today. Machines go up and down and change IPs; everything is much more dynamic. And attackers have not stayed put; they're quicker at identifying and exploiting the gaps. In terms of numbers, if we were at 25 new vulnerabilities each week back then, today we're at almost 110 daily. So a lot has changed in terms of the

volumes. And I probably don't even need to mention that security teams aren't growing 40% year-over-year to keep up. This visualization demonstrates it well: if we had fewer than 2,000 vulnerabilities in the early 2000s, we're now at 260,000, with almost 800 new vulnerabilities added each week. We've seen that even NVD, for almost a year now, can't keep up with the pace; there are real problems managing these volumes. And the thing is, as you can see in

the lower part of the graph, only a fraction of vulnerabilities are ever exploited, roughly 5%. That's where we're at, and the trend, the gap between the blue line and the red line, isn't going to stop; it's safe to expect the same trend, if not worse, in 2030 or 2035. The gaps will only grow wider. So the landscape has changed, but the way we do vulnerability management unfortunately has not. We're still very reactive; the majority of organizations are still prioritizing based on CVSS scores. By the

way, a show of hands: for those of you actually practicing vulnerability management, or to the best of your knowledge, how many of the companies you work with rely solely or mostly on CVSS to prioritize vulnerabilities? Quite a few hands, okay. So we went from "fix everything", which was common back then, to "fix the highs and the criticals", which you could call progress. But unfortunately it really isn't, because about 60% of vulnerabilities are high or critical, so it's not much of a reduction in terms of scale;

it's not really scalable, and it isn't effective, because as we saw, only a fraction of vulnerabilities will ever be exploited. We're wasting our limited resources triaging and remediating vulnerabilities that have little to no impact on our actual risk posture. And attackers obviously don't rely on CVSS to choose which vulnerabilities to target; for example, 12% of the CISA KEV entries are medium or low severity vulnerabilities. And as I said, it's not scalable: according to Cyentia Institute research, the average organization gets through about 10% of its vulnerability backlog in a given month. So if 57% of vulnerabilities are high or

critical, and you only get to 10% of those, you're not really closing the gap. And I think the most important aspect is that while CVSS is still the go-to for prioritization, the way we use it, basically the CVSS base score, isn't really a reflection of risk, because risk is multifaceted. You probably know the formula: risk is a combination of the threat, the vulnerability, and the impact. CVSS is a measure of the vulnerability alone, so you can't assume it's a good reflection of risk;

it only gives you one very specific perspective. So clearly this isn't working. As we saw, nothing dramatic has changed in the way we do vulnerability management, while a lot has changed, and keeps changing, in our environments. We need to think differently; we need a paradigm shift. We can't continue doing the same things and expect things to change, because they won't. And I think this is a good reflection of where we're at with vulnerability management: we're heads-down, drowning in vulns, without stopping to think that maybe there's a

more efficient way of doing things. Which brings us to our last visitor today, the ghost of VM future. If we try to picture an optimistic VM scenario of the future... well, I can't see all the specifics, and I'm not sure this is it, but at least that's Leonardo's and DALL-E's view. This is after I don't know how many attempts; there were a lot of creepier images of the vulnerability management of the future. I'm giving you

just a glimpse of the nice one. Anyway, we have a scale issue. We have limited resources and can't get to everything, and there are basically two ways to tackle a scale issue: we can prioritize, which is what we're trying to do today, but that's only one aspect; the other is automation, because if we still have a lot of manual processes, it's not really scalable. I'll focus today on the automation aspect. And obviously we need to move from the reactive stance we're in today to a more proactive

stance, and from gut feelings to data-driven decisions. So what does a complete vulnerability management process look like? Basically these are the phases: discovery, assessment, prioritization, reporting, and then hopefully remediation and verification, and then rinse and repeat. In a perfect world, every one of these phases would be automated; in reality, some pieces are more complex to automate than others. Today I'll mostly explore the first three: discovery, assessment, and prioritization. Remediation is also a big challenge, but that's a topic for a completely different kind of

talk. In the interest of time I won't discuss every aspect, but to give some context: in my view, vulnerability management starts even before the discovery step of the cycle we just saw. It should start with the basics: asset management; hardening; code debloat, getting rid of code that isn't used and is just redundant attack surface; removing unused components and packages from your OSes, your images, your applications; regular upgrades and patches even independent of vulnerabilities; retiring end-of-life and end-of-support software; and obviously all the pen testing, red teaming,

breach-and-attack simulation, and tabletop exercises, actually validating your controls. That's also an important part of a complete vulnerability management program, because if you did all these things you'd be in a much better place, much better off, with a lot less noise to start with. So that's one thing I wanted to highlight. In terms of the discovery aspect, I gave a talk last year about the second bullet, security tool coverage and the variance we see between the performance and the results of the

different tools, the false positives and the false negatives, so I won't go too deep into that; if you want, there's a talk I gave last year at Breaking Ground that dives into the subject. We obviously also have the software identification problem, which is still unsolved. We know the state of CPE, and there are initiatives like purl and SWID that try to address the problem, but it still requires community effort. Just as an anecdote: say you have "ftp". It can be the ftp Node package, or it can be the ftp Linux binary. If you match based on CPE, it's

hard for a lot of vulnerability management tools to make that differentiation, and then you get findings for things that aren't really there. Again, it's a topic of its own; feel free to take a look at last year's talk.

I think SBOM is an opportunity to address some of the issues we've discussed. For example, attack surface reduction: how do I know which software I don't use? End-of-life and end-of-support tracking. Drift detection: say I have Chrome in my environment across 100,000 hosts. If I have an SBOM, I can plot all the different versions of Chrome, and the outliers, the Chrome versions that aren't updated, tell me I probably have a problem with the auto-update process. These are things an SBOM can surface. The Pareto principle is also important here, because again, we have limited resources; we're basically talking about risk management, so we need to focus on the vulnerabilities and gaps that are most impactful in terms of risk reduction. If we plot the different packages in our environment, for example, and there's one package that, if we fixed it, would solve a huge chunk of our vulnerabilities, we're probably better off starting with that. And you can take the same perspective with hosts: a host that, if we deal with it, removes a lot of our attack surface; or a package, a container image, an OS image, whatever. It's a really strong principle, and I don't think it's being utilized much in the mainstream VM space today.

There's also surfacing tool and data coverage gaps. Once I connect data from various sources that look at the same data points, say an SBOM generated in my CI from GitHub, GitLab, Snyk, whatever, from source code, the same SBOM from my SCA, and the same SBOM from my binary analysis tool, just the differences between these stages of the SDLC can provide a lot of valuable insight. Maybe my vendor's SBOM lists things I don't see, or I see different things; or I see things in my dev environment that never make it to production because they're dev dependencies. A lot of insight can surface from that. And obviously the main advantage is that SBOMs are machine readable, which allows for the automation aspect we'll discuss.
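The two SBOM-driven checks just described, drift detection and Pareto-style ranking, can be sketched in a few lines. The inventory and finding shapes here are hypothetical, not any particular tool's format:

```python
from collections import Counter

def version_outliers(inventory, component, threshold=0.02):
    """Drift detection: flag hosts running rare versions of a component.

    `inventory` maps host -> {component_name: version}. Versions seen on
    fewer than `threshold` of the hosts carrying the component are
    treated as drift (e.g. a broken auto-update process).
    """
    versions = Counter(
        comps[component] for comps in inventory.values() if component in comps
    )
    total = sum(versions.values())
    rare = {v for v, n in versions.items() if n / total < threshold}
    return {
        host: comps[component]
        for host, comps in inventory.items()
        if comps.get(component) in rare
    }

def top_offenders(findings, key="package", top=5):
    """Pareto ranking: which components would close the most findings.

    `findings` is a list of dicts such as {"cve": ..., "package": ...,
    "host": ...}; pass key="host" to rank hosts instead of packages.
    """
    return Counter(f[key] for f in findings).most_common(top)
```

Fixing the first entry returned by `top_offenders` gives the biggest single reduction in open findings, which is the "start with the package that clears the most vulnerabilities" idea from above.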

That's discovery; let's move on to the big bulk: the assess and prioritize phases, which are basically the triage aspect of vulnerability management. Vulnerability triage is by far one of the most Sisyphean, painstaking processes in cybersecurity; anyone who has done it for a while has the scars to show for it. And I think the reason that's the situation is that a vulnerability is only relevant in context. You can have a thousand instances of the same CVE in your environment, and in each place it can mean something completely different, because you need context to know. You need asset context: is this asset exposed to the network or not, did it have past incidents, what's the asset criticality, what's the criticality of the information on that asset. You need business context: who's the owner, what's the ownership information, am I bound by SLAs or certain policies or regulations. The threat context: as we discussed earlier, only a fraction of vulnerabilities will ever be exploited, so exploitability is a very strong signal. Is this vulnerability likely to be exploited, is it already exploited in the wild, does it have an exploit PoC or not. Obviously the vulnerability context: all the metadata about the vulnerability, the description, the CWE, the CPE, and where the finding surfaced from. Is it from a generic scan, from a pen test, from a bug bounty program? All these matter differently. Runtime context: is this thing even loaded, is it actually being used (this is the reachability analysis aspect), and do I have a compensating control: is there an EDR in place or not, is there a firewall rule or not. And we have the remediation context: what's the operational risk, do I have a fix available, is it code in my control or is there a third party I'm reliant on to fix it, in which case I need some mitigation in place, and how much effort would remediation cost me; do I need to restart the machine or not. That's all the information you need in order to make an educated decision about what's important and what's less important, and on this playing field, where we have limited resources and can't get to everything, it's a must.
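As a sketch, the context pieces just listed could be collected into one record per finding. This is a hypothetical schema for illustration, not any particular tool's data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FindingContext:
    # Asset context
    internet_exposed: bool
    asset_criticality: str          # "low" | "medium" | "high"
    # Business context
    owner: Optional[str]
    sla_days: Optional[int]         # mandated remediation window, if any
    # Threat context
    exploited_in_wild: bool         # e.g. listed in CISA KEV
    exploit_poc_available: bool
    # Runtime context
    code_reachable: bool            # is the vulnerable code loaded/used?
    compensating_control: bool      # EDR, firewall rule, etc.
    # Remediation context
    fix_available: bool
    restart_required: bool
```

The value is less in the record itself than in forcing every one of these fields to be filled by a queryable source instead of a person chasing three other teams.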

The problem is that currently all these data points are very dispersed across various sources and various security tooling, which leads us to doing it manually. So these are the challenges: the data is fragmented, we have data quality issues and completeness issues, and we need to communicate, because different teams often hold different pieces of this context. There's a lot of noise, a lot of false positives and false negatives (again, an entire topic of its own), and our environments are constantly changing. It's not like 2002, when you had a rack and a server that sat there and, unless you updated the firmware, stayed like

that. We're living in a much more dynamic environment, and vulnerability management hasn't really kept up with that pace. And ultimately we need context; again, a vulnerability isn't relevant out of context. Okay, so those are the problems, but we're optimistic here, right? We're talking about a utopian future, so I'll try to touch on several things that I think have the potential to change the current status quo: the Transparency Exchange API, CSAF, VEX and the attestations that go with them, SSVC, and the word that's spoken a bit too much at our conferences lately, AI. Anyway, I won't go too much into that one,

because I think even without AI we can do much better than we do today. So let's start with the Transparency Exchange API, or TEA. Basically it's a standard, format-agnostic API for exchanging supply chain transparency artifacts between systems. It's a CycloneDX community project, and it aims to standardize that API as an Ecma standard. It's fairly early days for the work; you're welcome to check out the working group, by the way. I have references at the end with a link to the project page. The elements

currently in scope are xBOM, so not only SBOM but also hardware BOM, cryptography BOM, AI BOM, SaaS BOM, etc.; CycloneDX attestations; VDRs, vulnerability disclosure reports; VEX, which we'll dive deeper into; and the Common Lifecycle Enumeration, covering the evolution of the product over its lifetime: end of life, end of support, mergers and acquisitions, all that kind of information is in scope too. Again, once you have a way to exchange these artifacts between different tools in a machine-readable way, that has a lot of potential, and I'll show how it becomes more practical in a

minute. So that's one. Next we have CSAF, the Common Security Advisory Framework. How many are familiar with CSAF, just a show of hands? A few, but not enough, okay, great. Basically, CSAF is a machine-readable security advisory; that's the aim. It's an OASIS standard, which is who's driving the project. Today a security advisory can be a PDF, a text file, an HTML page, or an XML file, depending on the vendor. CSAF aims to automate this in a way that is, one, discoverable, so you know where you can find the

resources and ingest them, and two, machine readable. It also has a VEX profile; again, I'll explain VEX in more detail shortly. So basically we want to go from this to this, which you can't see very clearly, so I'll zoom in, but basically it's JSON. It allows you to communicate changes or updates to the advisory over time in a way that is, again, machine readable and can be consumed by machines and tools, which is huge. So here you can see an example. I'm not sure how well you can see the text, but for

example, there's a snippet that says: fixed software, Cisco has released free software updates that address the vulnerability described in this advisory. Then you have a couple of other updates about the vulnerability, for example that this vulnerability affects Cisco devices running a vulnerable release of Cisco IOS or IOS XE software with a specific client feature installed. And the last one is an exploitation update: Cisco is aware that this vulnerability is actually being exploited in the wild. There are a lot of vendors, big vendors, that are already utilizing and

generating CSAF: Cisco, Red Hat, Siemens, Oracle, and more. It's still not enough, obviously, but it's a good start, and I think if we want to get to a point where we automate this process, CSAF or something CSAF-like is a must. Now you have software that can potentially automate the consumption side of security advisories, which is what's written here. Okay, nice, but why stop there? Let's dive into the next aspect, which is VEX, the Vulnerability Exploitability eXchange. Hopefully more people know this one; can I get a show of hands just to get

a feel for where we're at? Not many more, okay. So what's VEX? Basically it's a way to communicate whether a product is or isn't affected by vulnerability X, in a machine-readable way; again, we're all about machine readability and automation here. Is the vulnerability exploitable? It comes in several flavors, which isn't great; it's the classic competing-standards situation, and I'll try to explain the differences between each one. We have the CSAF VEX profile, as I mentioned; we have CycloneDX VEX; we have SPDX VEX; we have OpenVEX, which is

a lightweight version; and we have the CISA VEX guidance, which tries to be more encompassing, general guidance for the format. Basically, VEX allows you to handle false positives. From a vendor perspective, it saves money on support, because you can say: okay, I'm Cisco, I'm Intel, and I'll tell you whether a specific vulnerability is relevant for my router in version whatever, so you don't have to send me an email, open a ticket, or call my support center. It reduces a lot of that burden. But personally, and this is my perspective, I think the real strength of VEX isn't on the

vendor side. It's important on the vendor side, but the real potential is actually on the consumer side, and I'll try to explain why. Also, a VEX can be embedded within an SBOM, but it can also be detached; it's not a must, which is also worth mentioning. There are several flavors, but the principle is the same, so I'll try to touch on the different aspects. VEX basically lets you set a status for a vulnerability, affected or not affected, and this is the same for CISA, SPDX, and OpenVEX. I'll show the differences

between the formats, but the status is either affected, not affected, fixed, or under investigation, and if a product is not affected, you need to provide a justification for why it's not affected. CSAF is a bit different: it looks at things from a product perspective rather than a specific CVE, so you can specify versions. The first version that was affected, when it was first fixed; it's kind of an enum, a list: the known affected versions, the known not-affected versions, when it was last affected, the recommended fix, and so on.
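As a concrete sketch, here's what a "not affected" statement might look like and how a consumer could act on it. The structure loosely follows the OpenVEX statement fields (status plus justification), but the CVEs, the purl, and the helper function are made up for illustration:

```python
# Two hypothetical VEX statements, expressed as Python dicts.
statements = [
    {
        "vulnerability": {"name": "CVE-2024-0001"},
        "products": [{"@id": "pkg:npm/example-lib@1.2.3"}],
        "status": "not_affected",
        "justification": "vulnerable_code_not_present",
    },
    {
        "vulnerability": {"name": "CVE-2024-0002"},
        "products": [{"@id": "pkg:npm/example-lib@1.2.3"}],
        "status": "affected",
    },
]

def needs_triage(cve, product, statements):
    """A scanner consuming VEX: suppress a finding only when a statement
    marks this (cve, product) pair not_affected or fixed; everything else
    (affected, under_investigation, or no statement) stays in the queue."""
    for s in statements:
        if s["vulnerability"]["name"] == cve and any(
            p["@id"] == product for p in s["products"]
        ):
            return s["status"] not in ("not_affected", "fixed")
    return True
```

This is the consumer-side payoff: the scanner can drop the CVE-2024-0001 finding automatically instead of a human closing it as a false positive.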

So that's the CSAF flavor of VEX. In CycloneDX it's called analysis state: you can say something is resolved, or resolved with pedigree, which basically means it's remediated and you have verifiable evidence about the fix, like a commit hash or a diff between the unfixed and fixed versions. Then there's exploitable, in triage, which is like the "under investigation" we saw earlier, false positive, and not affected. And if it's not affected, again you have to provide a justification for why. So again, it's not very different; the

concept is the same. The justifications in the CISA, SPDX, OpenVEX, and CSAF world are these five: component not present; inline mitigations already exist; vulnerable code cannot be controlled by an adversary; vulnerable code is not in the execute path, which gets to the reachability aspect; and vulnerable code not present, meaning the vulnerability is in a specific library, but in this particular product that library, or a specific function, doesn't exist. The justifications for CycloneDX are a bit different; there are nine rather than five, but they're pretty similar. You have things like

requires environment, protected by compiler, protected at perimeter, or protected by a mitigating control. So it's a bit different, but I think we should look at the bigger picture; the specifics matter less than the fact that you now have a way to communicate exploitability status and filter out noise. And if we look at it from the consumer side, the security tooling side, it gives you a machine-readable way to consume exploitability status, and I'll show why I think the consumer side is the real kicker here. You might ask: okay, how do we trust this

thing, and who issues the VEX? That's also a topic for discussion; we can talk about it later if we have time. Briefly, this is where attestations kick in: alongside the justification you can have metadata that says who made the determination, whether it was a human analyst or a security tool, and so on. But again, those are technicalities. So how does this help? What does a process like this look like? Let's say we have a new RCE vulnerability in a library; let's call it fog2k, I don't know.

It's exploitable only via the network, and only under a specific configuration. That's the vulnerability; it's very common, everyone has it. In a world with CSAF and VEX, this is how the triage process looks. First you query your SBOM (can you still hear me? good), and you see that you have, say, 80 instances of that vulnerability present across proprietary, third-party, and open-source code. Okay, so now we have CSAF: I can query and see the security advisory,

and let's say (you can't see it because of the full screen, but it says) that in the CSAF there's a VEX profile stating the vulnerable code is not present. Okay: vulnerable code not present, not exploitable, I have a VEX, it's machine readable, great. So what about the other 16 instances for which I couldn't find a CSAF, because the vendor didn't provide one, whatever? Now we go to the next phase. Let's say I have a reachability analysis tool that can tell me whether something is reachable, whether it's actually being used. And then we see, okay,

let's say eight of the 16 instances are reachable, but eight are not. If they're not, let's issue a VEX statement saying vulnerable code is not in the execute path, so we've gotten rid of those as well. Then, for the other eight, maybe I have a network tool that can tell me whether they're exposed; we know the vulnerability is only exploitable via the network, so if an instance is protected and not exposed to the internet, it's not relevant either. So we issue a VEX statement with the justification that the vulnerable code cannot be controlled by an adversary. That takes those eight down to zero, nice. What about the

other 20 instances, in our proprietary code? Again we have reachability analysis, so let's query that: 15 are not reachable, not in use; great, again vulnerable code not in execute path, and we're left with five. Again we have our network tool that can tell us what's network-facing; let's query it, and we learn one instance is not network-facing; great, again vulnerable code cannot be controlled by an adversary, and we're left, out of 100, with four. And for those four, maybe we have application configuration data; remember, this is only exploitable under a specific configuration. Even if a human analyst only has to look at three or four instances instead of 100, that's good, but if you have a security tool with visibility into the configuration that you can query, you can automate that aspect as well. Let's say three are not exploitable because they don't use that specific configuration: again, you issue a VEX statement, vulnerable code cannot be controlled by an adversary, and you're left with one. This is the only one you need to address, and the entire triage process was automatic. It won't remove all the false positives in the world, but you can filter out a lot of the noise. That, the way I see it, is the real strength.
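The funnel just described can be sketched as an ordered pipeline of checks, where each stage consumes some machine-readable context (vendor VEX, reachability, network exposure, app config) and emits a VEX justification for the instances it rules out. The stage predicates and instance fields here are hypothetical:

```python
def triage_funnel(instances, stages):
    """`instances`: list of finding dicts. `stages`: ordered list of
    (justification, rule_out) pairs, where rule_out(instance) -> True
    means the instance is not exploitable for that reason.
    Returns (vex_statements, still_open)."""
    vex_statements, remaining = [], list(instances)
    for justification, rule_out in stages:
        still_open = []
        for inst in remaining:
            if rule_out(inst):
                vex_statements.append(
                    {"instance": inst["id"],
                     "status": "not_affected",
                     "justification": justification}
                )
            else:
                still_open.append(inst)
        remaining = still_open
    return vex_statements, remaining

# Hypothetical stages mirroring the walkthrough above.
stages = [
    ("vulnerable_code_not_present",
     lambda i: i.get("vendor_vex") == "not_affected"),
    ("vulnerable_code_not_in_execute_path",
     lambda i: not i["reachable"]),
    ("vulnerable_code_cannot_be_controlled_by_adversary",
     lambda i: not i["network_exposed"]),
]
```

Whatever survives every stage is the short list a human actually needs to look at; everything else leaves the funnel with a machine-readable reason attached.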

So this was one example, maybe a bit optimistic. But we have another layer, which is SSVC, the Stakeholder-Specific Vulnerability Categorization. Again, a quick show of hands so I know where we're at... okay, none. Good, or not good. Basically, it's a methodology for prioritizing vulnerabilities based on the needs of specific stakeholders, because different stakeholders have different perspectives, different needs, different risk appetites, different ways of making decisions about risk. It's a transparent way of doing it — basically a decision tree, I'll show it in a bit — it's explainable, it's modular so you can change it, and most importantly it can take into account multiple facets of risk, which brings us back to the early part of the talk.

So what does that look like? This is a specific example of such a decision tree. Let's say the first layer in the tree is exploitation, the threat context. Here I take into account CISA KEV, EPSS, and threat intelligence to understand how likely this vulnerability is to be exploited: maybe there's active exploitation, maybe it's highly likely to be exploited, maybe it's not likely. (The specific decision nodes matter less than the concept.) If I have active exploitation, obviously that's one thing. The next phase in the tree is the vulnerability context: let's say I want to see whether the vulnerability is automatable. What does automatable mean? The attack vector is the network, and exploitation requires no privileges and no authentication. If it is automatable, I decide that's more important to me than if it's not — again, it's a metaphorical discussion. Then I get to impact, where I can take into account aspects like asset criticality, data exposure, whether remediation would require downtime, the potential financial loss, and so on. Now I make the decision: if the impact is high, I act now; if it's medium, maybe I attend to it soon; if it's low, I just keep track of it and get to it after I finish everything marked "act."

So this is the tree, and you can see that as you go right, you get to things you're tracking but that are far less important. If exploitation is not likely and it's not automatable and the impact is low — or even if the impact is high, but exploitation is not likely because you have no indication, a low EPSS score, a hard-to-exploit vulnerability, basically all the decision points we just walked through — then maybe I don't need to treat it now. With limited resources, sure, I'll get to it eventually, but I need to start somewhere. It's the same playing field: I have limited resources and I need to decide where to invest them.
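The three layers above can be sketched as a small decision function. The outcome labels follow SSVC's deployer tree (track, track*, attend, act), but the branch logic here is illustrative — a stakeholder-specific tree you could tune — not the official CERT/CC tree:

```python
def ssvc_decision(exploitation: str, automatable: bool, impact: str) -> str:
    """Illustrative SSVC-style decision.
    exploitation: 'active' | 'poc' | 'none'
    impact:       'high' | 'medium' | 'low'
    """
    if exploitation == "active":
        # Actively exploited: act unless the impact to us is negligible.
        return "act" if impact != "low" else "attend"
    if exploitation == "poc":
        # Public proof of concept: urgency depends on automatability/impact.
        if automatable and impact == "high":
            return "act"
        return "attend" if impact != "low" else "track*"
    # No known exploitation.
    if automatable and impact == "high":
        return "attend"
    return "track"
```

Because the tree is explicit code (or an explicit table), it can be reviewed, versioned, and changed when the organization's risk appetite changes — which is the transparency and modularity argument made above.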

So let's try to put it all together. This is how I envision it: you have the scan results, and then you have CSAF and VEX, which can deterministically take away all the things that don't apply, because you have context. And I'll caveat that: "not exploitable" holds only at a single point in time, and our environments are dynamic, so this has to be continuous. Maybe something isn't loaded today but will be loaded tomorrow; something isn't in use today but will be in use tomorrow; a configuration isn't there today but will be there tomorrow. This is where the security tooling has the capability — hopefully, if it's applied correctly. Again we come back to the basics: having the coverage and all the aspects we talked about initially, so the tools can provide that insight. Then, for everything I'm still not sure about, we come back to the risk-management aspect: I can take the SSVC model, which is basically a decision tree — the same logic you already apply when deciding whether to treat something or not, only formulated and communicated, internally and externally, as "this is how we manage risk here."
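The end-to-end flow just described — scan findings filtered by machine-readable VEX statements, with the remainder ranked by an SSVC-style decision — might be sketched like this. The data shapes, the `decide` callback, and the rank order are all assumptions for illustration:

```python
# Work the queue from most to least urgent (SSVC deployer outcomes).
ORDER = {"act": 0, "attend": 1, "track*": 2, "track": 3}

def pipeline(findings, vex_not_affected, decide):
    """findings: list of dicts, each with a 'cve' key plus SSVC inputs;
    vex_not_affected: set of CVE ids ruled out by VEX statements;
    decide: callable mapping a finding to an SSVC outcome label."""
    # Step 1: drop everything VEX says we are not affected by.
    remaining = [f for f in findings if f["cve"] not in vex_not_affected]
    # Step 2: run the risk decision on what is left.
    for f in remaining:
        f["decision"] = decide(f)
    # Step 3: hand back a prioritized work queue.
    return sorted(remaining, key=lambda f: ORDER[f["decision"]])

# Hypothetical CVE ids; one is suppressed by VEX, the rest are ranked.
findings = [{"cve": "CVE-0000-0001"}, {"cve": "CVE-0000-0002"},
            {"cve": "CVE-0000-0003"}]
queue = pipeline(findings, {"CVE-0000-0002"},
                 lambda f: "act" if f["cve"].endswith("3") else "track")
```

In a real deployment, `decide` would be the organization's own SSVC tree and the VEX set would be rebuilt continuously, for the reasons about dynamic environments given above.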

Then I'm left with the most impactful vulnerabilities; I start there and work my way up. That's the way I see the future of vulnerability management, and how I think we can move away from the reactive whack-a-mole stance we're in now. And I'll say this: the future is already here. A lot of these things aren't science fiction. They're obviously still works in progress, with a lot left to do, but some of them are here today. The future is already here, it's just not evenly distributed. And this paradigm shift is inevitable: if you don't feel the pain today, you'll feel it tomorrow.

If you don't feel it tomorrow, you'll feel it in a year — NVD is certainly feeling it today, and has been for a few months now. So something has to change. Start thinking about how you define risk in your organization; that process alone is super valuable, and you can start simple. It doesn't have to be all the things: even a decision tree with two nodes is something. And this is something I can't stress enough: to effectively manage risk, you have to take into account many sources of context — basically the various aspects of risk.

So yeah, we've traveled through time: we saw the ghosts of the past and the promise of the future, and hopefully we'll get to a future where we're not afraid of ghosts, or of vulnerabilities. I'll say it again: people often want a single number to tell them how to prioritize. Give me the CVSS score — is it a nine, is it a five? It's nice, but it's not realistic. Risk is much more complex than that, and I think the key is how we model that complexity into our automated processes, and how we integrate the different sources of context we have into a decision.

So that's it for me. If you have any questions, I'd love to take them. This is a topic I'm very passionate about and love talking about, so if anyone's interested, feel free to reach out, whether on LinkedIn, on Twitter, or in person; I'd be happy to discuss it further. Thank you all.

We do have a few minutes for a couple of questions. The references for some of the things I talked about are also up here, if you want to take a picture. Just go ahead and raise your hand and I'll pass you the mic.

[Audience] Okay, so this is cool, and I agree it's the inevitable future. But today, when I have vulnerabilities, I have an auditor who tells me what I have to fix. How do we convince the auditors that this is the way to go?

That's a great question, and I think it's key. I could have a crystal ball — actually, I have one now, but I'm not using it — because even if I told you exactly which three vulnerabilities out of the 100,000 will be exploited in your environment by an adversary tomorrow, your auditor would still tell you, "okay, but you still need to patch everything CVSS 4 and above." That disconnect has to change, and it's something I'm very passionate about. What gives me a sliver of hope is that most auditors no longer say "fix everything"; that used to be the case, so things can change. It will take time. I think the data-driven approach, the explainability, the evidence — those are crucial for convincing the auditor, and this is something we as a community must work together to change, because otherwise I can talk until I'm out of breath and nothing will ever change.

[Audience] It goes beyond auditors — it's also compliance with government regulations.

Yeah, government regulations too. Hopefully the trend continues; eventually the chips will fall, it's inevitable. I think we need to make sure it happens sooner rather than later, and we need to drive that change, because this is the only way I see of getting on top of this. So thanks for raising the awareness. Cool, thanks everyone, thank you very much. [Applause]