
Navigating The Volatile Vulnerability Landscape: Strategies For Resilience

BSides Munich 2025 · 25:52 · 33 views · Published 2026-02 · Watch on YouTube ↗
About this talk
Jerry Gamblin examines the fragmentation of vulnerability intelligence systems and the crisis in CVE data quality. He traces how the National Vulnerability Database has become overwhelmed by exponential CVE publication growth, explores emerging alternatives including the European Vulnerability Database and open-source feeds, and proposes a practical framework for organizations to build resilient vulnerability management programs that prioritize exploitability over CVSS severity.
Transcript [en]

Thank you everybody for being here today. Just a little about me: my name is Jerry Gamblin. During my day job I work for Cisco as a principal security engineer in their CTO's office, leading their threat detection and response group. I came from a startup; we had a successful exit about 5 years ago, so I have a little bit of free time on my hands. So I started something called rogolabs.net. The goal of the lab is to let me publish open-source products that make threat intelligence available to everybody. We'll get to it a little later on, but we're getting to the point where vulnerability intelligence is only for the big companies who can afford it.

And there's becoming kind of a poverty line: if you work at a small or medium business, or at a university, you might not be able to afford €50,000 a year for a vulnerability data feed. So you're not getting the data that you need to protect your company. So let's talk about the hidden cost of broken data. The CVE program is in a time of growth and reshaping right now. The question is no longer if you'll need to diversify where you get your vulnerability data from; it's how soon and how quickly you

as an organization can do that. We'll go through how and where you can pick up those data sources as we move along in the talk. We'll go through three sections here. This will be fairly quick; I got the slot right before lunch, and I like lunch, so there's no chance of me going over. We will be done on time, I promise. If anybody has any questions during the talk, just raise your hand and we can take them as we go. We need to talk about diversification versus balkanization, and that's a really fine line. And what I want to think is

that when everybody puts out CVE data, it's free, it's available, and everybody can read the same data. If the EUVD puts something out, it can go into the NVD correctly. If the NVD puts something out, it can go into the EUVD perfectly. And the data is available for everyone. We're at a knife's edge right now, where just one little push either way can make a government organization make its data unavailable to everybody else. I know this is a European conference, but we'll get to some of the issues with the US-based CVE program. We've had a lot of budget issues in the US lately, and they

stopped funding NVD for about 3 months. That stopped CVSS scores and CWEs and CPEs from being added. And then we had a little hiccup with the CVE program's budget earlier this year; it lapsed by two days. Everybody thought it was the end of the world, and it really spun up a bunch of things that had been sitting in the background, like the EUVD and some of the other global CVE programs that are starting up. And then there's the analysis delay. The issue there is that the NVD has four full-time people who look at every CVE that is published: each puts in a CPE, puts in the

CWE, puts in the CVSS, by hand. It's not automated. They were doing a great job; they were able to keep up when it was 20,000 CVEs a year, and it was probably an okay, cushy job for the federal government in the US. In the last 3 years, we've had about 30% year-over-year growth. Last year, we broke 40,000 CVEs published for the first time. And this year it's going to be closer to 55,000, I think was my last estimate. So we'll have about 55,000 CVEs published this year that need to be hand-checked, and it's just not scaling. What people who don't spend a lot of

time in this area don't realize is that the National Vulnerability Database is run by NIST, which is our measurement organization. They are responsible for making sure that when I buy a gallon of gas in the US, it's one gallon; when I buy an egg, it's the right size egg. Right? How they got kind of put into running the NVD is a long story about the federal government and about how people try to protect their own kingdoms in a bureaucracy. It's really interesting, but that's not the story for here. The story is that their whole job was to take CVE data and enrich it for other federal organizations to use, and not for the

general public. They have no mandate or charter to provide NVD data to the public. They've been doing it for the good of the community, as kind of a social contract. But they're kind of at their wits' end, and said, "This isn't what we signed up for. We can't keep up with this. You're not funding it well enough." So they've pulled back and started only enriching CVEs that the United States federal government uses and asks them to. So we'll talk about that. That should have popped this slide up. So, here's where we're at today. Close to 90,000 CVEs have been deferred. That

just means that they took about half of the CVE database and said, we're never looking at these CVEs again. They said anything that was published before June 1st, 2018 will be put in deferred status; it's just long-term storage. So if those start being exploited again, we'll never update those records. Then we have the awaiting-analysis bucket, which is about 35,000 CVEs; just over 50% of all CVEs published in the last two years have not been analyzed. They're doing about 135 to 140 CVEs a day. So you can do the math: how long would it take, at 140 CVEs a day on a 5-day working week, to clear the

backlog? It's going to take them years to ever catch up, if they ever can. You can find this graph, and a lot of other graphs on this subject, on a project RogoLabs runs called cve.icu. It's updated every 4 hours and puts out graphs like this. So you can take pictures of the screen, but if you want to see today's numbers, they're on there and updated automatically. I spend a lot of time talking to people about CVEs and about how they run the vulnerability management programs in their companies. And it really comes down to a data quality crisis. We're in a data quality crisis with CVEs because of the way the CVE program was structured in

1999: they built the program to just publish vulnerabilities for red teamers. It was built as a list of vulnerabilities for people doing offensive work to go and look at. Over the last 25 years, it's turned into a much-needed tool for blue teamers, but the focus of the program, and who runs the program, has not changed. It only takes three pieces of data to publish a CVE today: you need a CVE ID that the CVE board gives you, you need a date published, and you need a description. That description has no limitations on how small or how large it is. So a lot of times you see CVEs like

people talk about, the unicorn CVEs or the funny CVEs that come about; those are valid CVEs because of the way the CVE program is set up. We tell people all the time to please come help us in the consumer working group, which just started with the CVE program, to push and say: no, you have to have a CVSS score or a CWE or a CPE or a purl before we're going to allow you to publish this, because this is needed data. Those discussions are going on right now. If you're interested in being part of those discussions, I'm happy to get you onto the consumer working group for CVEs. We do a European-friendly time

meeting one week and an Asia-Pacific time the next week. I think we use 7:00 a.m. Central, so that's like noon European time, and then we do 7:00 p.m. at night, which is a good time for people in Asia. So, about two years ago, CISA decided that they knew the CVE program was going to fail and that they needed to do something to help, once again, US federal agencies that use the data. So they put together what they call their Known Exploited Vulnerabilities list. It's a list today of about 1,500 CVEs, they update it daily, and it says: here are the CVEs that we have seen in attacks, that we

know are being used by the bad guys to attack your networks, to attack federal networks; you have 90 days to go and patch them on all your networks. Which is great, right? But you can't ask any questions about it. I love the CISA KEV. I'm friends with people who work at CISA; I was on a panel with them at DEF CON about this. If you just trust the federal government, this is great. But if you say, hey, how is this CVE being exploited? Who used it, where, when, why? They'll give you none of that information. They literally just say: patch this CVE. So it's a great place to start, but if you have any

questions about how it was actually exploited, or why, it's a black box. I understand their reasoning. They don't want to explain to me every time I ask a question, or to 100,000 other people, where they get their threat intelligence from. They just want a nice simple list that they can tell every agency to patch. So that's all the CISA KEV is. I also speak to a lot of AppSec teams, and AppSec teams are the people who are getting the brunt of the CVE problem. To be completely honest, people who run Microsoft shops or Linux shops are normally in an automated patching cycle now. So that just runs

and unless you've done some really hard stuff with Windows 11 and the newer servers, it's almost impossible to turn off auto-patching. So we're seeing fewer and fewer patching issues with people doing infrastructure patching, outside of VPNs and routers, and we can talk about those later, and more with people who have to work on software. It's just terrible. GitHub Security Advisories is probably the best place to look for data on the open-source projects you use in Python or Ruby or any other open-source language. And they're trying, but they're not exactly able to keep up either. They have about 130,000

advisories in the global GitHub Security Advisory database today. So if you use those ecosystems, there's a good chance that something running in your stack at work has a GHSA advisory and not a CVE. And if you're not pulling this data today and looking at it, you will miss it. So let's talk about the global shift. ENISA is amazing. They came online right after the CVE budget scare and said, we're going to do the European version of the database, and we're going to keep it consistent with the CVE JSON 5.1 schema. So the CVE program and the EUVD are able to exchange CVE records as of today with no

problems. Where they've stepped out is this bottom part here, and it's super important: they are now scoring based on verticals. They will look and say, okay, for critical infrastructure this CVE's CVSS score is now two points higher, because we know that critical-infrastructure organizations really need to look at this. Health care is this, and finance is this. It's a really neat project. So if you have any chance to interact with the ENISA team doing this, through any avenue, tell them what a great job they're doing; that would be great. We need things like this. We need more people to look at

CVEs and say, hey, this is really an OT threat, so if you do OT, you really need to look at this. Or, like the last speaker was saying, if you're an AWS-heavy place, these are the CVEs you really need to look at if you're a cloud-first shop. So there are two big databases out there, and sometimes I think Google on the left and Microsoft on the right are racing to see who can get the biggest database. osv.dev is from Google. It looks more at npm and Ruby; they do some PyPI stuff; Go, of course, because that's their language; and Maven. If you're interested in those

stacks or you use those stacks in your vulnerability systems, you really need to be pulling your data from osv.dev. Everything in GitHub Advisories is based either on stuff hosted on GitHub or stuff reported to GitHub, so there are a ton of open advisories there too. There is a little bit of overlap, but osv.dev is still hand-entered most of the time, so the data quality is a lot better there. GitHub Security Advisories are great, but they're only as good as the data people put into the forms on their GitHub repositories. And then we get to this part. I'm a capitalist. I like being paid. I like

making money. I don't think that's a bad thing inherently. But if you have the money and you run a TI program, you can go out and buy Recorded Future, Mandiant, or CrowdStrike Intelligence. Those are the three leaders. They're great, but they are not for everybody. They leave a lot of people in the lurch, as we say in the US: you just can't afford to pay for vulnerability intelligence when your core router is out of support because you can't afford that bill either. The data is there, and their data is good. But in a perfect world, I don't think there should be the ability to sell

vulnerability data to the public. I really think that should be an open and free marketplace, just because it's so important for people to have access to that data. So let's talk about the fragmented landscape challenge really quickly, and then we'll have some time for questions. If I were sitting down with your organization today, here is the matrix I would draw up to help you understand how to implement a modern vulnerability management program in the environment we're in. We would start with exploitability over severity. We would kind of throw away CVSS today, because we know that it's not getting

applied to everything, and we know it's not there. So we would look everywhere we can to see which vulnerabilities are being attacked, and how. I have a website called patchthis.app that has a list of 7,000 CVEs pulled from all of the main tools, like Metasploit, Wiz, etc., that are known to be exploited. It's a free, open-source list, so I would start there; everybody can see it. Then we look at the context of the CVE. We don't do enough of this. We say, oh, there's a CVE in this product we use, we've got to patch it right away. But

if it's not an RCE, maybe that can wait, and maybe we should look at patching harder and older vulnerabilities first instead of just firefighting all the time. I would use the CISA KEV catalog. It's not perfect; like I said at the beginning, they don't tell you why and how they're seeing it exploited, but it's small enough now that most organizations can absorb it with little risk or change to their patching schedule. And I work on a project called EPSS. What it does is take every CVE and estimate the likelihood that it'll be exploited in the next 30 days, giving each CVE a number between zero and one. So you can then go and run

some basic data queries and say: here are the products I know are in my network; here are the ones with the highest EPSS scores; these are the ones most likely to be exploited in the next 30 days. And patch from there. The next step is to diversify and automate your intelligence. If you don't use VEX or SBOMs yet for your products, or you don't ask your vendors for those when you buy products, you really should. It's a way to understand what is being built into your software and how your software is being built. I don't know about in Europe, but in the United States you are now

required to provide an SBOM with any hardware or software purchase. I know that my employer, Cisco, has spent a lot of time automating that. So they are available, they are interesting to look at, and they do help you know what is on your network at a base level. We talked about VEX, we talked about SBOMs, so we can skip this slide since we're running to the end here. So, the future is decentralized and AI-focused. I think I'm the first person to say AI from up here; we had an LLM talk that was really good. We're going to have to get to the point where we can successfully figure out

vulnerabilities, what they're doing, and what their criticality rating is, by AI. We've spent a lot of time on this. I've spent a lot of time on this. We can get to about 90% correctness so far, which is good for a math test for me, but probably isn't great for vulnerability data, when we know we're sending out 10% that's wrong. And people say, why don't you just go with that? Because we can't tell you which 10% is wrong. So you end up having to hand-check everything anyway. We're really in a place where we know AI is coming. We just know that it's not, you

know, GPT-5. We don't think it's GPT-6. We'll see what 7 and 8 bring along, and hopefully we can get to the point where it can take on some of that responsibility. Federated intelligence: we've talked about this. That's the main reason I'm here, to talk to a European audience and a German audience about how important the work ENISA is doing is, how you should support that and talk to the people in that organization and in the EU about funding and making sure that stays strong, and about how US-centralized the CVE program is today. Everybody in the US I talk to about this who

is passionate about the CVE program really thinks we need to take a more worldwide stance on this, and getting other countries to contribute to something that's a public good is really hard. I mean, everybody loves a paved road; nobody loves to pay taxes. So having something like ENISA step in to basically do what the US has been doing for free for 20 years is always a hard sell. And we're looking at Asia and Africa too, to come along. So we're spending a lot of time, as the CVE program, talking to these countries and to companies in these countries to really try to bring that

federated intelligence view together. So, just some key takeaways: prioritize with EPSS, diversify your intelligence sources, automate with VEX and SBOMs, and prepare for fragmentation. I hate to think that's where we're going; it would be great if everything stayed together. But I am not willing to bet money that in two years I'll be standing here and there won't be three or four different vulnerability standards that companies have to deal with. And with that, thank you, and I'll take some questions and we can go to lunch.
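[Editor's note] The "exploitability over severity" triage the talk describes (KEV membership first, then EPSS score, ignoring CVSS) can be sketched in a few lines of Python. This is an illustrative sketch, not a tool from the talk; the CVE IDs, EPSS values, and KEV membership below are invented placeholders, and in practice you would feed in the real CISA KEV JSON and EPSS CSV feeds.

```python
# Sketch of exploitability-first triage: order findings by
# (1) presence on the CISA KEV list, then (2) descending EPSS score.
# All data below is made up for illustration.

def triage(findings, kev_ids):
    """Return findings ordered KEV-first, then by descending EPSS score."""
    return sorted(
        findings,
        key=lambda f: (f["cve"] in kev_ids, f.get("epss", 0.0)),
        reverse=True,
    )

# Hypothetical inventory: CVE ID plus an EPSS probability (0..1).
findings = [
    {"cve": "CVE-2099-0001", "epss": 0.02},
    {"cve": "CVE-2099-0002", "epss": 0.91},
    {"cve": "CVE-2099-0003", "epss": 0.40},
]
kev_ids = {"CVE-2099-0003"}  # pretend this one is on the KEV

for f in triage(findings, kev_ids):
    print(f["cve"], "KEV" if f["cve"] in kev_ids else f"EPSS {f['epss']:.2f}")
```

Note the ordering this produces: the KEV entry comes first even though its EPSS score is lower, matching the talk's point that confirmed exploitation outranks a probability estimate.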

So, thank you. We'll have a short time for some questions. I see two hands in the middle. Okay, you catch the microphone. >> Thank you. Very good presentation. On your takeaways, you are asking us to prioritize with EPSS and KEV, which is great; it's something we are doing. But on the diversification side of things, the challenge is that, for example, EPSS and KEV are built on top of NVD, right? And CVE. So when we diversify, we have an enriched data set that we can enrich with EPSS and the CISA KEV. >> Yeah. >> But whatever comes from GitHub, I cannot enrich that, right? So I think

that's the challenge I have with diversification. >> Diversification is hard, and it really takes a lot of individual work that shouldn't have to happen. So I'm hoping some open-source tools come together. It would be nice to be able to go back through the GitHub Security Advisories and say, hey, you have four advisories for this library package but no CVEs, and we know this CVE covers it, so let's soft-link these together. That hasn't been built yet. It needs to be built; I can't build everything. So if you have an idea and you want to take that on, that would be

great. But yeah, you diversify, and then we need the federation part, which is the hard part: getting everybody to speak a language that we can easily match. And we're not there, but I'm definitely with you on that statement. >> Let's build something together. And I think we should go to lunch. >> Whatever you want to do. >> You're here during lunch? >> Yeah, I'll be here. Yep. >> We have a question in the back. >> Are you aware of a CVE Numbering Authority (CNA) program for the EU? >> The EUVD has not launched a CNA

program, as far as I can tell from reading their minutes. We can talk at lunch. They're growing; they're doing some good things, but they have some growing pains to get through. >> Okay. Then I just have two short announcements. The hacking village for the badge is just back at the registration and is open in the afternoon, so if you want to, go solder your badges. And the second important announcement is for lunch: you go out past partner K, then through the two doors, and you should find it. And with that, a short round of applause for

our speaker.