
The Product Security Imperative: Lessons from CISA

BSidesSF · 2025 · 31:10 · Published 2025-10
Speaker: Jack Cable
Category: Policy
Style: Talk
About this talk
The Product Security Imperative: Lessons from CISA
Jack Cable
Policymakers worldwide have recently taken up product security, making topics like memory safety prominent. In this talk, hear from former CISA Senior Advisor Jack Cable on lessons learned leading CISA's Secure by Design initiative, and what a shift towards product security means for the industry.
https://bsidessf2025.sched.com/event/dc4ef991adc134411fd9ec5935ea28df
Transcript (en)

Hi everyone, thank you for attending. I'd like to introduce Jack Cable with his talk, The Product Security Imperative: Lessons from CISA. For the Q&A, please access Slido via bsidessf.org and select Theater 9. Awesome. Well, thank you so much everyone for coming in. I know this is the only thing between you and lunch, so I'll try to really draw this out and make sure it goes on a while. Anyways, my talk today is going to be on CISA, and this is kind of fun, because it's the first talk I've given since leaving CISA a couple months ago. And I was

at CISA for a couple years leading some of the secure by design work; some of you might be familiar with that. So I really want to take a moment here to step back into what that looked like inside of CISA, and I'll make sure to leave time for questions too. A bit about me: I come from a background in security research and computer science. I got into security through bug bounty programs, did a bunch of those, and got into the top 100 ranked bug bounty hunters on HackerOne. I studied computer science at Stanford, and worked at places like Vanta as well as the Pentagon at

the Defense Digital Service. I did a fellowship in the Senate doing cybersecurity policy, wrote a bill on open-source software security, and then most recently was at CISA for two years leading the secure by design initiative as well as open source software security. I chose to leave a couple months ago, and from what I've seen it's been, you know, very uneventful. Absolutely nothing noteworthy at all has happened since then. But I decided to start my own company called Corridor, where we're using AI to help companies build products that are more secure by design. But really for this talk, I want to dive into the secure by design work at CISA.

So, if you're not familiar, CISA launched the secure by design initiative in 2023. This came out of the White House's National Cybersecurity Strategy, which called for a fundamental shift of the burden of cybersecurity responsibility away from those least capable of bearing it, whether individuals, small businesses, hospitals, or school systems, and onto those best positioned to bear it. In particular, that's the technology manufacturers building the products that underpin every aspect of our critical infrastructure, every aspect of our daily lives. So we launched this at CISA in 2023 and followed it up a year later: I built out the secure by design pledge, where we got commitments from hundreds

of companies to build products that are more secure by design. But what I really want to focus on here is that we're now about a year later, which is the timeline by which companies who took the pledge are supposed to report on it. So let's check in. Let's see how that went, dive a bit into the origins of secure by design at CISA, and see how things are going. As mentioned, you can see on the CISA site there's a whole bunch of companies who've signed this, but I really wanted to see: okay, did this actually affect things? So naturally I went to the place where you get people's opinions. You go to Twitter

and I learned things. I learned that the pledge was substantially meaningless: you know, a cynical exercise by both CISA and industry with the aim of concealing a lack of influence by the former and a lack of concern by the latter. It's also, apparently, utter nonsense, and I supposedly have to know better. So maybe that requires me to do a bit of introspection; still working on that part. But there is the optimist in me that thinks that even though change won't happen overnight, progress is still progress. Let's unpack this in the talk. Taking a step back: the secure by design pledge, as

mentioned, was launched in May of last year, so we're coming up on the one-year anniversary in just about two weeks. What we set out to do in the pledge is to get companies to commit to doing better. And again, going back to the National Cybersecurity Strategy, this recognizes that in the vast majority of the cybersecurity attacks we see, whether by cyber criminals or by nation-state hackers, like the People's Republic of China's targeting of our critical infrastructure that we're seeing most recently, the attackers get in not through mistakes that end users are making. All too often it's due to either an insecure default

configuration in a product or a vulnerability that ultimately was preventable. So the questions we want to ask are: how can we get technology manufacturers to take on more of this responsibility? How can we stop blaming the end users, stop blaming the intern who configured a poor password or the IT technician who failed to apply a patch, and start asking why there wasn't a secure default password in the first place? Why was this SQL injection or command injection or memory safety vulnerability present in the product in the first place, when we've known how to prevent these vulnerability classes for decades? So that's what we set out

to do with the pledge. And it really started from a point of trying to encourage action from industry. Back in December of 2023, I wrote up a document that was my wish list for what I wanted every company to do when it came to secure by design. It was very ambitious: no memory unsafe programming languages, no default passwords, everyone using multifactor authentication. We circulated it around to a couple companies, and the response we got back was basically: okay, this is great, but we'd never commit to it. What's in it for us? We're not going to just go and do this, because it will cost us a lot and there's no

value in it for us. So this set off a months-long process of trying to get buy-in from industry to move the ball forward in a way that they would actually also benefit from. Ultimately, in my mind, what this came down to was a lot of peer pressure. That's the whole idea of a pledge, right? It's not regulatory; there are no requirements to do what's in it; it's something that companies have to voluntarily step up and do. So we worked with industry to fine-tune that list into something that in my mind was still ambitious but that, over time, companies felt more comfortable committing to,

especially as they learned, for instance, that if one of their competitors signed on, they were ultimately going to start getting questions about why they hadn't. Through that we were able to grow the list from just a handful of companies to 68 at launch, and now over 300 companies who've committed to bettering their security. So we were able to get commitments there. And what does the pledge entail? It has seven goals, which you can see here, around actions like increasing the use of multifactor authentication for your users: again, making sure that it isn't the responsibility of the end user to

have to think about it and go configure MFA. Why can't that be the default configuration, given that we know it prevents the vast majority of credential theft attacks, for instance? The pledge also has actions like publishing a vulnerability disclosure policy, allowing security researchers to report vulnerabilities to the manufacturer, and I'll get a bit more into where I really want that to go. And then there are some other goals. One of my favorites, for instance, is around reducing entire classes of vulnerabilities, recognizing that software manufacturers have the ability to root out some of these pervasive vulnerability types. For all we talk about zero days and these

novel attack vectors and all the things that I'm sure will be circulating the vendor floor at RSA, ultimately we know that even when we have, say, a zero-day vulnerability, it really isn't anything new. It's going to be a command injection, a SQL injection, a memory safety vulnerability, a known vulnerability class; we just didn't happen to know it was present in a particular product in a particular location. But again, we know how to prevent these vulnerability classes at scale. Just to dive into this: here we have an analysis MITRE put out of CISA's Known Exploited Vulnerabilities catalog (shout out to Todd), where they looked at the root

causes of the exploited vulnerabilities. For the top 10 most common vulnerability types in the KEV, it really shouldn't be anything surprising: memory safety vulnerabilities dominate the top exploited vulnerabilities, and we've got command injections, some server-side request forgery, and path traversals. I'm pretty sure if you went and showed this list to someone, I don't know, 20 years ago, they'd say, "Oh yeah, that looks about right for 20 years ago." And yet here we are in 2025, and the same vulnerability types are still dominating headlines. And if we go into specific classes of vulnerabilities, let's say memory safety for example, we know that these

vulnerabilities are preventable through the use of memory safe programming languages. For instance, this is a chart from Google on the Android product, where they found that as they started to introduce memory safe programming languages into the Android operating system, the number of memory safety vulnerabilities steadily decreased, and that was even as they still had memory unsafe languages in the code. They found that what attackers went after most was new code. So if you start writing new code in a memory safe language, you see vulnerabilities going down and down and down. So we know that this works. We know that these

languages are available. Now, of course, it's not free to switch to a memory safe language, and it's not necessarily easy. But languages like Rust are becoming more accessible and more interoperable with existing languages. So to me the case is strongly there: if companies are building a new product today, they absolutely should not do it in a memory unsafe language, and for existing product lines that are in memory unsafe languages, they should begin to think about what the path to a memory safe language looks like. Likewise for vulnerabilities like SQL injection: if you all remember the MOVEit compromise, which I believe was in 2023, that was a SQL injection

vulnerability, and you might be thinking, oh, I thought we solved those 20 years ago, and yet they're still dominating headlines. Fun fact: MySQL introduced parameterized queries in 2004. So for 21 years we've had the capability to really root SQL injection vulnerabilities out of products, and yet 21 years later SQL injections are still dominating the headlines. So again, next time you see the new novel zero-day vulnerability come out, think about whether it's actually anything novel, or whether the manufacturer could have prevented it at scale across their products with some basic knowledge. Okay, let's dive in now to what is actually happening as a result of the pledge. And the good news

is that we have seen some positive progress reports from companies. Again, the commitment in the pledge was both to take action in line with those seven goals and to report publicly on the measurable progress made within a year of taking the pledge. Here I have a sampling of the reports that have come out in the past year: Google, Microsoft, Fortinet, Okta, and a number of other companies have put out progress reports. And if we dive into those, I think you'll find some pretty interesting stuff. For instance, all of the major cloud platforms, so Microsoft, Google, and Amazon, have

all now made multifactor authentication mandatory across their cloud platforms. If you think about it, that alone is a pretty big move: anyone using AWS, Azure, or GCP is now going to be required to use multifactor authentication. And again, we know that at least for credential theft attacks, MFA is a great way to prevent them; even better, phishing-resistant MFA makes phishing attacks virtually impossible. So I'd say we've seen some positive moves. There are other aspects too: a number of companies have talked about reducing entire classes of vulnerabilities in their products. Google put out a really

great white paper on how they've managed to not have vulnerabilities like SQL injection or cross-site scripting, because they've made it hard for their developers to do the wrong thing. They introduced this idea of really focusing on the developer experience and instituting guardrails to ensure that developers can't make mistakes. In Google's case, there are specific types you have to use when making database queries, and those types just don't accept user input, so as a result you can't introduce a SQL injection vulnerability. They take a similar approach to cross-site scripting, and have found that in many of their product lines they just haven't had to worry about

these types of vulnerabilities. And they're not the only one; other companies too have managed to eliminate some of these rather basic and preventable classes of vulnerabilities that many companies still struggle with. There have also been some interesting reports, and in particular some good transparency. For instance, Fortinet put out a blog about their approach to requiring automatic updates for their product lines. What they saw, essentially, was a large uptick, which you can see here, in patch adoption once they turned on automatic updates. They did see some interesting statistics, though: even after they did that, they saw the number

of users on the latest versions declining a bit in some cases, and they've been very transparent about this. They found that some users were actually going and turning off those automatic updates, because for whatever reason they wanted to be on an older version of the software. So it's the sort of thing that shows that in the real world things can be messy, but I think this level of transparency is exactly what's needed. That's one of the principles we put out at CISA in our secure by design white paper, radical transparency: even when things aren't going as well as you want them to,

in Fortinet's case, it would have been great if they could say, wow, everyone is suddenly on the latest version and no one is on older versions of our software, but again, the real world is messy and people aren't going to do that. So I think this level of transparency is what's needed if we are actually going to get to a better state. So there has been good progress made. But I do also want to think about what we aren't seeing. There were 300 companies who have taken the pledge. 68 of those were part of that initial batch at RSA last year on May 8th, who took the pledge and committed to, within

a year of taking the pledge, demonstrating their progress. A year from then is May 8th, which is in about two weeks. We've seen maybe 30 progress reports to date in total. So that leaves a bit to be desired, and I'm hoping that every one of those companies will, within two weeks, put out their blog or press release on the progress they've made. But you also have to wonder whether that will be the case. Again, I think part of the value in having a pledge is that it allows the people who are doing the good work, who are making progress around secure by design, to

really distinguish themselves and show the good work they're doing, and it's an opportunity for those who maybe have a bit of catching up to do to talk about the progress they're making. Again, the pledge was never stated as something to be 100% done within a year; we know that's not possible. What I care more about, at least, is the derivative, right? How quickly is a company changing its practices? If they haven't been great on security in the past, are they at least owning up to that? Are they showing transparency in the actions they're taking? And are they being forthcoming about how they are

meeting the pledge goals, whether around reducing classes of vulnerabilities or other actions? So I hope that within two weeks we'll see all these progress reports from the 68 companies in the initial batch. I also encourage those companies who maybe haven't thought about what their progress report looks like to think a bit more about that, and maybe you all can help encourage companies to meet their commitments. Moving on a bit to what's next: I think there's a lot of potential for positive reform, and one of the things we did with the pledge was start to set expectations for what responsible software manufacturers

should be doing when it comes to security. For instance, one of the goals I mentioned was around having a vulnerability disclosure policy. I'm sure many people here have been in situations like mine where you've had to report a vulnerability to a manufacturer. How that goes is contingent on how the manufacturer responds in terms of their own security, but as many people here know, it can also open up risks to the security researchers themselves. So I think it's very important to protect, and really empower, security researchers to play a crucial role here in driving transparency and driving more secure by design software. So on Friday I published a piece with Jen Easterly

advocating for reform in how we approach vulnerability disclosure in the United States, recognizing that cybersecurity threats are only getting more drastic, that we need to counteract that with more secure by design software, and that security research in particular can play a key role in driving that. In the piece we called for two things. One is changes to anti-hacking laws in the United States to exempt good faith security research. That means the DMCA, the Digital Millennium Copyright Act, as well as the CFAA, the Computer Fraud and Abuse Act, both of which can bring legal consequences for even good faith security research. We don't think that should be the case. We think

that the United States should follow the example that other countries have set and exempt good faith security research. We also called for the FTC to establish some minimum requirements and expectations for software manufacturers above a certain size: in particular, to operate a vulnerability disclosure policy and to file CVEs for vulnerabilities in their products. That's another area that was in the pledge, and as I'm sure you've all seen, there's been some recent uncertainty and discussion around the CVE program. We think it's an essential program, and not just to allow companies to know when there are vulnerabilities in the products they're using; there's immense value too in the CVE database as

a source of truth, a record of the vulnerabilities we've seen before. Much like in other industries: in aviation we have the NTSB, with whole databases that log incidents and their root causes; in automotive we have the National Highway Traffic Safety Administration, which maintains a similar database for car crashes. We need to have that for software. And while vulnerabilities aren't the same as incidents, this is still a valuable record. We think companies should be taking responsibility for filing CVEs for vulnerabilities in their products, in particular including fields like the Common Weakness Enumeration (CWE), which captures information on the root cause of

that vulnerability. I'll open up for questions in a bit, so start thinking about those. Two more things to call out. One: one of the last things I worked on at CISA was a document called Product Security Bad Practices. It lays out what some of the baseline expectations for software manufacturers should be, going as far as to say, for instance, that software companies building new products in 2025 should not use memory unsafe programming languages, and should not pass user input directly into a SQL query or an operating system command. Again, things that seem pretty trivial, and yet many companies are still

making these mistakes. So I encourage you to check that out and really think about how we all can help advocate for companies to do the basics. I don't think any of this is particularly groundbreaking; this isn't going to cover the Spectres or Meltdowns of the world, but the reality is that 99.9% of attacks out there are about the basics. So if we can help raise the national baseline, I think there's a strong case that we can help protect against ransomware attacks, raise costs on adversaries, and strengthen the security of our infrastructure. The last thing I'll mention is that

there are changes happening, in particular to how companies are writing code with the AI assistants out there, Cursor, Copilot, or others. A few changes are happening. One, the amount of code being written is going to drastically increase. And two, the types of people who can write code are changing quickly. Nowadays anyone can go and code a basic website or app, and sure, while it might have some bugs, they can get it out there and start sharing their ideas, which is really great, and I think there's a lot of potential in that. But we also have to think about the security implications: as

more people are writing more code with, let's be honest, fewer eyes on it, what does that mean for security? I don't think we're at the point where these AI assistants are going to be outputting perfectly secure code. I think we're far from that, and we know they've been trained on, guess what, datasets of the insecure code that people have been writing for decades. So I think this is really only going to make the problem worse, and that, for all the talk about the risks of AI, it's not going to be introducing risks that, at least for the most part, are

unknown. It's going to be introducing the same types of vulnerabilities that we've known about for decades and that, again, have been preventable for decades. So how can we make sure that as AI code is being written, it's not introducing memory safety vulnerabilities or command injections or SQL injections? I think this is a solvable problem. I think we can get to a point where we don't have to worry about SQL injections, and I don't have to be here in a decade giving the same exact talk. I really hope so. Please help me. But again, we don't have to be in that future. So that's some of what I'm working on

too with my new company. But really, I would say that if we want to continue putting forward secure by design, if we want to continue to build on the progress that CISA, along with, I'll note, 12 other countries, have made, I think that really does take all of us. It takes all of us to push the industry to be better, to start to have some baseline expectations that in my view aren't unreasonable (though I very much welcome thoughts), and to get to a point where we can stop worrying about these basic vulnerabilities and start thinking about the more interesting stuff. So with that, I'm happy to open up for questions. All right. So, our first

question from Slido: when you had pushback, did you consider highlighting the benefits of the UK Cyber Essentials scheme, which has had a demonstrated impact on improving security since it was rolled out 10 years ago? So yeah, that's a good question, and maybe I'll talk a bit about some of the existing challenges, whether you look at the United States or the UK or elsewhere. When people talk about cybersecurity regulation, most of the time they're talking about regulating the end users, the customers of these software products. So they'll talk about regulating the financial sector, and there are some regulations

there, or the energy sector, but what that really means is telling the people who are using these software products to patch better, or to not configure the same password across products, or to enable multifactor authentication. That's all well and good, but going back to this idea of putting responsibility on those most capable of bearing it, I don't think that's really addressing the root cause of the problem. There have been positive developments, I'd say, both with secure by design and, I'd also point to, in the EU, the Cyber Resilience Act, which is enacting some baseline requirements for manufacturers of products to do better

around product security. So I think shifts are happening. These will take time, but from a policy perspective in the United States, I think we need to be much more focused on product security and less on regulating the end users. Next question: state anti-hacking laws are far more regressive than the federal DMCA and CFAA. Missouri, Ohio, and Texas are recent examples. What can people do to reform state laws? That's a great callout as well. As the question rightly points out, we do have, at both the state and federal levels, a need for reform of anti-hacking laws. And in

our piece we focused on the federal laws, but I do think there's a lot of opportunity to call for reform at the state level. So I'd very much encourage you all, for each state you're in, to look into what the anti-hacking laws are. I think what you'll find is that in every single state there's room for improvement. What we called for at the federal level was, one, for the DMCA to codify its existing exemption for security research. In the kind of weird way the DMCA was written, it turns out the US Copyright Office, which is housed in the Library of Congress of

all places, gets to decide, and they've been smart and renewed exemptions every three years for security research to ensure that good faith security research doesn't get penalized under the law. We called for that to be made a permanent part of the law. And then, for the Computer Fraud and Abuse Act, we called for following on some of the work the Department of Justice has done, where they updated their charging policy for these cases to say that good faith security research should not be prosecuted; we think that should be turned into an exemption in the law itself. So I think there are some good precedents, and there are other countries: Belgium, for instance, enacted changes to their anti-hacking

laws to exempt security research. So absolutely, I think change is needed at the state level, and I'd encourage people to advocate for that. Next question: do you think safe harbor laws will accelerate the adoption of secure by design practices? I think yes, insofar as the more we can empower security researchers to do good work, the more transparency that provides, and the more that allows us to understand: okay, are companies actually rooting out classes of vulnerabilities across their products, or are we seeing the same types of vulnerabilities again and again and again? I think that's the crucial role security research provides: a third-party look at the state of

product security at companies, because they can put out all these progress reports and say they're doing good work, but really the proof is in the pudding. The proof is in how they're working with security researchers. And again, the presence of vulnerabilities alone isn't necessarily a bad thing; it's actually good for companies to be transparent about the vulnerabilities they've had, to be filing CVEs, to be working with security researchers. I think the key is more to see, one, are they being transparent, and two, are they learning from their mistakes? Are they adapting their products going forward to be more resilient to the sorts of vulnerabilities they've seen

before. And our last question: if SaaS providers fix their own vulnerabilities and there's no action for customers to take, it seems like CVEs end up causing confusion or unnecessary concern. What are your thoughts on how SaaS companies should use the CVE process? So yeah, I think part of this is evolving our mental model of what a CVE means. Traditionally, and I know many companies use CVEs this way, a CVE is a source of truth for what products they should patch. And that's all right, but I think this points to a bigger need to use the CVE database not just as a record of when to patch software, but more of a

database over time that reflects the state of product security and the vulnerabilities that are out there. There's been a push, for instance as part of the pledge, to make sure companies are publishing CVEs for vulnerabilities that don't actually require action by end users. Think about software-as-a-service companies: sure, their customers don't need to take any action when a vulnerability comes out, but I would argue they still ought to know about it, and they ought to know how the company is responding. So companies like Microsoft, for instance, have started to move towards publishing CVEs for vulnerabilities in their SaaS products. And I think that's a positive

development, and I'd encourage other companies to follow suit. Thanks so much, Jack. Appreciate it. That wraps up our talk. Just letting everyone know lunch is happening now. Thank you, everyone.
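The parameterized-query fix the talk keeps returning to, available since MySQL added it in 2004, can be shown in a few lines. The sketch below is an illustration, not from the talk: it uses Python's built-in sqlite3 module and a made-up users table, but the same placeholder mechanism exists in every mainstream database driver.

```python
import sqlite3

# In-memory database with a small illustrative users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Attacker-controlled input: a classic injection payload.
user_input = "' OR '1'='1"

# Vulnerable pattern: concatenation puts user input inside the SQL text,
# so the payload rewrites the WHERE clause and every row comes back.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe pattern: a parameterized query. The '?' placeholder sends the input
# as data, never as SQL, so the payload is just a harmless unmatched name.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # both rows leak: [('alice',), ('bob',)]
print(safe)        # no rows match: []
```

The guardrail approach Google describes in the talk goes one step further: rather than trusting developers to remember the placeholder form, the query-building API only accepts a trusted query type that user input can't flow into, so the concatenated variant above can't even be expressed.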