
Thanks, everyone. Excited to chat a bit about some of the work I did at CISA and some of what I'm up to now with Corridor. Looks like the conference might be starting to fan out a bit, so feel free to come up towards the front if you want to. So let's dive into it. I'm Jack. I'm currently the CEO and co-founder of Corridor, and I come from a background in security research and computer science. I got into security through bug bounty programs, starting when I was in high school. I stumbled across a vulnerability in a cryptocurrency website where I noticed I could send a negative amount of money to other people on the site, and through the course of that was able to steal money from people's accounts. Fortunately, they had a bug bounty program, so I was able to report it to them and got paid, and then started teaching myself more about security. I eventually got into the top 100 ranked bug bounty hunters of all time on HackerOne. I studied computer science at Stanford, and most recently was working in government at CISA, the Cybersecurity and Infrastructure Security Agency, where I was really leading the Secure by Design initiative, doing a bunch of work to encourage tech companies to prioritize their product security.
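Before moving on: the core of that negative-amount bug was a transfer endpoint that trusted a client-supplied amount without checking its sign. A minimal sketch of the pattern (all names and logic here are my own illustration, not the actual site's code):

```python
# Hypothetical reconstruction of the flaw -- not the real site's code.

def transfer_vulnerable(balances: dict, sender: str, recipient: str, amount: int) -> None:
    """Buggy: trusts the client-supplied amount, so a negative amount
    debits the *recipient* and credits the sender."""
    if balances[sender] < amount:  # any balance passes when amount is negative
        raise ValueError("insufficient funds")
    balances[sender] -= amount
    balances[recipient] += amount

def transfer_fixed(balances: dict, sender: str, recipient: str, amount: int) -> None:
    """Server-side validation: reject non-positive amounts before anything else."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if balances[sender] < amount:
        raise ValueError("insufficient funds")
    balances[sender] -= amount
    balances[recipient] += amount
```

Sending `amount=-100` sails through the naive balance check (any balance exceeds a negative number) and moves money in reverse; the fix is a one-line positivity check enforced on the server, never the client.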
I'll be focusing both on that work at CISA and on how I've been applying secure by design through my new company, Corridor, which I started back in January when I left CISA and where we're focused on securing AI coding. I really want to walk through this journey of working in government to advance the state of software security, and now approaching the same problem from the other side.

To start, let's motivate some of the thinking behind secure by design. Where we are today, the same basic, preventable security weaknesses are causing the same vulnerabilities again and again. For instance, here we have the OWASP Top 10 in 2017 compared to 2021, and not much has changed. MITRE published a paper on "unforgivable vulnerabilities" in 2007, and spoiler alert: it looks basically the same as this. So we're going on several decades of the most prevalent types of vulnerabilities being pretty much exactly the same.

I'll dive into a few case studies. Memory safety issues were first documented in the '90s. We're going on three decades of buffer overflows, and they're still one of the most commonly exploited vulnerabilities out there. Likewise, SQL injection. Fun fact: MySQL introduced parameterized queries in 2004, which can prevent SQL injections at scale. And yet here we are, decades later, still dealing with vulnerabilities like this. The MOVEit hack, which compromised a file-sharing product and led to hundreds of individual breaches across universities and other organizations, was a SQL injection, which we've known how to prevent for over 20 years. And it's still making headlines. So really, we're in this pattern where basic vulnerabilities are being exploited to cause significant harm.

If you look at the exploitation of network edge devices, it's the same types of vulnerabilities in a very focused set of products. Corporate VPNs and similar devices are appealing targets to attackers because they sit right at the boundary between the internet and everything sensitive a company has. So naturally, we've been seeing state-sponsored cyber attackers going after them; this is part of the Volt Typhoon and Salt Typhoon campaigns you might have heard about. And if you trace these incidents back to their origin, most often it's either a preventable software vulnerability or an insecure default configuration, something the vendor could have done more to prevent in the first place.

Okay, so we have this problem, and it seems like some basic practices at the vendor level could make it better. So how do we actually get those adopted in practice? I'll talk about the work I was doing at CISA through the Secure by Design initiative, and then get into what I'm doing now at Corridor. For some background: we launched this initiative at CISA in 2023. The focus, in line with the White House's national cybersecurity strategy at the time, was to shift the burden of responsibility for cybersecurity away from those least capable
end users, and onto those most able to address it, namely the software vendors. We published a bunch of guidance with, I believe, 13 other countries, which you can see here, across three continents, and got significant adoption both from other cybersecurity authorities and from companies. There are over 300 signers of the Secure by Design pledge. This is something I put together to get companies to commit to basic product security improvements. There are seven items associated with the pledge, things like increasing the use of multi-factor authentication across their products, reducing the rate of default passwords, and reducing entire classes of vulnerabilities. These all seem relatively basic, but you might be surprised by the amount of work it took to get commitments from companies on even these items.

But the good news, I would say, is that real progress is being made. There was a lot of work around consensus building; it's not easy to get one company, let alone 300, to agree to do something the government says. So we spent a lot of time figuring out what would be sufficiently actionable and achievable for them while also raising the bar, and continuing to raise it. The goal of the pledge was for companies, within one year of signing, to document the progress they'd made. The pledge came out in May of 2024, so it's been over a year, and some progress has come out. But it's not without its critics. I pulled a few fun tweets about it: someone said the pledge itself is "substantially meaningless," and another called it "utter nonsense. Jack Cable has to know better." Maybe I do. But I really want to dive into what actually happened as a result of this. How effective are some of the levers we have in government, and how does that compare to what you can do in the private sector?

I want to dig a bit more into what I mentioned earlier: at the end of the day, for all the talk about zero-days and very sophisticated attackers carrying out attacks nothing could possibly have prevented, here we have MITRE's top root causes among known exploited vulnerabilities, essentially the classes of vulnerability that led to the most exploitation in the wild. You'll see the top three are memory safety errors: use-after-free, buffer overflow, out-of-bounds. These are all things we've known about for decades, and we now have many great memory-safe programming languages that prevent them outright. Yet they're still the most commonly exploited vulnerabilities.

So let's dive into two of these, starting with memory safety. Here I have an interesting graph from Google looking at the prevalence of vulnerabilities in the Android operating system. What they noticed is that as they started lowering the amount of new code written in memory-unsafe languages — I believe they switched to Kotlin for writing parts of the OS — the number of memory safety vulnerabilities dropped quite significantly. What's interesting is that the left side shows the rate of new memory-unsafe code, while the right shows the rate of memory safety vulnerabilities as a whole. Since the vulnerabilities being discovered and exploited were most often in new code, even with all their legacy code still in place, changing their new development practices reduced the rate of memory safety vulnerabilities by about 50%. That's pretty impressive. And it's not just Google: Microsoft is rewriting parts of the Windows kernel in Rust, the same thing is happening with the Linux kernel, and Amazon has done a bunch here. So there's a lot of good progress being made, but obviously still a long way to go, especially when we get into the legacy problem, where companies can have millions or billions of lines of code written in memory-unsafe languages. That will take time to address. But certainly, if you're a company building a new product today, I think the argument is very clear: you should not use a memory-unsafe language. Don't use C or C++. If there's one thing you take away from this, it's that. Likewise,
I mentioned SQL injection vulnerabilities. These are again a type of vulnerability we've not just known about for decades, but known how to prevent at scale for decades. And yet software companies are still introducing the same type of bug over and over again. That's not to say it's impossible to prevent. Many companies — Google, for instance — have talked about how they've done this at the type level, making it impossible for developers to introduce a SQL injection vulnerability: if you want to construct a database query, the system literally won't let you include user input as part of the query string itself. So it's not that this is some deep technical problem where we have no idea how to prevent it. There are technical challenges, but the bigger aspect is business incentives: the majority of businesses out there today lack the incentives to produce more secure code. Perfectly secure code is something we can't achieve, but code can certainly be much more secure than it is today.

Okay, so getting into some of the action that resulted from the pledge. We saw some good things. For instance, each of the major cloud providers — Google, Amazon, Microsoft — moved towards requiring MFA for their users; we've all gotten some emails about that. We saw action from a number of other companies as well. Some have published good statistics on things like patch uptake numbers and the rates of various vulnerabilities in their code. But I think it's also important to look at what we're not seeing, and now that I'm outside of government, I can speak with a little more flexibility. Over 300 companies have taken the pledge. As of today, if you go on CISA's website, there's a list of companies that have put out progress reports; I counted a few weeks ago, and 40 or so have done so. So there are around 250 companies that haven't yet said anything, despite taking the pledge and committing to put out progress reports. In some ways, I don't know if this is a bad thing, because it allows the companies that are actually doing a good job to distinguish themselves, to say: we actually followed through on our commitment. But I do think it's a signal that voluntary action from companies alone isn't enough, and it gets into something deeper with the market forces, where all too often today companies just aren't incentivized to produce more secure software. So those are the takeaways from the government side,
right? And I can dive more into other elements around liability and where I really think this needs to go. But I want to step now into my current perspective, and in particular point to one of the biggest changes, something that was only starting to bubble up when I was at CISA but is now pervasive: AI coding. If there's one use case of AI that has actually materialized, it's software development. It's hard to deny that an engineer with AI coding tools can now do so much more than ever before. There are some statistics out there: from Stack Overflow, 84% of developers are now using AI coding tools; and from GitHub, about three out of four companies encourage the use of AI coding assistants among their employees. That number seems pretty low to me, and I expect there's a bit of a lag between where companies are and where this is headed. But if you've spent any amount of time using these tools, it's very clear that we're not going back when it comes to software development. AI enables us to move so much faster than ever before.

That's really exciting for so many reasons; it's going to enable so much more innovation. We're at the entrepreneur track, and I'm sure it's going to be a lot easier to build a new product than it has been before. But you also have to think about this from a security perspective. We know that AI is introducing the same types of bugs that human developers have been introducing for decades. There's no shortage of examples of a coding agent that introduces a SQL injection, or hardcodes your credentials, or deletes your production database — unintended side effects of all the value we're getting out of these tools. And they're not just one-off examples. There are benchmarks out there; BaxBench is one, looking at how well LLMs do at generating secure code. The answer is that even the best models — GPT-5 is on top there — introduce vulnerabilities anywhere from roughly 20 to 40% of the time. There's a lot of work to be done in building better benchmarks here, and work for the model providers, OpenAI and Anthropic, to actually assess their models against these benchmarks, see how well they're doing, and see how they can improve. But that's the reality right now.
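To make the most common "basic flaw" concrete, here's the classic SQL injection pattern next to the parameterized fix I keep referring to, sketched with Python's built-in sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

# Toy in-memory database, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "alice-secret"), ("bob", "bob-secret")])

def lookup_vulnerable(name):
    # String-built query: attacker-controlled input becomes SQL syntax.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# A classic payload turns the WHERE clause into a tautology.
payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks every row in the table
print(lookup_safe(payload))        # matches nothing
```

The two versions differ by only a few characters, which is part of why this bug keeps reappearing, whether the author is a human or a coding agent; parameterized queries are the 2004-era fix mentioned earlier.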
And I'd say it really breaks down into two main types of vulnerabilities. There are the more basic flaws — SQL injections, memory safety — things that I would hope, a year or two from now, the models are quite good at not introducing. But there are also the more contextual ones: authorization flaws and other sorts of business logic errors that, in my opinion, aren't going to be addressed at the model level. That's really where I think there's a lot of opportunity. If AI coding is the future, if it's the way every company is going to be building its products, then we need to make sure AI coding doesn't just help you build a quick demo but can actually produce enterprise-ready code, and a core part of that is security.

So that is what led us to start Corridor, with this premise of making sure AI is able to write software that is more secure by design. To me it's a really exciting opportunity. One of the things I've advocated for a while is for security to be introduced into the education of computer science students. Right now, none of the top 20 universities in computer science require any security course in order to graduate, and there's a lot that needs to be done to make sure developers know more about security. But I think there's both promise and peril with AI. On one hand, as we've seen, it's introducing a lot of vulnerabilities; on the other hand, for the first time, the developer doesn't have to be the final layer ensuring the code is free of certain types of vulnerabilities. If the models and the coding tools can get better, we have a better chance of preventing some of these vulnerabilities at the source. Maybe I don't have to be up here in ten years talking about how we've been dealing with SQL injection for four decades, because hopefully we'll have made that a thing of the past.

So that's what led my co-founder and me to start Corridor. We started back in January, when I left government, with this focus on securing AI coding, and it's been a great journey since then. We went from working out of our living room to eight employees, and we're doing a public launch of the product next Thursday, so I'd definitely welcome any product feedback. But I'd say, in doing this, as you
can imagine, it's quite different from working in government, and yet in some ways I think there are similarities. Actually, my co-founder and I had worked in government together before: in 2020 we were at CISA building out tooling to help election officials understand where their vulnerabilities were ahead of the 2020 election. And I'd say the biggest quality that helped me excel, both in government and now in the private sector, is persistence. You're going to get all sorts of things thrown at you, and you're going to have to move extremely quickly and adapt.

So, thinking about takeaways and lessons learned: one is just speed of execution. Especially now with AI coding, the amount of technical effort needed to build a product is significantly lower, which means it's really up to you to get out there, get in front of customers, build something that provides value, and likewise adjust and learn quickly. The other thing is that there's been no better time than now to start a company. VCs are throwing all sorts of money at this, but you also don't need a ton of money to build a functional product now; you can take some of these AI coding tools and do a pretty good job of building even just a prototype, or turning one into a more functional app. I'm happy to talk more after the talk as well, but there are so many great opportunities, especially in the security space, where we are in no danger of this industry going away. I think the problems are only going to be magnified, and for all the talk of AI putting security professionals out of work, I think we're a long way off from truly getting to the root cause of a lot of these issues. While that's maybe not great for society — there are a lot of vulnerabilities that might get introduced — it does mean there's never been a better time to get into security, in my opinion. I know we have just about five minutes left, which I want to leave for questions. So with that, thank you everyone, and happy to take your questions. Yeah.
So I think that's certainly a possibility. The hope would be that we can make sure AI is writing code that's not just secure by design but high quality as well. I think the state of AI coding tools today is that they do introduce bloat and technical debt: if you spend time with some of these coding agents, you ask one to fix something, it tries something that doesn't work, tries again, and the code just starts to pile up. So I do think that's a real possibility. And in some ways we're at a crossroads now, where it's a lot easier to write secure code from the start than it is to take a legacy, existing piece of software and retrofit security onto it; in some ways, you're better off rewriting things. So from that perspective, yes, the time to start is now, and that's really where I hope we're going.
>> Just to follow up on that, have you seen
Yeah. So they are, and there's actually a DARPA initiative called TRACTOR — great acronym, TRanslating All C TO Rust — that's focused on this and is funding research that will put out open-source tools. So I think there's a ton of promise there. A lot of what we're focusing on with Corridor is new code going forward, but there are tons of opportunities here, because if we want to get to a better place, we have to look both at new code and at improving things when it comes to legacy code. And it turns out that if you combine AI with some more deterministic methods, you can get pretty good results there.
>> Yeah.
>> You mentioned that New York.
Yeah. So, I'd say there are a couple of levels here. There's the individual developer level, where certainly I think we need to do a lot to make sure developers actually know a thing or two about security. And while improvements at the AI level can help, ultimately my mental model of a lot of these AI tools is that they're best used in the hands of someone who knows what they're doing, both from a general software perspective and a security one. So that's certainly one element: yes, we need to make sure developers are more knowledgeable, and I think every school should be making security a fundamental part of how they teach future software engineers. But then there are also the corporate incentives, and both are necessary. If you have a company that's solely prioritizing speed to market over any semblance of security, that's not going to produce positive results. There's been discussion around whether we should have a form of software liability, where software companies can actually be held legally liable for security vulnerabilities in their code that lead to harm. I think the answer is yes, and there's probably a long way to get there, but it's not unprecedented: every other industry has a form of liability. If you buy a toaster and it explodes, that's on the vendor, and you can sue them. But if a software product has a vulnerability that enables a cyber attack, the warranty disclaimers mean there's pretty much nothing you can do. So I think that's the other direction: getting more serious about that. Cool, time for maybe one more question. Anyone?
>> Yeah.
>> So yeah, I think there's an element there. One of the other reasons we have a market failure in many ways around security is the lack of information, when you're buying a product, about the security measures in place. Companies can say they have military-grade encryption and all these sorts of things, but in practice, when it comes to questions like whether they're preventing entire classes of vulnerabilities, that's not something you can readily see. You can look at proxies — how many CVEs they have, things like that — which is also challenging, since we want to encourage the publishing of CVEs. But I'd say ultimately, yes, more transparency here will help quite a bit. There's a good letter, if you haven't seen it, from Pat Opet at JPMorgan Chase that goes into what they want from their software vendors. So I think the more that companies buying software can articulate these demands, the better. And with that, I believe we're out of time. So, thank you everyone.