
Let's have our folks understand what we're dealing with here. Is this your first BSides? Show of hands in the back there. Couple here. Would you say 50%, maybe? Yeah, roughly. Well, thank you for being here. Welcome to BSides. All right. With that, we've got Mike Shema: Secure Designs, UX Dragons, Vulnerability Dungeons, from the Application Security Weekly podcast. Mike, thank you for being here, and please take it away with the team. Thank you. Hello protocols, packets, programs, and BSides. This theater has several exits. In the event of an emergency, just keep shifting left until you find one. At the end of the session, please be considerate of the next one and pick up
any CVEs that may have dropped. Garbage bins are provided in the hallway. Which means today we explore some secure designs, avoid some UX dragons, and throw some vulns into dungeons. Grab some dice and stay tuned for Application Security Weekly, because this is episode 328, recorded live here at BSides SF on April 26, 2025. Let's have a round of applause. Yes, I'm your host and dungeon master, Mike Shema. I'm here with Kalyani Pawar. Kalyani is a seasoned appsec expert with a deep passion for the startup ecosystem. She has extensive experience designing security programs for startups from the ground up and advises early-stage startups on their products. Hello, Kalyani, thank you for joining us. Hello, everyone.
And we have Jack Cable. Jack is a hacker who works at the intersection of cybersecurity and public policy. He's currently the CEO and co-founder of Corridor, and before that served as a senior technical adviser at CISA, where he helped lead their initiative on, quite helpfully for us, secure by design. Hello, Jack. Thanks for joining us. Hello. Thanks for having me. Now, I do want to take a moment to set the premise for this adventure. If you ever want to see a group of players demonstrate the nuanced thinking of threat modeling, have them encounter a door in a dungeon. Give them one minor detail, like a little bit of rust visible on the lock or
there are three small scratches on the floor. Then watch as analysis paralysis sets in and they deliberate on how dangerous the door might be. Sadly, that also typically grinds the game to a halt. In the world of appsec, we don't want to be stuck in eternal threat modeling either. We have to have some sense of when a secure design is secure enough, and we want those designs to have a UX that doesn't bite us and vulns we can throw into dungeons. It's like condensing four-question threat modeling into: are you really sure? So, let's open that door. We're going to need a map as we explore, or at least some blank paper to draw that map on. So, let's grab some A4-size paper to
go along with our OWASP A04 entry on Insecure Design. Coincidentally, the A paper sizes follow this cool rule where each number's longer dimension is the shorter dimension of its predecessor. So if the Top 10 list were to follow that rule, being fourth would make Insecure Design one-eighth the size of Broken Access Control, which is kind of sad because it's first in my heart. I also mention that because appsec still struggles with metrics and quantification, like: what does more secure even mean? That OWASP entry, for example, gives us a weird amount of decimal points for CVSS averages. It's a bit of precision theater. I don't think we care about the tenths of a CVSS score, let alone hundredths of one. But
we do care, or at least I care a lot, about dragons, or at least UX dragons. One of my favorite formulations of this problem comes from the 1999 review of PGP's usability in the famous paper, Why Johnny Can't Encrypt. Now, to put 1999 in context, that was the year that The Blair Witch Project and The Matrix came out in a theater much like this one. By the way, both movies were about a small group of people who find themselves in a horrible situation due to two very different problems related to batteries. But back to the paper: it highlighted the importance of helping people avoid dangerous situations, which holds true for horror movies, D&D, and of course,
appsec. But if we look at this through the lens of secure design, part of the problem becomes: how did this design fail the user? And that's on top of figuring out an end state, like what is secure enough, or what does more secure even mean. So obviously we want secure designs, but I wanted to talk explicitly about security that protects users, and to help figure out when we might reach one of those end states. That four-question threat model already touches on what can go wrong, so it's not like this is a huge difference, but my framing is: how do we prevent dangerous errors? So let's talk about those errors, who makes them, why they
happen, and why blaming users is a very lazy appsec mistake, because dragons, dungeons, and errors are all dangerous. So Jack, like any horror movie or adventure, we should start off on a happy note. What's an example of a secure design principle that's a favorite of yours? So, a couple thoughts here, and first of all, thanks so much for having us here. This is one of my favorite topics to discuss. I got added to this panel a couple days ago and was more than happy to join on. For me, a lot of this comes back to the work we were laying out at CISA, where we
published in 2023, so just about two years ago now, our initial Secure by Design white paper, and did this with a coalition of 13 other countries' cybersecurity authorities. In that, we really laid out three secure-by-design principles. I'll draw on one in particular, which is leading with radical transparency and accountability. What we recognized was that at the end of the day, while these aspects of secure by design are grounded in technical elements, a lot of the challenges aren't technical. They're organizational. The reason that, say, Johnny can't encrypt, that we end up with these bad UXs, is often just that this isn't a
consideration that goes into building a product. It's not that we don't know how to build a relatively secure product. It's just that organizations to date don't prioritize it. So, how do we get there? Well, that starts with having a really solid understanding of what does and doesn't work when it comes to building secure products. And that's why, when I was at CISA, we were so forthcoming in calling on tech companies to be transparent about their security. So last year we launched the Secure by Design pledge, coming up on just about a year now, which 300 companies have committed to: not just taking actions to improve their product security, but demonstrating that progress. And as we
come up on the anniversary, we see that a number of the companies who committed to doing this, not all, but a number, have documented the approaches they've taken, and I'm glad to dive into some of those examples. But just to really set the stage, I'd say that there is a need for companies to be more forthcoming with what they're doing here, both positive and negative success stories, because at the end of the day, that's how we get better. Yeah, and I think the theme there too is, you know, I led with let's not blame the user. It's also let's not necessarily blame the company either, if they're making that good-faith attempt on
accountability, transparency. Exactly. Kalyani, give us a favorite principle or example of yours. I'm going to go with two. So, being the startup representative on this panel, I want to say: at startups, we don't have time to build everything securely all the time, right? For example, if we're going to have cloud infrastructure, are we going to go build our own cloud? No, right? We just go to AWS or Azure and then build off of it. Why not do the same for security? Obviously you want to do the same thing for security. So don't build your own cryptographic algorithm; use secure frameworks from the get-go. So yeah, use secure defaults. That's my always, forever favorite. And
the second one, most applicable to the ecosystem, sounds basic, but it's so applicable: the principle of least privilege. Having been at startups, I've seen everyone have admin access, especially in the early days, in the seed days, and everybody gets upset when you take that away from them. So yeah, those are my two favorites. Yeah, nobody's happy when you take something away, which is one of those challenges of secure defaults. Maybe let's talk about that a little bit too, because secure defaults are also one of the most consequential things you can do as an org to protect your users. I'll go back to two examples, maybe Log4j
and XML. With Log4j, everybody was scrambling to patch a bug in a feature that nobody really needed or was even using. So why was it even there by default? That's one of those shake-your-fist-at-the-sky moments. And even with XML, if you look at XXEs, it's kind of a surprise that my app is going to read from the file system, even though all I needed was a couple of... okay, I'm not a fan of XML, but the format is at least, you know, a little bit human-readable. You just need that file; it doesn't need to call anything anywhere else. And I don't know, Jack, when you
look at designs, or just these secure defaults, how do you balance that discussion of, well, let's turn this on or turn this off, and figure out who we're making happy, who we're making sad? So I'd say, first of all, to some of the discussion here: it's much easier to build a product that is secure by default from the start rather than shipping an insecure feature and then over time realizing, hey, we've got to turn this off, and then suddenly you have a whole bunch of customers who will be really disappointed because that's ingrained in their workflows. And there's just lots of kind of
technical work to then overcome and make something into a default. Just to give one example: I got into security through bug bounty programs, did a bunch of those, and one example that came up again and again is that in AWS, when you configure your EC2 server, for instance, there's by default, or at least there was, this metadata server with this wonderful property that if you visited it from an EC2 instance, it would return things like the access credentials for that instance. That sounds okay in theory, because sometimes you'll need to access those, but then it takes a vulnerability like server-side request forgery, where by default, you know, you can access some stuff, but
turns that into essentially full control over an AWS environment. So that's the sort of feature where, when they were deploying it, I'm sure they didn't think, oh well, this is going to allow everyone to basically get their AWS accounts fully hacked. And it's taken AWS the better part of, I don't know, five-plus years to move toward a more secure default. And you still have all these instances relying on that old metadata server in order to function properly, so they can't just break that. They have to slowly improve, when new servers are created, for instance. So again, it's much, much easier, and if you're out there, say, building a new product or a
new feature today, think about both the security implications that an insecure default would have, and also, when the time does come to turn that into a more secure default, you'll really be wishing that you had done it the secure way from the start. Another time you would wish that is if you were losing out on sales big time. I have seen certain companies lose out on sales just because they could not pass vendor security reviews. So, I mean, instead of last-minute scrambling to build secure infrastructure, why don't we bake this in in the first place? And part of that too, and maybe I'll go to you, Kalyani, for
this as well, is that, you know, Jack made that point: IMDSv2 came out, it's secure, hey, we don't even have to worry about SSRF anymore. But Amazon can't just roll all their customers onto v2; that forced migration would break tons of things and a lot of people would be unhappy. So, if we talk about the burden on the team, especially a small startup team, how do you figure out when to say, this is the time we're going to spend dealing with someone else's insecure default, and we want to make our own defaults better? I think it's a debate, right? It's a patch-versus-rebuild
debate, if I can put it that way. If it's small but high-impact, if it's maybe a couple PRs and then it's fixable, let's do it, let's patch it right away. But if it is a deep workflow, dealing with authenticating your payment structure or something like that, I'd rather take the rebuild approach. It's going to be deep, but I think it's going to reap the benefits long-term. There's also another angle to this. We often talk about buy versus build, but at startups, you don't have the liberty to go buy at whim. So, yeah, rebuild. Yeah, and I'm going to riff on that, because we
could also say, well, what if we just rewrite everything in Rust from the beginning? Maybe that's, oh, my engineering hat is on, that fixed everything. I don't think that's the case, because, Jack, one of the things you alluded to when you were using that AWS example is that it took several years for IMDSv2 to come out and be available. It sounds really easy to say: just flip it so SSRF isn't there by default. So, speaking of org accountability and radical transparency, what are the other types of org frictions that make secure design hard? Yeah, so a couple thoughts here. I'd say that one,
really getting back to the CISA principles, where this is something that needs to be ingrained in how an organization operates, not just at the technical level, not just with, say, the CISO of an organization, but really coming from the chief executive officer all the way down, to prioritize security. Because as we know, it's a lot easier to build a product that is relatively secure from the start than to go try to retrofit a product to be more secure. I left CISA a couple months ago. I've been working on a startup where we're working with companies who have lots of legacy technical debt, and we're working to help them mature their
application security capabilities in order to root out classes of vulnerabilities. And really, to a point that was made earlier, we are in a fortunate position where, while we're not perfect now, there are well-proven examples of secure defaults and secure behavior. A lot of this gets into, for instance, the various frameworks that are out there, where cross-site scripting used to be something you would really have to think about and figure out: okay, how am I going to make my application resilient to these vulnerabilities? Now you can use React, which developers prefer anyway because it makes it easier to build out a product, and unless you do something dumb, you're going to be secure;
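(As an editorial aside, here is a minimal Python sketch of the two guardrails this exchange keeps returning to: escape-by-default output encoding and parameterized database queries. Python's `html.escape` only stands in for React's JSX escaping, and the `render_comment`/`find_user` functions and `users` table are invented for illustration.)

```python
import html
import sqlite3

# Escape-by-default output encoding: untrusted input is rendered inert,
# the same idea behind React escaping interpolated JSX values unless you
# explicitly opt out via dangerouslySetInnerHTML.
def render_comment(comment: str) -> str:
    return "<p>" + html.escape(comment) + "</p>"

# Parameterized query: the driver keeps data out of the SQL grammar, so a
# hostile string stays a plain string instead of becoming query syntax.
def find_user(conn: sqlite3.Connection, name: str) -> list:
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

print(render_comment("<script>alert(1)</script>"))
# → <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
print(find_user(conn, "' OR '1'='1"))  # the classic injection matches nothing
# → []
```

The secure default is the same in both cases: untrusted input stays data unless a developer goes out of their way to make it code.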
or likewise, for database queries: unless you want to be handwriting your own SQL queries, there are libraries out there that will make developers' lives easier and also have the side effect of improving security. So I'd say, for companies out there, it's figuring out: okay, what are the guardrails for each class of vulnerability that we'll take, where possible, using these existing frameworks? And really, it then does become easier, for new products at least, to build those out with secure defaults. Yeah, React is one of my favorite go-to examples. I definitely want to give a big plus-one for that, because it also came out of
an engineering effort. This was a team that wanted to more easily create front-end UIs, and, oh by the way, we're going to make dangerouslySetInnerHTML the only way to introduce, you know, XSS in there. But I'll say the counterpoint to the statement about SQL is that, I think as far back as 2004 or 2005, MySQL had prepared statements available, and it had that available in PHP, I think it was PHP 4 even that introduced it. And today we could look at this year and count how many, I'm going to guess at least 200, CVEs related to WordPress PHP SQL errors. So there's got to be some failure along the
way there. I'm going to posit that maybe it's the API. Let's go back to that usability for developers as the users of these. How does that influence whether or not, you know, our React example is probably good, but with prepared statements it feels like we're still failing there? Yeah, I mean, I'd say that the tools are there to build more secure applications, but again, at the end of the day, are companies going to make use of those tools? Are they going to prioritize security from the start, or are they going to do what many companies do and rely on the fact that, as of today, for the vast majority of products, they outsource the risk
for their weaknesses. So when a product gets breached, it's their users, it's their customers, that really have to pay the cost. And as a result, we see companies continuing to underprioritize this. So I think we can do more to both make it so that the default route when you're building an application is the secure one, but also we should be thinking about how we shift some of those incentives so that companies are more, especially financially, motivated to prioritize security from the start. Yeah. Kalyani, where do you see that incentive coming from? Because on the one hand, the CISO could just
say it doesn't take too long to write that blog post that starts "we take your security seriously." It takes a little bit more budget to invest in security from the start. How do you make that convincing? I think, well, again, it depends on what the product is, right? If it is something that is directly interfacing with a consumer, an everyday consumer, and there's anything that's jeopardizing their privacy, their PII, obviously it's easy to make a case for something like that. But it very much boils down to: what is at stake here? What is the price that we're going to pay in terms of, say, PR, business, finance,
what is the price we're going to pay? You know, I think once you can analyze that impact better... again, it's a very subjective answer with respect to what your product is. Yeah. And I think it's interesting how you framed it that way, because my wish is: how do we prevent those dangerous errors that users make? Even going back to Why Johnny Can't Encrypt, how can we have a better UI so users are led away from mistakes, or their attention is called to what those mistakes are, as opposed to protecting the company? Now, I don't think you're necessarily wrong in saying that. That's a very, it
might be cynical, but I think it's a realistic type of approach. Yeah. But maybe the question for both of you, and I'll start with Jack, is: how have you seen UX-type questions used as part of threat modeling, like, how is the user going to be hurt by this? Yeah. So I think that's an aspect where, again, a core part of building secure-by-design applications isn't just making sure that in isolation, in kind of the ideal state, the application can be secured, but making sure that the default experience, when users get the product, is out-of-the-box secure. So
you can have the most secure product ever that's super resilient, but if you ship it with a default password and put in the manual, hey, you really ought to change this password, we know that the vast majority of users just aren't going to follow that. It seems like a silly example, and yet we are still having critical CVEs that are being used by state-sponsored actors exploiting vulnerabilities like default passwords. So it can be these really simple examples, where again, it's not that we can't deliver a product without default passwords, it's not that we can't deliver a product that has certain ports disabled by default; it's just that it's become the
status quo, where again, companies will often prioritize, say, backwards compatibility rather than some of these forward-looking security implications. That's an aspect we addressed in the white paper at CISA as well. So really, what this comes down to is doing these kinds of evaluations in the wild. We were talking about this before the show: there's a really good example from Thinkst Canary, where their documentation essentially said, configure a DNS record with, like, yourcompanyname.com, and they realized that about 40% of users literally copied that and created a DNS record for yourcompanyname.com, which in turn defeated some of the security properties. So, it's things like that
that you can only really see by applying some of the same UX principles that are used to make usable products. They also need to be applied for security. I'm going to go back in history a little bit here and give the Citibank example, right? The Citibank UX error which enabled someone to wire $500 million. I mean, coming back to, what is the risk here? Isn't it a huge risk? Had this gone through a whole threat modeling phase, we wouldn't have been here. Yeah. And that was just from a very bad UI that wasn't even part of a,
like, a dedicated security tool, for example. It's just a UI mistake. And so I'm curious then, because part of Jack's emphasis is that it actually can be easier to start secure by default, and Kalyani, in the startup world you often have a greenfield. So one of my questions there is: I love the design principles behind something like WebAssembly. Wasm has all this wonderful isolation. You can't get to the file system by default. You can't even get to the network by default, which seems good. We could say there's a corollary there with Kubernetes or with the cloud. How much of these are
just, you have a secure default environment you're building your app in, versus, you need to be focusing on what the problems of your app are that you're building? Do those distinctions come up? I think something that we're trying to touch on in both of these cases is: is this app something that only experts can use? You know what I mean? If only an expert can enable these things, then the app is not useful for anything. And if we're implementing security in such a way that only experts can use it, you're not making it easier for anyone to use. I think that's a debate. Well, I think in startups, it's easier to get
that answered, because you have very clear-cut priorities. That's just what startup culture is. You know how fast-paced it is. You know exactly what is on your roadmap. And I think it's easier to get to those answers, rather than asking, how many more obstacles can I place before implementing this certain feature in an application? Yeah. So maybe let's turn to that idea of: when is the design secure? What's secure enough? And, you know, I like the paper from CISA. That's one of the reasons we have you here. They even made that call-out to considering the user experience, do the field testing,
but there's nothing around quantifying when you are secure. When is there a number? Why isn't there any quantification, Jack? So, I'd say part of this is, and I fully agree that we do need to better quantify here, that we need better ways of measuring, at the end of the day, security outcomes and not just effort. Because you can put all the effort you want into securing your product, you can go through SOC 2 and all these compliance regimes, but at the end of the day, they often say very little about the actual product security steps that were taken. So I think the current state is: there are a lot of companies out
there who think they're doing a really good job of securing their product because they follow these checklists. They are putting in a lot of effort, but that doesn't always correlate to actual results. So I agree that there needs to be a more outcome-based approach here. There also need to be more ways of measuring a product's security. One of the last things I worked on at CISA was a document called Product Security Bad Practices, essentially a list of 13 items to say, hey, you really shouldn't be doing these things in 2025 when building software products. And we got a little spicy. We called out things like using
memory-unsafe programming languages as a bad practice, because the reality is that if you're building a product in 2025, there are languages like Rust that are as performant and interoperable, that can be used, and C and C++ are just inherently insecure. You can't build a secure product, I believe, if you write in C and C++, because even the best developers can make mistakes, and there aren't guardrails around that. So I think this really calls for companies to begin to actually measure the security outcomes of their products and not just the effort put in to make the product secure. And I think that ties into the transparency you started with, because if the
companies aren't being transparent about both, here is a security problem, and here is what we did about it, here is the postmortem, then we're not getting that information about how to learn from those bad practices, right? Yeah, exactly. Kalyani, there's an aspect here of, you know, I jokingly mentioned rewriting everything in Rust. You can't quite do that. I'll be snarky and say, well, we could rewrite everything in PHP so it's memory-safe, but possibly we still have some problems there. Is programming language the decision point for startups? What are they thinking of when they're like, we need a product, we need it
to be secure? What are some of the principles they should be avoiding, or, we'll come back to Jack for some more examples, a principle that they should continue to embrace? I think when we're talking about programming languages in terms of startups, it's always: what will get us faster to that point? It's more a conversation where it's like, okay, if Python is going to help us ship this thing faster, and all of us are comfortable coding in Python and making the application in Python, then it's easier for the security team to build frameworks, or, you know, build guardrails in place, once we know what our baseline cookie
dough looks like. But that's how I would like to look at it. I would like to think about this from more of a finance perspective. What are we trying to get here? Because we're strapped for cash. We need to prove ourselves out there. We're on a deadline. If this answer is going to help us ship fast and meet all our goals, then we will think about security around that. Well, and Rust, to be fair, is going to be performant, it's going to be stable, it's going to have code quality, and security is part of quality. So that probably is a pretty good practice: don't use a memory-unsafe
language. What else showed up on that bad-practices list that you'd like to highlight? Yeah, so a number of other areas, some of which we've touched on today. Things like SQL injection: there's no reason you should be handwriting your SQL queries in this day and age. Similarly for vulnerabilities like command injection, where again, raw user input should not be supplied to an operating system command, and yet we see vulnerability after vulnerability. Really, those three, SQL injection, command injection, and memory safety vulnerabilities, dominate the list of, say, CISA's Known Exploited Vulnerabilities. So, I would say that where this is going, and where
we see some of the leading companies working, is blending a lot of these ideas we've talked about and thinking about developer experience, where it's not just what the experience is for your end users, but how developers are either encouraged or not encouraged to follow secure defaults. Companies like Google, for instance, have written extensively on how they've made the secure development route the default route for their developers, where they just can't introduce user input into database queries, because they have specific types that don't allow user input to be supplied, and so on for cross-site scripting and other examples. So that, I'd say, is
an area where the security community needs to be thinking about not just how we can make something that in isolation can be secure, but how it can be adoptable and used. React, for instance, I think is a very positive case study there. There are others. Let's Encrypt, for instance, where we now almost take for granted the fact that the vast majority of internet traffic is secure; a decade ago that certainly wasn't the case. So we have many positive examples here, and the more we as a security community can advocate for the default route for software development to be a secure one, the better. And again, security doesn't
have to be the primary motivation. The primary motivation might be quality, might be speed, but if security can be a side effect, then we all win. Yeah, I love the Let's Encrypt example, because for me, there was the ancient age of: what happens if your certificate is compromised or leaked, especially when you have two-year or even five-year certificate lifetimes? So we had CRLs, then we had OCSP, which was trying to fix some of the design problems of CRLs, but OCSP introduced some additional privacy concerns. It fixed a bad design of CRLs, but it didn't fix the bad design of what
happens if you have an attacker in the middle and intermediation. Whereas with Let's Encrypt now, we almost don't need those examples, because we can get down to six-day certificate lifetimes. That's great progress. You also mentioned Google as one example, among others, of having humans write code and avoid a lot of problems. Let's make this a little more complex and say humans aren't the only ones writing code anymore. We have, you know, MCP, agents, gen AI. What happens when LLMs are starting to write code? Are they introducing new problems, different problems? How does this become the secure design challenge? So, a couple trends, I'd say. As
I'm sure people are closely following, we're quickly moving toward a future where there are a few trends. One, there's going to be a significantly higher amount of code being written. We're seeing this already as people use Cursor or some of these other AI assistants to write their code; they're able to deliver products much quicker. And not only that, but it's really expanding the remit of who can write and publish code, where it's not just software developers, but hobbyists or product managers or designers. Really, anyone who wants to turn something into an application. Again, it might not be a great or fully functional or fully secure
application, but they certainly can do a lot more than they were able to do before. And then the third trend is that with all this additional code, with more people writing it, there are going to be fewer and fewer eyes on code as it's written. So really, some of what I've been focusing on now with Corridor is this: as more code is being written, and as there are fewer eyes on it, the security tools of today just won't be able to keep up. So we're thinking about, for instance, the UX of security tools, where I'd argue there are a lot of crappy security products out there today
and if you have ten times more code, what does that mean for the developer who has to go through their alerts one by one and figure each out? A lot more of their time will be eaten up by these noisy alerts. So we're figuring out how we can make a more seamless system where the secure route, as code is being developed, is the default one. That's really where we think we have to move: towards this model where, again, it's not something that you have to be thinking about necessarily. Not every developer is going to know even what security properties their
application has to have, but we're moving towards a future where, as these AI assistants are helping write code, they can output code that's more secure by design. I definitely want to add one more thing to that, though. As we're moving so fast towards all these new tools, and we hear all these new terms every day, I feel like the core principles of security still remain the same no matter how many iterations of tooling we go through, right? If you do follow your secure design principles, your secure design practices, your basic input validation and threat modeling and things like that, the possibility of having really insecure design or insecure code, I
think that lowers. It brings us back to the very principle of: are we baking in security from day one? Yeah. And I mean, MCP is self-described as the USB-C of the gen-AI world, but since I've been making all the '90s and early-2000s references, it's also the SOAP and the WSDL of the gen-AI world. We've been there, done that. And the reason I bring that up is, to Kyani's point, why do we already have all these MCP servers where our calculator app can read from the file system? I thought we saw this and learned from Android and iOS building their
walled gardens. Admittedly, there are trade-offs there, but they constrain what these apps can do. And Kyani just pointed out that suddenly all the gen-AI LLMs are right back to insecure designs that we ostensibly should have learned better from. So that's more of a comment than a question; I'm going to have to hand this off to you, Jack. Yeah. I mean, I would say that's exactly right, too. Both the upside and the downside is that AI isn't fundamentally changing this: we do know how to build relatively secure applications, and we know how to root out common classes of vulnerabilities. But the reality is that unless a company is
taking an active approach, the default is that AI-generated code is going to introduce more of the same old vulnerabilities that we've known about for decades and have also known how to prevent at scale for decades. So it's an area where companies need to be actively thinking about how they are both applying these secure guardrails and doing so in a way that will scale to however the code is being written. And there's a trap here, though. For example, with MCPs, I like the idea of adopting OAuth: they're executing on behalf of the user, with my credentials, so that I'm not able to get
into things I shouldn't: I can do whatever MCP actions I want as Mike rather than as Kyani. I can't get into Kyani's email or something like that. But if we go back to some of the CISA principles: just this year, OAuth released their best practices guide, which was several pages long. And best practices is kind of synonymous with hardening. So, are we getting any better, Jack? I know CISA has some opinions on hardening guides. Yeah, that's another area we addressed in the white paper, essentially this idea that we should flip hardening guides on their head, that they should instead be loosening guides: that the default
configuration of a product should be a secure one. If users want to weaken that and make the conscious decision to use a less secure product, then they can go do that, but the default should be secure. So I think that, of course, progress takes time, and we need to be careful, especially thinking again from this developer experience standpoint. Are we just giving more hardening guides, in whatever incarnation that takes, to developers, again putting the burden on them to write secure code? Or are we moving to a model where, like React, like Let's Encrypt, the easy route is one that is secure? So I think that's
something, particularly as companies are building their MCP servers or whatever this next generation of tooling looks like: think about some of the lessons from decades of writing insecure software and figure out how not to repeat the same mistakes. So, I can't resist: MCP was also the Master Control Program, which was the bad guy in Tron. And as part of Tron, the reason I bring that up, one of the themes was "fight for the user." So I think our theme is that we're fighting for the user, which is what both Kyani and Jack are saying. Now, both of you were pointing out LLMs are there.
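Jack's "loosening guide" idea can be sketched concretely. The following Python is purely illustrative (the class, field names, and override mechanism are invented for this example, not taken from any real product or the CISA paper): configuration ships with secure defaults, and weakening any of them is an explicit, logged, documented decision rather than the starting point.

```python
from dataclasses import dataclass, field
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("config")

@dataclass
class ServerConfig:
    # Secure defaults: the "hardening guide" is already applied out of the box.
    require_tls: bool = True
    require_mfa: bool = True
    allow_anonymous_read: bool = False
    overrides: list = field(default_factory=list)

    def loosen(self, setting: str, value, reason: str) -> None:
        """Weakening a secure default is an explicit, audited decision."""
        if not reason:
            raise ValueError("loosening a secure default requires a documented reason")
        log.warning("secure default %r loosened to %r: %s", setting, value, reason)
        setattr(self, setting, value)
        self.overrides.append((setting, value, reason))

cfg = ServerConfig()  # secure out of the box; no checklist for the user to work through
cfg.loosen("require_mfa", False, "legacy CLI clients without MFA support")
print(cfg.require_mfa)  # False, but the decision is recorded and logged
```

The point is the inversion: the conscious effort goes into making the product less secure, not more.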
They're assistants for coding. Have you seen them? Maybe Kyani, you could start us off with this: have they helped you with threat modeling, let alone any secure design assistance that LLMs are going to give us? I haven't had great success with it, unfortunately. I don't think LLMs have been a great threat modeling assistant for me. I have tried all kinds of prompts, like "you are a master threat modeling expert," but it did not give me the kind of results I wanted, and unfortunately there are so many business logic flaws that it will miss. And then, well, don't even get me started on hallucinations. The
number of times there has been this weird sense of typosquatting, where these packages don't even exist. You're hallucinating big time and just generating names at this point. So I mean, it's still nascent. I'm very hopeful, though. Okay. What about you, Jack? You were saying you've seen tons of code. How do you follow this? Yeah. So I'd say that I'm also optimistic, and this is in part what we're trying to do: leverage some of the upside and capabilities of LLMs, combined with the deterministic methods we've used for securing products historically, to really be able to, in a much more tailored way,
assess the security of an application. So I agree that this is going to take time. It's not like you can just say, "Hey, ChatGPT, secure my app," and it'll magically output a more secure version. But I do think we are able to use them to find vulnerabilities like, say, authorization flaws, where before it was really hard to say, okay, here's the consistent way by which I will make sure I don't introduce an insecure direct object reference or some other vulnerability. LLMs can help with some aspects of discovery because they're able to parse the unstructured data, documentation, or other aspects of how the application operates and really digest
that and help understand where you might be taking missteps. So it's an area where we think there's a lot of potential: not to solve application security, but at least to help companies who traditionally have been under-resourced, not able to take advantage of the state of the art, do a better job. Yeah. So here's what I hear, and I'm going to be the pessimistic one on the panel for this, but it sounds like LLMs can maybe help us ask those questions and perhaps answer what could go wrong: here's what could go wrong with authentication, or what could go wrong with me using this endpoint. Is
there a command injection? Those seem targeted and maybe specific, but maybe that's just off of a checklist. That's good, but the LLM isn't giving us insight into whether this was a good C program. And here I'll give an example: I would point to curl as a project that actually demonstrates good design through its practices, even though it's in C. But it also carries a lot of burden because it is in C. The opposite would be something like OpenSSL, which is pretty messy, and I don't know that an LLM could help us analyze why, if you all even agree, why is curl a
good design and why is OpenSSL a bad design? I don't know, does that resonate in any way? And maybe in that case, even if you are able to take something like curl and reasonably secure it, maybe you should have just started in a language like Rust in the first place and not had to put in all this work. I think, though, there is still the roadblock of context, right? Sure, it can give you a very generic answer, maybe, about how one framework might be better than another, but it lacks context about what your application is or what your security culture is. A truly tailor-made answer is something
only you know, since you have hands-on experience with the tool. So yeah, I think context is a good one. We're starting to have to wind down, and that's already a good takeaway message. Jack, what else would you give us as a takeaway on this topic of secure design and UX dragons? Yeah. So I really would go back to some of the secure by design principles we had at CISA: that companies really need to be leading from the top, making sure this is a priority for executive leadership; that they're leading with radical transparency and accountability; and lastly, that they're really prioritizing customer
security outcomes. And at the end of the day, I do think the best way to get there is to combine some of these ideas. Think about the developer experience. Think about the fact that, as much as I've advocated for change here, the vast majority of developers still don't learn much about security. So they're not going to be security experts. And particularly as AI is changing the definition of what it means to be a developer, making it so more and more people are able to write code, we can't rely on the people writing code to know how to secure it. So what are the secure guardrails we can put in place as
companies? How can they make sure their developers are writing code that conforms to their security best practices? Make sure they're using these secure patterns out of the gate. So really, that's what I think application security teams need to be thinking about. I think that's a great message to end on. Thank you. Let's give Kyani and Jack a round of applause. Also want to give a shout-out to our crew behind the scenes: Tyson and Sam are here today, and shout out to Tom, Renee, and John back in our virtual studio. And thank you once again to BSides for bringing us all together.
Please subscribe, check out the show notes, share us on the socials, and, speaking of things that are dangerous, check out Deathraer by Carpenter Brut. We'll see you next time on Application Security Weekly. All right, well done. Awesome. Let's give them a round, folks. Mike, Jack, Kyani, thank you very much.
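A postscript on the insecure direct object reference Jack mentioned during the discussion: the classic flaw is an endpoint that hands back whatever object ID the client supplies. A minimal, hypothetical Python sketch (the document store and all names are invented for illustration) contrasts the vulnerable pattern with the secure-by-design one, where the ownership check is enforced at the point of access:

```python
# Hypothetical in-memory document store; all names are illustrative.
DOCUMENTS = {
    1: {"owner": "mike", "body": "episode notes"},
    2: {"owner": "kyani", "body": "threat model draft"},
}

def get_document_insecure(doc_id: int) -> dict:
    # IDOR: whoever supplies an ID gets the document; no ownership check.
    return DOCUMENTS[doc_id]

def get_document(doc_id: int, requesting_user: str) -> dict:
    # Secure pattern: authorization is checked at the point of access, and
    # "not found" and "not yours" are indistinguishable to the caller.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != requesting_user:
        raise PermissionError("document not found")
    return doc

print(get_document(1, "mike")["body"])  # episode notes
```

Making the checked accessor the only way to reach the store is exactly the "secure route as the default one" idea from the episode.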