
I am thrilled to be here. Last night I was talking to people from San Francisco, from Portland; we've got some folks down from Vancouver. I'm really excited to be here, and I also want to thank our sponsors, who help make all of this possible. So, I'm Adam Shostack. You may know me from threat modeling work. You may know me from the CVE. You may know me because I'm on the review board at Black Hat. I'm also an affiliate professor at the University of Washington, and I run a consultancy where I help people threat model. And today I would like to share with you some thinking that I've been working through lately about the idea of
risk, because we often treat risk as if it's an axiom, a thing taken to be true without question. And I think that hurts us. I want to start out with a couple of questions. Who here has been told to do risk management and then seen executives ignore the results? You're laughing. Why are you laughing? Maybe half of you. Who was told to do risk management but couldn't get key data that you needed to make the decisions you were asked to make? Many of you. Who has done risk management, had a strong belief about the results, and then changed your mind when new data came in? Ooh, many of you.
Impressive. Nicely done. That was supposed to be a trick question. The question's ambiguous? Okay, fair enough, but we're going to leave that be for now. Who's changed their mind about risk and then changed the risk-management tool so that it gives you better results? A much smaller number of you. Thank you to those of you who have done that work. So what I want to talk about today is how we build secure systems. I'll give you some context for where this work comes from, and talk a little bit about threat modeling tools, because it sets the stage for how we think about risk, how we use it as a tool, how we
use it as a frame, how we measure it, and then ask where that leads us. And by the way, there are lots of different ways to think about risk. I'm talking about risk as we use it in cybersecurity, not physical risk or stock-market risk or things like that. "Axiomatic" is an unusual word. It means self-evident or unquestionable: parallel lines don't converge. And in cybersecurity we use risk, we use risk measurement, we use risk management in ways that imply they are axioms. I'm going to ask you to hold questions until the end, since we've got a large group here. Thank you. So, this comes in when we're building secure systems. And when I say secure
systems, "secure" isn't quite the same as saying a thousand bits per second, or lift a thousand kilograms to low Earth orbit. It sort of means stuff doesn't get hacked. And when stuff doesn't get hacked, is that a result of preparation by defenders, or is it luck? There are some frames, some definitions, and there's one here from NIST 800-53: security results from the establishment and maintenance of protective measures that enable an organization to perform its mission or critical functions despite risks posed by threats to its use of its systems. And there are others, like "a resilient system doesn't surprise its creator." But when we look at that NIST 800-53 definition, I've struggled with it. Maybe you've struggled with it. What do
I need to do to actually manage that? What are the risks that I am supposed to care about? Which of these threats matter? And they're using "threats" in the sense of threat actors. And if I'm trying to be complete, if I'm trying to show good diligence in having done this work, where is the thing I can check off to make sure I have done at least the expected work? There's also this interesting use of the word "mission," which comes from the fact that NIST writes standards for the government. I don't think businesses have missions. I do work with government agencies, and they have missions in a very different sense than
the businesses which I serve. And we can think about cascade issues: incident response doesn't work because Slack is down because a botnet crashed their DNS provider, a couple of years ago, right? Is that a risk that I'm supposed to care about? Before you answer intuitively, go back to this definition and ask: where does my example hit that definition? And I think this is important when we're trying to do compliance sorts of work. So I don't want to use these definitions as an excuse. We know this in security, especially here at BSides, where we're enthusiastic about security. The first time I gave this talk I was privileged to go to
NASA's Jet Propulsion Laboratory and give this talk, and I was talking to some people who mentioned Akin's Laws of Spacecraft Design, and they apply surprisingly well to security. If you haven't looked at these, they're worth looking at. One of the laws is that design is based on requirements: there's no justification for designing something one bit better than the requirements dictate. And if we apply that to security, our security requirements aren't crisp. How do I get one bit better than my security requirements? And that leads to this challenge that we all face, which is: what's enough, when we're trying to build a secure system and we're trying to do it at the speed of today's business, with the
pressures to get things out into the market? What is enough security? And why is this all hard? It's hard because we lack a crisp definition of secure. We have a lot of standards and rules that we're asked to comply with, and all of this comes at a cost that can be at odds with the security work we want to do. And what's more, we don't have a germ theory. We don't have a way of saying what causes most security issues. Are most breaches caused by phishing, or a lack of patching? We have a shrug. I appreciate the shrug, and I appreciate the honesty in the shrug, but we might have different
opinions in the room and no good way to resolve this question. And I think that's a problem. I'm doing a lot of work around that, but it's not the subject of today's talk. So what do I do? I've been working in application security for over 25 years. These days I'm a consultant and a trainer. I go out, I talk to people, and they say nothing is good enough. What they mean is nothing is perfectly sufficient for what they're dealing with. And not doing any security work is often fine according to the business leaders. That leads me toward very lightweight methods. I was talking to some folks last night at
the speaker reception about the threat modeling processes they're using in their organizations and how heavyweight they can get. These days I really focus on how lightweight we can make it, because the more lightweight we can make it, the more likely it is that people will actually pick it up, do it, and get something useful out of it. And that leads me to the threat modeling techniques that I focus on. So, starting out: what is threat modeling? Threat modeling is using models to help us think about security. And here "threats" means possible future problems. Unusually for Seattle, it is not
threatening to rain. We can use these techniques for systems we build, for systems we are going to deploy, and for systems that we have deployed. And I like to quote Frank Lloyd Wright these days: you can fix it on the drawing board with an eraser, or on the job site with a sledgehammer. Threat modeling is the set of techniques that lets you fix things on the drawing board, before you've written code, before you've actually pushed things to production. You can find and fix problems. And that's so powerful, because the earlier you fix problems, the less expensive it is, the easier it
is, the fewer conflicts we have with the folks around us who are being told ship, ship, ship. And so techniques, whatever they are, that help us move earlier in the cycle, you know, that shift-left jargon, are so valuable, so important. So, how do we do it? I like to use a four-question framework. These are really simple questions: What are we working on? What can go wrong? What are we going to do about it? And did we do a good job? The reason I like to start with "what are we working on" is that I have a very strong belief that if I walk into a room of engineers and
ask what we are working on and I don't get a clear answer, I am unable to help. And this is, by the way, in stark contrast to some other ways of starting threat modeling. For example, there are people who say to start by making a list of all of your assumptions. Back in 2006, I joined Microsoft and was working over in Building 27, whatever direction that is. And the official process in the Security Development Lifecycle started with: make a list of all of your assumptions. How do I do that? What's a good enough list? What do I do with that list? Spoiler alert: you didn't do a darned thing with that list. It was a sort of
bad way to start a process. But "what are we working on" is a thing that people can know and answer, and that's why I like to start there. These questions are covered, by the way, not only in my books but also in the Threat Modeling Manifesto, a thing that about 20 of us came together and wrote to help people threat model. What are the values? What are the patterns and anti-patterns? So if you haven't seen it, check it out. Which reminds me: the slides will be available. You are of course welcome to take pictures and notes, but all of the URLs in the slides you can
get to later. So when we threat model, we answer those questions, and we can do so in specific ways that help us answer them, and we can align these to the way we engineer. If you're engineering spacecraft, you are still doing a very waterfall delivery model, right? You're going to build the spacecraft, you're going to launch it, and no one will ever touch it again. If you're building a mobile app, you have a very agile process. And if we align threat modeling to those engineering processes, we get better results than if we try to align those processes to some idealized approach to threat modeling. The other thing I want to mention is
that threat modeling is an opportunity to both identify and improve your boundaries. Boundaries give us isolation. For example, firewalls isolate network segments; Unix user IDs are isolated from one another by the kernel. It's a form of protection, and as we threat model we can find opportunities to think about where we need better enforcement, where we can actually make our boundary code more resilient. The four questions sometimes take a little attention away from the value of boundaries, so I just wanted to mention them here. So as we answer each question, we use tools like whiteboard diagrams or data flow diagrams to help with "what are we working on," and then we answer what can go wrong.
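To make that concrete, here is a minimal sketch of the idea in Python. All the names (elements, trust zones, flows) are hypothetical illustrations, not a prescribed tool: we model the system as elements in trust zones with data flows between them, and flag the flows that cross a trust boundary, since those are where enforcement matters most.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    trust_zone: str  # e.g. "internet", "dmz", "internal"

@dataclass(frozen=True)
class Flow:
    source: Element
    dest: Element
    data: str

def crossings(flows):
    """Flows that cross a trust boundary deserve the most scrutiny."""
    return [f for f in flows if f.source.trust_zone != f.dest.trust_zone]

# A hypothetical three-element system: browser -> API gateway -> database.
browser = Element("browser", "internet")
api = Element("api-gateway", "dmz")
db = Element("orders-db", "internal")

flows = [
    Flow(browser, api, "order request"),
    Flow(api, db, "SQL query"),
    Flow(db, api, "result set"),
]

for f in crossings(flows):
    print(f"{f.source.name} -> {f.dest.name}: crosses "
          f"{f.source.trust_zone}/{f.dest.trust_zone} boundary ({f.data})")
```

Even a toy model like this answers "what are we working on" and points the "what can go wrong" discussion at the boundary crossings first.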
We might use STRIDE, which is spoofing, tampering, repudiation, information disclosure, denial of service, and expansion of authority. I say that a lot. We can use kill chains like MITRE's ATT&CK or the Lockheed kill chain to proactively think about what will go wrong with a system. And when we get to "what are we going to do about it," we can address problems in various ways. And when we don't have ways that really cleanly address the problems, either because we're making tradeoffs, usability tradeoffs (we might do this, but it would be hard) or compatibility (we would do this, but we would break things), we get into risk management strategies such as eliminate, accept, or transfer,
and often people at this stage, or even earlier, start using likelihood to help them prioritize. All of that starts sounding a lot like risk and risk management. So as I get into risk, I want to mention some work by the UK's Research Institute for Sociotechnical Cyber Security, where a team of people went out, reviewed things, and found 200 risk-management standards. And for clarity, I'm talking about all of them. I think part of the reason we have 200 standards, well, let's be kind here. Part of the reason we have 200 standards is to address different scenarios, different industries, different needs. Part of the reason we have 200 standards is because many of them don't
work very well. And I'm going to start with NASA. NASA has a super mature approach to risk, and they've got two main risk-management systems, for reasons I don't have time to get into. These are combined in a risk-management handbook, which is really nice. In the introduction it has this sentence, which I've broken up a little bit into phrases; actually, I'll just read it to you: "While there will probably always be vigorous debate over the details of what comprises the best approach to risk management, few will disagree that effective risk management is critical to program and project success and affordability." A lot of smart people worked
an awful long time on this. And before I say what I'm going to say next: I've written a couple of books, and I am sure there are people who can go into those books, pick a sentence, and nitpick it to death. That's not my goal here. My goal is not to nitpick; my goal is to respectfully look at this as the work of smart people who ended up having to say something like this, because I believe it is pointing toward an important problem. And those problems are: if we can't agree on how to do it, what are we agreeing to, and why? Is risk management just being treated like an axiom here? Are we really improving project or
program or mission success rates? Are we achieving that affordability? In a real sense, I am in that few. I'm going to disagree that risk management gets us to these things, and I'm going to talk about why. Before I do, I want to be clear: this is not just about NASA. This example is from an article in Communications of the ACM, and they made these statements. The key statement over here is that systems supporting operations can never be 100% secure. Okay, fine. And then: as a result, a risk management approach must be taken to balance the mission of the
organization. And then they go to compliance frameworks specifying an agreed-on set of activities and practices that manage risk to an acceptable level. Spoiler: I think they're jumping over some important things. I don't think the thing they say "as a result" is a result of the preceding statement. I don't think it's the only possible result. When they say "must," I don't think it's a must. Does it actually do the things they say it does? Again, I think it does not. And I'm not citing this particular article because it's unusual; it's commonplace. It was just a thing that jumped out at me, as they were making these statements in their introduction before they got to the meat
of what they had to say, on the assumption that no one, including the editor of a technical publication, was going to disagree with any of it. And I'm looking at this and saying, "Huh, is that really where we are?" By the way, on that "must": there's an article by Steve Lipner, whom those of you who work here might know as the person who created the Microsoft SDL, and Butler Lampson, a distinguished engineer here, called "Risk Management and the Cybersecurity of the U.S. Government," which is worth reading for an alternate perspective. The next person I want to mention is David Spiegelhalter.
David Spiegelhalter is the chair of the Winton Centre for Risk and Evidence Communication at Cambridge University. He's been thinking about risk, and how to communicate about risk, for his entire career. And he just wrote a fantastic book titled The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk, and Luck. In it, he says, "Risk can mean almost anything you want it to." He says that in everyday language it's often used to describe a threat, right? That broken paving stone is a definite risk, and the chance of an event. And I like this in part because he ties risk and threat together, but also because he knows way more about risk than nearly anyone else in the world. And when he's coming in
and saying risk can mean almost anything you want it to, I think that's a red flag. So, what is risk? For this section, by the way, I've been working with Shannon Lansancy; she's got a PhD in decision science, and so for a lot of what I'm going to say here, all of the correct parts are hers and all of the mistakes are mine. I just wanted to mention that before I get in. So, the first thing I want to say is that risk is not a physical property, right? There is no risk-o-meter in the sense of a thermometer. And it doesn't derive from physical units the way force equals mass times acceleration does, but it's a
concept that we use without that sort of physical basis. There are a lot of people who will say things like "you can't manage what you can't measure," and so we get to the question of whether managing risk depends on measuring it. Possibly. So I dug into some risk-measurement history. Peter Bernstein's Against the Gods is a great book, and he covers a lot of the history of risk. Insurance for merchant ships: merchant ships used to go out and come back a year later, or not come back. If you can figure out how likely it is your merchant ship won't come back, you can sell insurance for
cheaper than the next guy, and that's useful in making profits. There's a lot of risk measurement in gambling. The mathematicians Pascal and Fermat had an extended correspondence over a problem called the interrupted game: two people are gambling, the game gets interrupted, what's the fair way to divvy up the pot? And in gambling we can think about this, right? If I spin a roulette wheel a hundred times, what percent of the time does each result come up? They worked through that math, and a lot of the way we do probability comes from it. There are pension plans, there are stock portfolios, and, foreshadowing a little bit, all of these depend on
iteration. All of these depend on being able to run the game over and over again and break out which outcomes happen what percent of the time. Going a little more deeply into the history at NASA; actually, in the interest of time, I'm just going to mention Richard Feynman. After the Challenger accident, there was a presidential commission that looked into what happened, and Feynman added some personal observations. He starts them by saying the estimates range from roughly one in a hundred to one in a hundred thousand. Being a polite gentleman, he doesn't swear or grab anyone by the lapels and shake them. But he does point out that if your
estimates vary that widely, you might not be doing science in the way he thought about it. NASA has produced huge amounts of stuff. Failure modes analysis predates a lot of the threat modeling work that happened here at Microsoft. There are techniques like STAMP, by Nancy Leveson, that are great. And then there's tech readiness, and I'm going to come back to this: it's a way of thinking about how mature a technology is. We start out with "what if there were unicorns?" That's level one. Level two is "we've drawn a unicorn." Level three, and I love this, is "we've got unicorn_v8_final_final.cad." And this all terminates with sending a unicorn to space. And you
can determine how mature a technology is because they put it through its paces before they send it to space. We'll come back to that. So there are two parts to this. One, risk measurement is easier with iterations. Two, the tech readiness levels are empirical. And I think this leads us to the idea that we might start treating risk-management techniques as hypotheses: this risk-management technique will help us in this situation. We can test those ideas; we can subject them to analysis, because each of these risk-management techniques has measurable properties. Some of those properties are accuracy and precision. Accuracy means how close our measures are to the truth. And so up here we have accuracy.
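Before the slide walkthrough, here is how those two properties can be operationalized once you have repeated estimates of the same quantity. This is a minimal Python sketch with made-up numbers: accuracy as the distance of the mean estimate from the truth, precision as the spread of the estimates. Note that the spread is computable even when the true value is unknown.

```python
from statistics import mean, stdev

def accuracy_error(estimates, truth):
    """How far the average estimate sits from the true value (needs ground truth)."""
    return abs(mean(estimates) - truth)

def precision_spread(estimates):
    """How tightly the estimates cluster -- computable without knowing the truth."""
    return stdev(estimates)

# Hypothetical annual-loss estimates (in $k) from five analysts for one scenario.
analyst_estimates = [90, 110, 95, 105, 100]

print(precision_spread(analyst_estimates))     # tight cluster: precise
print(accuracy_error(analyst_estimates, 100))  # centered on this truth: accurate
print(accuracy_error(analyst_estimates, 500))  # same precision, wildly inaccurate
```

The same precise cluster can be accurate or inaccurate depending on where the truth sits, which is exactly the quadrant picture on the slide.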
Everything is clustered around the center, but it's not precise. We have precise and accurate. We have precise and not accurate, right? A very tight collection of estimates. And then we've got neither accurate nor precise. These are measurable things. We can test them in all sorts of ways, and the risk-management literature is full of tests showing that, even for things where these tests are relatively easy to run, this is much harder than you would think. Then there's uncertainty, and I think uncertainty is super important. There are two types of uncertainty. One is resolvable uncertainty: we could look it up and figure it out. The other is radical uncertainty. And this is a paper that
was in the Journal of Cybersecurity, from Oxford, last year. They talk about obscurity and ignorance and vagueness and ambiguity and ill-defined problems, all of these things that contribute to what they call radical uncertainty. And then there's a key sentence at the end: in some cases, but not all, we might hope to rectify this at a future date. When I look at attacker behavior, especially advanced persistent attackers, our lack of information is not something we're going to rectify, right? They're adapting, they're changing, they're thoughtful. So it seems a little different from ordinary uncertainty. There's another property, which is sensitivity. Sensitivity is how a change in your
input changes your results, and this helps us generally understand how reliable or robust our conclusions are. It also tells us that if the equations we are using, the methods we're using to estimate risk, are not sensitive to a certain input, why would we waste our time figuring out that input? There's intellectual curiosity, sure, but if our methods are not sensitive to an input, why gather it? This came out of the US government at Sandia, and their advice was to improve the input estimates that would lead to the most improvement in the estimate of the output. The really important point is that we want to focus on the things that
will help us most. There's another property, which is cost. How much work does it take to produce the measurement, and how much work does it take to record the results? This is something I am very aware of, because oftentimes when we threat model, the work to do the analysis can be a few minutes, and then the work to turn that into a nicely formatted written report that makes auditors happy can be 10 to 15 times longer. A lot of people think they need those beautiful reports, and some of them do, but not everyone does. And when we think about cost, understanding where our cost goes really helps us focus our
designs. There's a cost to vigorous debate, right? Think back to what NASA said: there will always be vigorous debate about this. A process that involves vigorous debate every time you run it might help your answers, but it might also just make for a really expensive process. This relates to return on investment. And one thing I do want to say: one of the failure modes I see in threat modeling is people trying to prioritize the issues they find with a risk calculation. And it's very frustrating, because you say this one's a critical risk, and engineering says I don't care. You've experienced this? Okay, good. I'm not making this up. It's not
just me. The issue is that the definition of critical risk didn't include the other important input to priority, which is engineering effort. If we leave out the investment part as we're thinking about return on investment, we end up with exactly the conflict we want to avoid. So, we could measure some of these, right? The way people do this is you have many people quantify some risk in an experiment, and you can measure their reliability. I can measure my reliability versus Wendy's, and we can see how often we get to the right
result. We can measure within a single person. They do this to doctors on a regular basis: they present the same case to doctors repeatedly in training and check whether the doctor gets the same result. A good doctor gets to the same result 70 to 80% of the time when presented with the same set of symptoms, for unusual diseases, not for super common ones. These differences might not be independent; they might not look like a normal distribution, a bell curve. But one of the things I think is crucial about measuring precision is that we can measure precision without knowing the correct answer. And given that knowing the
correct answer is hard, it's really useful to be able to measure something that doesn't depend on knowing it. We can also measure accuracy. I'm pretty confident this requires iterations, but I've got a question mark there, because maybe someone smarter than I am has a way to do it without iterations. Every failure gives you data, but failures may be really costly. In aviation, when there's a plane crash, the NTSB steps in and investigates, and they produce a long report. When there's a near miss in aviation, the pilot, the air traffic controller, and so on fill out a one-page form. They send it to NASA, which does a bunch of near-miss analysis. That's much
less expensive work. In thinking about how we might measure this, I think both the NTSB model and the Aviation Safety Reporting System model give us useful data and are worth thinking about. So, as we manage cost: if work is either expensive or unreliable, we should only do it once, right? One tool we are missing, and I'm going to use the example of phishing tests: what's an acceptable failure rate for a phishing test for your organization? As a consultant, I get that question all the time, and I don't know the answer any better the next time I hear it. So why don't we just have NIST write down an answer, 5%, 10%, and stop all of
that vigorous debate. Stop the worrying. If you're below 5%, you're doing okay, because NIST said so. Why are we laughing, right? The Food and Drug Administration has a table of acceptable rat parts in food. It's a very small number, but that's the accepted answer, because we don't want food companies making their own choices about it. It's expensive, and there's no way to get it right. So the FDA says this is the acceptable amount of contamination, for different types of contamination. And it's way lower for something like salmonella than for something that's just gross. If we go and just say here's an acceptable answer, we manage cost. None of you have to worry about
this again. Unless you somehow come to better data, right? Maybe you work for an insurance company and you say, "Wow, companies that are under 3% really get popped a lot less than companies that are under 10%." Then we might want to adjust the standard. Cool. But if you don't have better data, use those wrong-but-acceptable answers, because they save us a lot of time and money and effort and conflict. The other thing we can do is acknowledge that many of the security improvements we care about are actually not sensitive to the risk numbers we calculate. Right? The cost of the fix does not change because I call it a CVSS 7 or a CVSS 9. The cost of fixing it remains the
same. This is a place where we are not sensitive to the numbers. The impact on schedule does not change because we go from an EPSS 3 to an EPSS 7. In that case, why are we spending a huge amount of energy on those calculations? And there are other factors, right? If we say fix everything, there's going to be an impact on the rate of launches, the rate of experiments that we do, and that matters to the business. And if we as security people spend our energy focused on risk management, if we define our professional identity as risk managers, sorry, Jason, then we can get frustrated when people don't make good use of those answers. And so one approach that we
might take is to measure precision and cost. That seems sort of unsatisfying, but when I say it, I wonder: are we comparing real systems, or are we comparing real systems to ones that we hope exist, ones we hope have a certain set of properties that they might not really have? So, where does that lead us? I see we're doing well on time; we'll have lots of time for questions. Cool. The first place it leads us is: stop doing risk. You can't predict single instances. When has a heat map ever settled an argument? Yeah, give me the odds that Volt Typhoon is going to do X next. Anyway, this brings us to the
question, and this is an important question, of: if not risk, what? And I want to object a little before I answer, because this question treats risk as an axiom, right? "If not risk, what" assumes risk works, and assumes risk should be defended against replacement. And I do think bug bars are a very useful tool here. A bug bar says: if something is a remote anonymous privilege-elevation attack, it is a critical, and we fix it as a critical. It measures only the severity, not the likelihood. And then there's a policy that backs the bug bar. But there are
two risk-centric approaches. If you're a fan of risk and you want to keep doing risk, there are two approaches I think we can apply, and I'm going to talk about applying tech readiness and about treating risk as a hypothesis. So what if we take our risk-management approach as a technology and actually subject it to a tech-readiness-level analysis? We might say FAIR has a tech readiness of five, and some other risk-management mechanism has a readiness level of seven. We can also say that risk management as a whole might not be as tested as we want it to be; it doesn't perform on mission the way we want it to, based on these criteria. And we can say
maybe we should apply the concept to risk management as a whole. The other thing we might do is start treating risk, or risk management, as a hypothesis. We might say the green approach has a precision of 85% at a cost of $5,000 per risk estimation, and when we do a three-year retrospective, its predictive accuracy comes up at 25%. The red approach, which costs one fifth of that, has a precision of 60% and a slightly lower predictive accuracy. And then we can start asking: does it actually meet these criteria? I am, by the way, being intentionally somewhat provocative in the numbers I've put there, but I don't think they're
unreasonable. I think risk estimates are expensive, and I think our predictive accuracy is not as high as we hope it is. All of this is in the service of good business outcomes. Good business outcomes require decisions, they require execution, and they require luck. And the decisions are between choices: should we ship this AI feature, or should we make the product 20% faster, or should we fix these security problems? That's the decision an executive is going to be making. Risk is an input to that decision, but it's really an input to a decision between possibilities. It's not yes or no to this one thing we want to achieve; it's which of these things is
the business going to prioritize and focus on? And when we start to think about it like that, we get to the point where risk is only one of several important inputs to those decisions. And when we don't know the starting values of likelihood or impact, and we don't know how much our effort is going to change either of them, then management's sensitivity to other factors is much higher, because we can't say that this thing will reduce our risk by 25%, or this will reduce impacts by 30% over the year. And that is one of the reasons that executives say, "I don't want to bother with this risk stuff." And they often don't say that out loud, because we as a profession, as a community, often define what we do as risk management, and the executives know they need to do some form of security. So they don't want to say "don't bother with risk," but they often feel that way. So, to conclude: I believe that risk management is fundamentally limited by attacker choice. It's limited by the lack of iteration between tests that would allow us to really assess whether or not our estimates are as good as we hope they are, or need to be. And it's limited by these other factors in decisions that draw leaders away from risk. And I believe that threat modeling techniques like the four question framework help us do security even if we don't have risk. They help us deliver more secure products without doing risk estimation, and I think that's a really useful feature. And, by the way, Midjourney designed for me a high-heritage space unicorn, and I just sort of loved everything that it did in this picture, including that you can't close the face plate and have it breathe. So with that, we've got some time for questions. Do we have some mic runners so that the questions will get picked up? You can do that. All right, we've got
a question here. I will take the first question, and then he'll run the mic. I'll just repeat your question, sir. What you said, just for the recording, is: when you threat model, you hear people socializing the threats that you discover, they repeat them back to you, and it is hard to measure. Absolutely correct. And the way I tend to measure these things, which I think is the point of your question, is: number one, I ask the question, "Would you recommend threat modeling to a colleague?" Because that gets to the point of, do the people we're asking to do the work think it's valuable? And let me warn you, when you start out, you get like a 5% to 10% yes rate, and it's very depressing, but it can improve. The other question that I ask is: are escalations dropping? Do we have fewer problems at the end? And do you want to get the mic to the next person? Either that or you can repeat the question. There's two mics there. Okay, thank you. You're welcome. So, in a large organization you tend to get a threat modeling and risk management team who say, "These are the risks and this is what we're going to do about it," and a separate SWE team. A lot of the statements you made about managing the cost of risk management assume that the person doing the threat modeling is bearing the cost of the consequences of their modeling. But when you have a separate risk management team, they're just paid to find risk. They're compensated on finding risk. How do you structure an org at that scale? So, that is a great question, and one of the things that I think is happening today is that engineering teams are taking on more and more responsibility for what they deliver. They're responsible for quality, usability, reliability. And I like to move security and resilience into the teams who are shipping, because the problem that you raise is 100% true, right? If you separate the responsibilities, the risk team finds risks and wants to escalate them. And so what that means is
you think psychology fits in with all of this. There's a famous saying that people only get insurance after the flood. And a little tale that someone told me: they knew a software team that had found some remote code executions. They highlighted it to their upper execs, who decided not to do anything. And then all it took was a red teamer to come along and literally just speak to the dev team. They recreated it and made it real. It was the exact same risk, the exact same impact, but somehow the psychology of everything changed, because it suddenly seemed real. So I'm just wondering where psychology fits in with all this for you. It's a great story. I love it. And I think that what's happening is, I might describe it slightly differently. I might describe it as: management paid for a report that validated it, and then management decided to take action once someone else had said, "Yeah, this is a real thing." Because, and I don't know the situation, I'm going to guess that they did not trust the assessment of the people who did the work. And that's a problem. It's a psychological problem. But there was also an element of, "Okay, now that I've paid for the report, I'll fix the stuff in the report, because otherwise I'm wasting the company's money." There's a lot of psychology here, a whole set of talks that we can't do today. So, I love the idea. Go ahead. Yes. You applied the scientific method: hypothesis, measurement, iteration. But it does have some additional cost components of observability and measurability. Yes. Which is a big part of your equation. I'm wondering how you balance these things. Yeah, so great question: the work I'm suggesting has cost, and some of that is scientific cost that should be borne. It isn't something I
think one engineering organization should necessarily do, but it should be done more broadly. But also, if we're spending a lot of money on doing this work and it's not getting us to the results that we need, maybe we can stop doing some of the work and start doing some of this analysis with the money we save by not doing work that doesn't result in better decisions. Have you looked into using Monte Carlo or some sort of simulation to do risk analysis? Like, say, set up different risks with the MITRE ATT&CK framework and just see a kill chain? Okay, yeah: have you looked into using simulation or some sort of Monte Carlo methods? Yes. But the fundamental issue with Monte Carlo is that you assume a roulette wheel, or you assume a blackjack deck, right? What your model tells you is dependent on the assumptions that you put into it about probability distributions, and you can chain those to get more complicated things. But if your initial assumptions aren't right, the Monte Carlo isn't going to get you to a better answer. Okay, I'm seeing you nod there. Oh, the gentleman over there who had waited earlier has left, so I can't call on them. Why don't we take one more question, and then I will wrap up. Actually, I'm just going to put that up there.
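[Editor's note: the sensitivity to assumptions that the speaker describes is easy to demonstrate. Here is a minimal sketch, not from the talk and with all numbers invented: the same toy annual-loss simulation, run under two different assumed impact distributions, produces very different answers.]

```python
import random

# Toy Monte Carlo "expected annual loss" model. The point is not the
# numbers (all invented) but that the output is driven almost entirely
# by the probability distributions we assume going in.

random.seed(0)

def simulate(prob_incident, impact_sampler, trials=100_000):
    """Mean annual loss when an incident occurs with probability
    prob_incident and costs impact_sampler() dollars when it does."""
    total = 0.0
    for _ in range(trials):
        if random.random() < prob_incident:
            total += impact_sampler()
    return total / trials

# Assumption set A: modest, roughly normal impacts around $50k.
loss_a = simulate(0.05, lambda: max(0.0, random.gauss(50_000, 10_000)))

# Assumption set B: same incident rate, but heavy-tailed (lognormal)
# impacts. Nothing else changes.
loss_b = simulate(0.05, lambda: random.lognormvariate(11, 1.5))

print(f"mean annual loss, normal assumption:    ${loss_a:,.0f}")
print(f"mean annual loss, lognormal assumption: ${loss_b:,.0f}")
```

Same model, same incident probability, yet the heavy-tailed assumption yields a mean annual loss several times larger. The simulation cannot tell you which assumption matches reality; that is the speaker's point.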
Forgot to do that. Where in this risk management and threat modeling space is there room for automation and tooling? Okay, so that is a phenomenal question. I'm trying to figure out a two-minute answer to that. I think a lot of the tooling that is really useful is tooling that helps you keep track of your answers and expand your answers from short answers into longer ones. LLMs do a really bad job at answering the question, "Help me threat model this." They do a somewhat better job at answering the question, "Where are the spoofing threats to this specific design, stated this specific way?" The four-question framing really helps here. LLMs are actually pretty useful for helping you get to, "What am I going to do about this?" And if you say, "I am deploying this sort of thing in this Azure thing with these configuration things, and I need some help with authentication," they're actually not bad at that. The first problem with LLMs is the perturbation problem, where small changes in input result in large changes in output. And we always talk about adversarial perturbation; that is not what I mean. What I mean is accidental perturbation, where it just goes off the rails. And the second problem is reliability: how do we know if it's really, actually getting us a good answer? Now, all of that said, I said at the beginning that I'm a big fan of lightweight methods, and I think that asking LLMs for help is a really promising thing, because it lets us get cheaper and faster. To your question, it gets us to: how do we get this into the engineering org versus a risk management org? And I think that we are still in the stages of experimentation and figuring that out, but it's a great question and a really interesting area for research, and I'm really looking forward to seeing more talks about how you make it work and how you don't make it work. All right, do we have one more? Yes. Wow. Okay. Can you hear me? I think one of the things that your talk
has introduced to me is this: we talk about tech debt over the legacy of an organization, and it seems like risk management has its own debt, depending on which model has been used, for how long, by which teams within the organization, and whatnot. And I guess my question for you, going back to decision-making and iteration, is: how do you avoid overburdening an already burdened process when bringing it into an iterative cycle? Because it feels like you'd be asking for, or introducing, fatigue in that process and in the rigor that you're expecting. So, this is a great question, and it is really dependent on what we're working on and what our engineering process looks like. If the demands we're making are too high, no one is going to say, "Yes, this process feels like it's worth doing." And that is really frustrating to me, because I will sometimes feel that a more in-depth approach makes more sense. But in practice, when I try to roll out that in-depth, thoughtful approach and then nobody actually does the work, I don't end up with the result I want. Acknowledging that, and getting to the point of saying, "What can we achieve? What can we do in five minutes? What can we do in one minute?" to start getting some answers, and to start getting people to believe that these questions have merit and they're not just an exercise that that risk management team over there wants us to do, gets us to the point of being able to say, "Okay, the two-minute thing is working out. What if we spent 15 minutes on it as an experiment? Would that get us better results?" And so building up to deeper processes that the organization can stomach can take time. And that's really frustrating for a security person who wants to do the perfect thing now. Finding a way to get there takes a lot of work, a lot of iteration, a lot of experimentation. With that, I am
getting the zero-minute sign. I want to say thank you all for your time and attention. Enjoy the heck out of BSides. Thank our sponsors, and thank you all very much.