
Fire, Brimstone, and Bad Security Decisions

BSidesSF · 2025 · 45:32 · 145 views · Published 2025-06 · Watch on YouTube ↗
Style: Talk
About this talk
Wendy Nather — An important facet of resilience in cybersecurity has to do with recovery from making wrong decisions, such as a strategic choice in policy, design, architecture, or even procurement. How do you back out of something that seemed like a good idea at the time, but that you now realize is creating problems? And how can we stay curious in the face of being wrong, as well as design security for the future to make redirection easier? This session covers the need to plan for human fallibility – and may itself be wrong … https://bsidessf2025.sched.com/event/a7b5046389aee2e375cd44a56b33799a
Transcript [en]

Hi everybody. I'm here to talk about fire, brimstone, and bad security decisions. Or in other words: wait, do you smell dragon? You know how you're in a campaign and you're going someplace and you start to smell dragon, and you start realizing that maybe you took a wrong turn. I used to play D&D in my 20s with a group of young men, and they had a much higher risk tolerance than I do. So we'd be going along, and we'd be hearing something bad about a cave. They'd all decide to go in the cave, and I'd be like, I'm going to wait right here. They'd all get killed, and then I'd

have to sit there while they rolled up new characters over and over again, which is probably why I never really got into D&D. So the thing is, this is important for us, because recovering from wrong decisions is an important part of resilience. And by being wrong I don't mean mistakes like you clicked on the wrong thing or you forgot to carry the one. I'm talking about decisions that we make in security, in design or strategy or policy or whatever, that we keep making continuously because this is the direction we've decided to travel in. And the other thing I want to point out is that

usually in security, when you are trying to figure out why something is the way it is, you start pulling on a thread and you end up discovering that it comes from a decision that somebody made long ago that was probably a good idea at the time. In fact, it probably was, given the situation, but it's just not good anymore. It's not serving us. And so we have to figure out: okay, now what? An example of this is when we decided that it would be a really good idea to store credentials in the poetry-generating custard between our ears. That probably happened, well, I know it happened, when there was just one computer and you only had to

have one password, and we said okay, memorize it and don't write it down. That's how it all started. And as you know, there are escalating consequences as a result of that original decision. Now we have hundreds of accounts, and at some point somebody figured out that it took 90 days to crack a password and therefore we should be rotating those credentials every 90 days. And that became an audit standard for a really long time. Think about how long it took us to get rid of that, because it does not take 90 days anymore. We have rainbow tables. We have passwords stored in the clear on sites where they can just be

scooped up by attackers. There is no point in a lot of the rules and policies that came as a result of that. It turned into make them longer, make them more complex, don't use the same ones everywhere. And before you know it, up sprang a whole part of the industry around password managers. Hello, 1Password being one of them. We wouldn't have a job if it weren't for the fact that we are now trying to shield users programmatically from those passwords and from the accretion of all of that as a result of this one decision way back when. So this is the sort of thing I want to talk

about today. Because I think it's not incident response, in that it's not something on fire; it's a slow burn, but we are all being burned by it all the time. The other thing that I want to make clear is that I want to talk about us being wrong, me being wrong, not other people being wrong. Because unfortunately, we also have a whole part of the industry dedicated to what I call scanning and scolding. Like, oh, you missed a spot. Oh, I found this thing that's broken that you need to fix. Good luck. Or, here's a fix. Have fun. I'm out of here. Here's a patch. Just patch. So, if you're in that line of work,

I'm sorry, but as somebody who was born and bred blue team and has been working in this for decades, it's really annoying to have a whole bunch of people pointing out what's wrong with me, what's wrong with the decisions that I've made, and then going, "Yeah, well, we're going to come back and check on you again." That's not helping. That's being helpy. Please don't say, "Oh, I'm helping you improve," if the only thing that you're doing is pointing out what they're doing wrong. Let's turn our focus inwards and think about what we're doing wrong, just for the purposes of this talk. So, how do we recognize when we've

made a wrong decision? I want to talk about what I consider to be our original sin, which is antivirus. Somebody decided, oh yes, this could be exploited. We'll just fix it in a different piece of software instead of fixing the original software. And thus was born, I think it's a $20 billion industry now, it depends on which analyst you quote, but a multi-billion-dollar industry around having security separate from the thing that it's trying to secure. Just insane. The ongoing, cascading consequences of that decision are why we're all here today, pretty much. And if you want to argue about this with me, I would love to argue.

Let's go out on the patio and have a good knock-down, drag-out fight. So how else do we recognize a wrong decision? Well, first of all, if the environment or the situation changes. Think about anything that you've decided to do and say, well, okay, now we've been acquired by this large company, things are different. We've moved into a different geo and the rules are very different there, or the technology has changed and therefore the situation is different. We should be rethinking what was working for us before. Another clue is trying the same thing over and over again. What's that definition of insanity again? Some things we keep trying, and again, apologies to people who

work in this line of work, but how many decades have we been trying to do security awareness training? If we just train people harder and louder, maybe it'll work this time and they'll stop clicking on things that it's their job to click on. Why did we decide that this was the way to handle it? Why did we decide, again, that what security awareness really is is deploying block lists, dynamic block lists, into the poetry-generating custard of our employees, expecting them to be the front line of defense and storing these things in their brains that change all the time. We know how well block lists work or don't work. We know how

well memorization works or doesn't work, especially among certain populations. Why are we still trying the same thing over and over again? So think about it: if you are trying the same thing over and over again and it is still not working, at some point it is time to think about what assumptions are wrong about this. What should we be trying instead? Maybe it shouldn't matter whether they click on the wrong thing. That would be really nice. Other ways to recognize a wrong decision: backlash, whether from your stakeholders, from your customers, from your users. I remember when I was working for the state of Texas and I

had a deputy who I had hired fresh out of 21 years in the military, and so he had certain cultural expectations around security that were not really appropriate for the civilian world, but he brought them anyway. And for the first time we set a 15-minute screen lock on all the workstations. So after 15 minutes of idle time they'd lock up, and people came to him and complained, and he just said, "Work more, then it won't lock up," and I had to deal with the aftermath of that. So yeah, those sorts of indicators tell you that maybe that policy was a wrong policy: getting the backlash,

listening to the people who are affected by it. Another indicator that maybe you have been wrong is if people are trying to avoid what you have in place. Again, at the state we discovered as many as 10 users sharing one login that we had assigned to one person, and the reason behind it was that we could not completely federate our identities. We would say to a school superintendent, yes, we believe you when you say that this person works for you, but we have to have the final decision on whether they can access the student data that we are stewards of. So it can't just be in your hands to

say yes or no. You start by saying yes, and then we as application and data owners make the final decision. And this was all done by a form that was faxed. Yes. Actually faxed. And people found that so onerous that they just started sharing one login for their district role. And you can't really blame them, because for them it was not an individual responsibility. It was a role that they could hand over. It's your turn to report to the state. Can you log in and do this? I'll just hand everything over to you. And we would discover this: if one person forgot the password and called into the help desk and we

changed it, then nine other people would call us and say, "I can't get in." And that's how we found out. So avoidance, when they're avoiding a process, again, it's a sign that they don't understand it, they don't agree with it, they don't appreciate it. It's just too much friction in trying to do their job. Evasion: I'm sure you've seen people evade security controls and share hacks with each other about how to do that. And then exceptions, because for every policy there's an equal and opposite exception. That's my law. Somebody named it Nather's Law a long time ago, so I'm just going to go with it. When you are approving

exceptions, if they start to pile up, that's a pretty good indicator that the policy it's an exception to is not working very well. If you think about it, and I do sometimes, I can't really help it: every firewall rule is kind of an exception, isn't it? It's like, we're not going to let anybody in except for this and this and this, and also this range over here and this protocol over here, and this and this and this, and by the time you get to allow any/any, you know you're in trouble. So keeping track of the exceptions that you're having to approve for a policy is a good indicator of whether it was wrong

at some point. Another good indicator is technical debt. Now, a lot of technical debt is unavoidable. You're going, okay, we don't have time to fix this. We've got to do this. We will get back to it later. Right after this launch, right after this push to prod. Nick Selby wrote a really good article, which I quoted here, on tech debt. And he gave a really good thumbnail rule on how to spot when you are incurring too much technical debt. And the indicator is really short commit messages. Like, if developers are so rushed that they do not have time to document what they're actually fixing, they just do a

commit that just says "fix some," "fix some more," "fix that," then that's an indicator they're going too fast. They may be working too hard. You may be incurring more tech debt than you really wanted to have. But it can also be an indicator of things that you are now realizing you have to tweak or fix or move or change or pivot on. And that's part of the tech debt that indicates that some decision you are continuing to make could be wrong. A lot of these were made with good intentions based on bad assumptions, such as the assumption that security would always be a control function, not a service function. Now, years ago, it was the case that you

didn't touch a computer unless somebody issued it to you: somebody like your employer or a higher-ranking officer or whatever, some authority gave it to you. They controlled it. They dictated what you could do with it, what you couldn't do with it. They managed it. They could take it away. And all of security came along because of that, because you were working on behalf of an organization and their interests came first. That's not the case anymore. Tech, as we all know, has exploded. It's become democratized, and security needs to be democratized as well. Now, I talked about this five years ago here in San Francisco at RSA: that security needs to serve

everybody. It needs to serve farmers and artists and attorneys and doctors and my kids and everybody. I like to tell the story that I've never used parental controls. And one day my teenager came to me and said, "Mom, I need you to help me set the parental controls on my phone." And I said, "Honey, why?" It's because she wanted to enforce her own study times. She wanted to disable her social apps so that she would be forced to get work done. And I said, "Okay, great. You set it up the way you want it. I'll put a password on it, and whenever you want to change it, come back to me."
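That parental-controls story is really a self-binding control: the owner chooses the policy, and someone they trust holds the secret needed to loosen it. A minimal sketch of that shape, with all names and the passphrase scheme being my own illustration rather than any real parental-controls API:

```python
import hashlib

# Hypothetical sketch of a self-binding control: the owner picks the rules,
# a trusted helper holds the passphrase, and loosening a rule requires the
# helper's passphrase. Nothing here is a real parental-controls API.
class SelfBindingControls:
    def __init__(self, blocked_apps, helper_passphrase):
        self.blocked_apps = set(blocked_apps)  # chosen by the owner, serving the owner
        # Store only a digest so the control itself never holds the secret.
        self._lock = hashlib.sha256(helper_passphrase.encode()).hexdigest()

    def is_blocked(self, app):
        return app in self.blocked_apps

    def unblock(self, app, passphrase):
        """Only succeeds with the helper's passphrase; returns True on success."""
        if hashlib.sha256(passphrase.encode()).hexdigest() != self._lock:
            return False
        self.blocked_apps.discard(app)
        return True

controls = SelfBindingControls({"social", "video"}, helper_passphrase="mom-only")
print(controls.is_blocked("social"))        # True: the owner's own rule is in force
print(controls.unblock("social", "guess"))  # False: the owner can't undo it alone
```

The point of the design is the split: the policy serves the owner's interests, while the passphrase held elsewhere is what makes it binding.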

And so she did that, but it was serving her. It wasn't serving my interests. It was doing what she wanted. And that's what I think security should be able to do for everybody. But if you think about all of the security controls that we have now, the efforts that we make are still for the purpose of protecting the interests of some authority or organization, not what the people need from security. So think about that as well. Another assumption is that everybody should understand it the same way. When I got into tech, mumble mumble, 40 years ago (hi, fuzz face), I just discovered today that somebody I

talked with online, we actually met one time 40 years ago at a party. That was pretty scary. Back then, though, everybody had the same background, the same understanding. So you could say that something was intuitively obvious, and you'd be pretty right. But that's not the case now. Everybody understands things a different way. They're also affected by security in different ways. One example, and this blog post by Maggie Angler I think is wonderful: she talks about the people who are affected by zero trust policies because they are homeless, because they only have one phone that they share amongst their family, and multifactor authentication is just not going to work for them.
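One way to avoid baking that assumption into a design is to pick the strongest second factor a user can actually use, rather than hardcoding one. A hedged sketch; the factor names and preference order below are my own illustration, not from the talk or any real product:

```python
# Illustrative sketch: choose an MFA method from what a user actually has,
# rather than assuming everyone owns a personal smartphone. The preference
# order and method names are assumptions for this example only.
PREFERENCE = ["security_key", "totp_app", "sms", "shared_kiosk_code"]

def pick_factor(available, preference=PREFERENCE):
    """Return the most-preferred factor the user can actually use, or None."""
    for method in preference:
        if method in available:
            return method
    return None

# A user with only intermittent library access might have no phone at all:
print(pick_factor({"shared_kiosk_code"}))  # falls through to the last resort
print(pick_factor({"sms", "totp_app"}))    # totp_app outranks sms
```

The design choice being sketched is that capability is discovered per user, so the policy degrades gracefully instead of locking people out.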

Or tracking one device. They don't have a device. They go into the library and use whatever's there. So a lot of the strategies that we are used to using today are assuming things about our user base that just don't hold. When I was working for Duo, I learned that apparently in India they disable SMS service overnight because there's so much SMS spam, which means that if you're using SMS for authentication, it's not going to work overnight. So all these sorts of things that we discover aren't necessarily so have led to things that we now have to backtrack on. Another thing, and this is a really good place, I think, to talk about

this, this being the Bay Area: don't assume you're the first one to have thought of something. I'm sorry. All the analysts are going, "Yeah." When I was an analyst, I talked to literally hundreds of vendors. There is very little new under the sun. And if you come in to me and say you're the first or the only one to be doing something, I'd immediately want to prove you wrong. And nine times out of 10, I could. Now, that doesn't mean that you shouldn't come up with things or reapproach things that didn't work well. They were tried 10 years ago, they didn't work, but now the

tech's evolved, we've learned a lot, and we're going to try again and try it another way. That's great. Just make sure that you do your homework and that you understand that a lot of people have tried a lot of things before. And also, somebody thought or assumed that it would be a great idea to make assertions without fact-checking. Yes, this is a subtweet. If you ask ChatGPT for my bio, it will claim that I worked for the NSA. I have never worked for the NSA. And this is where you make the joke about, well, of course she has to deny that she worked for the NSA. No, I mean,

my bio is everywhere. Why couldn't it just go scrape LinkedIn and throw it out there? No, it had to make stuff up. And I guess, statistically speaking, if you are a woman with a certain background in cybersecurity, at some point you will have worked for the NSA, because that's what it claims. But think about, as tech is evolving, as our attitudes are evolving, as we are dealing with the cultural implications of something like this: not only are people making assertions without fact-checking, or choosing their own facts, but we are also building technology that does that. Here's a random example. So I was at a committee

meeting in DC, and I found somebody's cap, somebody's DARPA cap, left over after we had dinner, and I said, "Okay, I'm going to bring it back the next day and turn it in to lost and found." And just for fun, I put it on, and a friend took a picture of me. It turned out the next day that actually the head of my committee had left that cap, so I brought it back to him, and he said, "No, you can keep it." So I still have it. I kind of feel like it's stolen valor to wear it, though, because I've never worked for DARPA. But I uploaded this picture to a gen AI model,

and everything that it assumed about me is based on the fact that I'm wearing a DARPA cap: everything from the assumptions about my salary range, things I would be interested in, my political leanings, my religious leanings. Everything is based on my wearing that DARPA cap, and also the assumption that therefore I must be in the National, uh, what is it, I can't even read this, the National Renewable Energy Laboratory. Actually, it's the third-floor atrium in the National Academy of Sciences Keck building, but whatever, we'll let that go. And I don't know why it says I radiate an eerie joy. I still haven't

figured out that part. But you can see all of the results based on that one assumption, the fact that I just wore a borrowed cap. And if you think about how that affects everything that we decide in security, the strategies that we make, how many of those go back to an assumption that was well-intentioned but just didn't work out? At least they're saying that they can sell me a whole lot more DARPA swag. So, you know, that's good. The other thing that we don't want to assume is that every user is having a good day. Now, I know this is kind of the Clint portion of the keynote. I already mentioned

things like people who are homeless, who do not have the equipment that we assume they must have in order to function with our security controls, and who can't get away from that tech anymore. You can't apply for a job anymore without going online and being verified in some way. There is so much going on now with artificially generated résumés flooding HR that they have to defend themselves by also using automated screening of résumés. So there's this battle going on, and people who are searching for jobs are getting stuck in the middle of it. But there's also the group of people that Leigh

Honeywell describes as being cognitively vulnerable, and she kind of means the sort of people who tend to fall for online scams. Unfortunately, that often also equates to being old, and as an old person I kind of resent that. One time, one of my kids complained that I knew too much about tech and I wasn't supposed to because I was the wrong generation. And I said, "Honey, who do you think built the internet?" [Applause] So it's not just people like that. It is all sorts of people who are struggling, people who have incidents, people who have things happen to them. Clint talked about some of

this yesterday: people who are suffering from mental illness. If you're in the middle of a psychotic break or a hypomania episode and you suddenly cannot follow the instructions that you knew before, you cannot handle the security design. It comes and goes. I had to deal with the tech for my parents as they declined over years. They would have good days and bad days. And so I had to set things up so that I could say, "Mom, you look tired. Do you want me to just log in and pay the bills for you today?" Because I didn't want to take away their autonomy, but I needed to be able to do things for them when they couldn't

handle them themselves. And that comes and goes. We kind of assume when we set something up that, okay, they've seen the training video, the help documentation is right there, why can't they do it? So I'm going to share something that I've only talked about in public one other time, last weekend. This is my husband, Marcos Va, and he passed away in 2023, two weeks after he was diagnosed with brain cancer. By the time we got him into the ER, he could not speak anymore. He was only minimally responsive, and the only thing that he left behind was a master password for his password manager on a scrap of paper

in a secure location. But he didn't tell me where it was. And he had like 10 laptops and all sorts of things. Over our 30-year marriage, he ran our infrastructure. He built everything. I didn't touch it. It was his. I didn't have a login on any of his stuff, because I respected his privacy. So when he was gone, you know, we were both sysadmins. We both had the same background. I should have been able to figure out how to find his password manager. I should have been able to do it. I just could not. And I struggled. I struggled a lot. And finally, I messaged a friend and said, "Is there anybody at Austin

Hackers Anonymous that I could hire to help me with this?" And he said, "Oh, I can come over after work on Thursday." So he came over. That friend is HD Moore. HD, are you in here anywhere? Thank you, HD. So, being HD, he broke into everything for me. He cracked everything. He mapped everything with runZero, the whole network. We had 12 Raspberry Pis in a cluster. I don't know what they were doing. I still don't know. He had all sorts of stuff. This is a picture of part of what he had set up and running. So HD set up everything

for me. He cracked the passwords. He reset them. He said, "Here, I set up a VPN for you," because I was living in Colorado at the time. He said, "You can log in at any time. Just download the client." I still couldn't do it. There was something in my mind that just blocked me. And a year went by, and I would come back to this house and look at this every month, and I still could not touch it, because it was the last part of him. You know, if you work on something, if you build something over years, it's a reflection of your mind. It's a reflection of your soul. So much of

you is in there, in the technology that you've set up. And that was the only part of him that was left running in an empty house. And I just, I should have been able to deal with it, and I couldn't. So another friend of mine, Pablo Breuer, finally came down to Austin, and he laid hands on keyboard and started doing the backups for me, which got me to the point where I could start going through what we needed, migrating what we needed, and then finally shut the last part of him down. So everybody has really bad days or months or a year or longer where what we expect from them implicitly in their

part of security is just not going to happen. I think there are four laptops in this picture, if you can find them; some of them are under other things. He had like eight or nine of them, I don't know. And there was another row of servers behind me where I was taking the picture. This is not everything. Oh, and also he ran our email server. And yes, while I was being interviewed about it, our email server was in the bookcase behind me, our own email being run right there. So, you know, people have really bad days and they struggle with it. And the military understands this really,

really well. So this instruction, "front toward enemy," is not because soldiers are stupid. They're not. It's because when you're having the worst day of your life, you're in combat, you're injured, you're cold, you're scared, your best friend just died near you, you need something to be as plain as possible to be able to use this. Same thing with the back of this mine. I don't know if you can read it. It basically says that the smoke from burning these things is poisonous. Don't ingest them. Don't burn them. And it's not because they were trying to eat them. It's because apparently in Vietnam,

soldiers used to burn these to heat their rations. And so not only does it say do not burn this, it explains why. Which is also what we need in our security design: not just to say don't do this, because not everybody will understand it the way we do. We've got to explain why, and as plainly as possible, even if we feel that it is pitched below what people should understand, because security needs to be usable on your worst day, not just your best day. And this is something really important. Clint did his talk yesterday. He was very vulnerable about what he went through. I've shared this with you. Lots of

people have gone through this and can't talk about it, but it's part of the human condition, and it's something that we need to incorporate when we are making strategic decisions. Now, now that we know that we're wrong, how do we change direction? I did not bring my sword today because I had to travel; I figured TSA would object to it, but I love seeing the swords around here. Now that we're going to be slaying these dragons, how do we do it? Well, first we've got to get through, or past, our own minds. It's really hard to admit when we're wrong. The ego gets in the way.

There's also the sunk cost fallacy, which also comes with technical debt. Okay, it's going to cost too much to fix this. We're just going to burn it down and start again, or we're going to pivot. Or, we have to keep going through this, because we've already come this far. There are lots of fears about external reactions; we're so externally driven, and some of them are very legitimate fears. But at the same time, if we believe that we need a change of direction, then we have to be brave enough to say, look, this was a good idea at the time. It's not anymore. We're going to move in this direction. We're going to move on.

And I do have to admit that maybe boomers like me are not the right people to make this change in direction. My kid actually sent me this, the Boomer and Zoomer chasm, but it's true. We solved things in a particular way decades ago, for reasons, and maybe we are not the best ones to navigate out of this. Maybe the next generation, who's not burdened by those initial assumptions and that history, are the ones to lead us forward. So just think about this if you are a security leader now. Our brains always do shortcuts. They're always building shortcuts: we've always done it this

way, this is the easy way to think about it, we're just going to keep thinking about it this way. And it's hard to get out of that rut, to get out of that track. But this is where we need everybody to bring in their perspectives. So, how can we enable rerouting, changing what we're doing? First of all, if there is a way in your business to tie it to something else that the business really cares about, like performance upgrades or getting into a market that we couldn't get into before, this is a good opportunity to say, while we're at it, this lower-level strategy that we've been following, can

we change it for this new initiative, for this market entry? Is there a way to do some of this refitting or changing, depending on how much work is involved, as short-term projects? Just slice it up and say, for every project that we do, we're going to do a little bit of rework on this side. When I was at BSides Seattle last weekend, I talked to somebody at a very, very large company, and I asked, "What percentage of your resources do you think are being used not on developing new code but on improving and optimizing what you have?" And he said about 60 to

70% of the people that they have are working on optimizing and improving what they already have. That was pretty eye-opening for me to think about. But I think it makes sense, if you can convince your leaders, to say we really do need to put that much effort into fixing and jiggering and redoing what we already have. It's also great to keep a technical debt register, not just a risk register, even though they kind of map together in a lot of cases. And keep track of how you're going to fix this over the longer term: we're going to do this part, then we're going to do this part, we have to wait for this to

happen and do this part, and so on. So those are some things that you can try doing. And then finally, how do we predict the wrong decisions? How do we avoid them? Any ideas? That's a trick question. The answer is no, you can't. Heather Vescent is a futurist who I've talked to. She's interviewed me for some of her research, and she says, "You cannot predict the future." Sorry, you can't. Unless you control all of the factors going into it, you cannot predict what's going to happen. You may identify all the factors that could come together, whether they're cultural or medical, political, financial, all sorts of things. Even if you

think you know what's going to happen, you can't predict the timing. Remember when we talked years ago about how sooner or later we were going to have some kind of pandemic? We didn't know when, and it turned out 2020 was the year. So you can have some pretty good ideas about what you think is going to happen in the future, but you cannot predict it. You cannot control it. The most you can do is be aware that things are going to change. Now, one thing you can do is try changing the rules of the game. You know that old Wayne Gretzky quote: I don't skate to where the puck is, I skate to where the puck

is going to be. I hate that. That is such a facile quote, because of course if it's skating along the ice, you know where it's going to be. Try predicting where the puck's going to be in 10 minutes, in the middle of the game, in 30 minutes. You can't do it. But what if you could change the rules of the game so that you didn't have to care where the puck was going to be? Then you'd have something. What if you threw all your people in front of the goal? What if you just parked everybody? Then you wouldn't have to care where the puck was. You wouldn't

have to do that predicting. Now, of course, that's not the point of the game; the point of the game is to score points. So that's not really an appropriate solution. But I'm just saying, look at your assumptions when you're doing these things. And if you're worried that we don't know if this is going to happen, figure out whether you can change it so you don't have to care. Think about those rules of the game. Now, let's talk about how we're going to put things in place to be able to recover from wrong decisions going forward. Even if we don't know what they're going to be, and we can't predict them, we know there are

going to be some, we just don't know which or when. First of all, let's try emphasizing agility over specificity. By this I mean: we don't know exactly. We're going to do these things, but we might change our minds later. We're going to see how this plays out, and then we're going to pick something more specific. I don't know if you've seen the diagram that shows the difference between precision and accuracy. Precision is getting all the bullet holes in the same area of the target, and accuracy is actually getting them in the middle of the target, where the bullseye is. They're two different things. So try for less

precision. It's going to be hard to do accuracy. But you can emphasize agility and say: we're going to do these things, then we're going to see how it goes, and then we're going to do the next step of planning, instead of trying to plan it all. When I was working for the state, I had to do a biennial budget, so I had to predict what we were going to spend for the next two years. I had no idea what was going to come out in two years: what new technology, what new attacker techniques, what circumstances. No idea at all. So you have to be maximally agile, to be able to say we are

going to change this, we just don't know how. The other thing is to work in the ability to do incremental changes and rollbacks. Whoa, that didn't work; let's roll that back immediately. Build the ability to react into what you're designing and building: we're going to roll out this new policy, we know it might be wrong, how can we recover from it really fast? Another thing I'd encourage: a lot of people do success criteria, but let's try to do failure criteria, too. This is how we're going to know if it's going wrong, ideally as early on as possible. How do we know if this is not

working, as opposed to whether it is working? And also, please don't just measure everything using numbers. I know quantitative analysis is such a thing; people love numbers. You know that saying, you can't manage what you can't measure? That's not true. That's not true at all. You can see immediately that things you can't measure are going wrong, are not working. You can feel the frustration when you're trying to deal with an interface that's badly designed; you know something is wrong, but you can't say, "Oh, this is a five," or whatever. When I was working for a Swiss bank, I had the head of equities come to me and say, "If I plug

this in, this modem in, how much more insecure are we going to be?" And I said, "I don't know, five." But that's the way he thought. So I did a nice Excel spreadsheet with a graph, even though those were ex recto numbers, because he wanted to see numbers. I said, "Okay, I think this is where we're going to be if you plug this in." You do not have to measure things in order to know which way the wind is blowing. Even the word "measure" implies numbers. You do not need to do that for everything. It's what we're in love with because it feels like we have more control. But there are

lots of indicators, the ones I talked about earlier, that you've made a wrong decision: how much backlash you get, how much avoidance you see, how angry people are. But if you like numbers, and I'm not saying all numbers are bad, here are a couple of people doing great research work on this. Jay Healey at Columbia University has a great presentation that he gave at Black Hat last year called "Are We Winning?" How can we tell, as defenders, if we are winning, if our security is getting better? He does have metrics in there that you can actually use, but he has other indicators that

are not measurable, like whether we see threat actor groups trusting each other less and less. You can't measure that, but there are ways to see it. So think about that. Adam Shostack is doing this research from the perspective of: what if we treated this like an epidemic, as a public health issue? What indicators would we look at? They have so much work that if you search for both of them, you'll find the body of work, and it's interesting reading. And then also, beware of the one-way door. We discussed this inside of 1Password, too. Is this a one-way door? Is this one where it's going to be prohibitively expensive to

change your direction once we do this? And I love this picture because, depending on how you look at it, it's either opening onto a sunny day with fall leaves and it looks beautiful, or it could be the flames of a dragon. You just don't know. It's really hard to tell what's on the other side of that door. You can also spend time planning for regulations and standards, even if you don't know what's going to come. When you see enough systemic risk building up, there's usually somebody who decides we need to start managing it on behalf of a group, or society, or whatever. That's where regulations come from, and they don't spring up

overnight, despite what it might look like; it takes a long time. So even if you are at a small startup and don't have your own attorney who can spend time paying attention to this stuff, do keep your ear to the ground, because if regulations or standards end up on the wrong side of what you have planned, it's going to be a problem for you. And also remember the cyclical nature of this industry. Things keep changing; they go up and down and up and down. There's been centralization, decentralization, centralization, depending on whether at any given time we have more network bandwidth, so we're going to shovel everything over to somebody else's computer. And oh no,

that's too much data, we're going to bring some of it back. And the processor speed gets better, so we're going to do it locally. And now the bandwidth is high again, so we're going to shove it over there. There was centralized computing, distributed computing, cloud, you name it. It is always going to go up and down, so plan for that as well. Now, I want to leave you with the message that it's human to be wrong. We should be curious about what we're doing and whether it's working. It's like a really big experiment. It's hard to admit when we're wrong, but

that's the only way we're going to get through this as an industry. We are making this up as fast as we can. We're figuring this out, and it's called learning. It's a good thing. I saw a great sign by a researcher that said, "Of course we don't know what we're doing. It's called research." Everybody should try this. So, I wish you a great rest of the day. Let's go out and learn together, and let's be wrong together. Okay. Thank you.
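The talk's advice on incremental changes, fast rollbacks, and explicit failure criteria can be sketched in code. This is a minimal illustration, not anything shown in the talk itself: the policy flag, the criteria, and the metric names are all hypothetical.

```python
# A hypothetical staged rollout with a built-in rollback path and explicit
# failure criteria, so a wrong decision can be reversed quickly.

def rollout(enable, disable, failure_criteria, metrics):
    """Enable a change, then check every failure criterion against the
    observed metrics; back the change out the moment one trips.
    Returns True if the change sticks, False if it was rolled back."""
    enable()
    for name, tripped in failure_criteria.items():
        if tripped(metrics):
            print(f"failure criterion hit: {name}; rolling back")
            disable()
            return False
    return True

state = {"policy_on": False}
ok = rollout(
    enable=lambda: state.update(policy_on=True),
    disable=lambda: state.update(policy_on=False),
    # Decide up front how you'll know it's going wrong, not just right.
    failure_criteria={
        "helpdesk tickets spiked": lambda m: m["tickets"] > 2 * m["baseline_tickets"],
        "users routing around the control": lambda m: m["bypass_rate"] > 0.05,
    },
    metrics={"tickets": 120, "baseline_tickets": 40, "bypass_rate": 0.02},
)
print(ok, state["policy_on"])  # prints "False False": tickets tripled, so it rolled back
```

The point of the sketch is only that the disable path and the "this is how we'll know it went wrong" checks are written down before the rollout, not improvised afterward.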