
Talking to Devs about NHI Security & Governance

BSides 312 · 2025 · 48:46 · 8 views · Published 2025-11 · Watch on YouTube ↗
Category: Technical
Style: Talk
About this talk
Non-Human Identities (NHIs) now vastly outnumber human accounts and represent a critical attack surface. This talk guides security professionals through the OWASP Top 10 Non-Human Identity Risks and other published frameworks for securing NHIs, then argues that effective remediation requires governance aligned with IAM and organizational change. Rather than asking developers to simply "do it securely," the speaker shows practical patterns—like vault-backed credential replacement at the pull-request level—that meet developers where they work.
Original YouTube description
Non-Human Identities (NHIs) outnumbered humans 45 to 1 in 2022. Given that their access abuse is one of the most easily exploited attack paths, we really need to get a handle on NHI security right now. But how do we start? What do we even tell the developer? We can't tell them to just stop building applications, and secrets security alone has not addressed all the concerns NHI security requires. Once again, OWASP is here to shed some light on the situation as this issue becomes a major, mainstream concern. In January of 2025, they released the Top 10 Non-Human Identity Risks, which highlights exactly how NHIs keep getting exploited and gives us a guide to raising awareness and to prioritizing and remediating the situation inside our organizations. But they are not the only ones who released a guide or even a top 10 list. This talk will guide us through the commonalities of all the published wisdom around NHI security, and we will end with a discussion of governance as a path forward — one that will need to go through IAM and, eventually, the whole organization. ABOUT DWAYNE: Dwayne has been working as a Developer Advocate since 2014 and has been involved in tech communities since 2005. His entire mission is to "help people figure stuff out." He loves sharing his knowledge, and he has done so by giving talks at hundreds of events worldwide. He has been fortunate enough to speak at institutions like MIT and Stanford and internationally in Paris and Iceland. Dwayne currently lives in Chicago. Outside of tech, he loves karaoke, live music, and crochet.
Transcript [en]

Good. >> Hey everybody. Oh, thank you very much. Um, I should update that, because yeah, that was a while ago I spoke at MIT. That time I met Richard Stallman, and he's a weirdo, but that's a different story. That's a whole different talk. I highly recommend getting the slides for this, because I have 109 slides in a 40-minute talk, or 50-minute talk, and I go fast. So, I was hoping there were more people in the room for the very first part. I know people will wander in, but if you really want to go see the other talk, I would not be offended. This is your moment of like, okay, I can just see

this. And if this makes perfect sense to you and you understand why this is very important, then feel free to go watch something else or go on with the rest of your life. So, who here works in security? Cool, cool. All right. Who here's a developer? Awesome. Anybody in DevOps? Raise your hand if you have ever deployed something to the public-facing internet and someone's touched it. Yeah. Did you push it through something you designed like this? I used to be a Drupal developer, so I would have the traditional stack up on top. Actually, I was running PaaS at that point, platform as a service, so I was

letting this platform that I used to work for, called Pantheon, take care of all the backend stuff, and I was just doing the Drupal stuff up front. They were even hosting all the database stuff for me. Did you use a process that looks something like this? This is just a git flow. I'm pushing my builds up and they're failing, and then I'm fixing them and pushing up, and then it fails somewhere else, and then I fix that and push it and push it, and eventually it gets out there, deployed to the world. Yeah, that's what developers do. That is what they do day in and day out. And

they don't do it by saying, "Hey, do it the securest way humanly possible." They say, "Do it by now or don't get paid, or go work somewhere else, and AI is going to take your freaking job." >> And so they end up doing things like this, which I know is hard to see on a screen this small, but that says, "Insert your bearer token here." This is code that ChatGPT gave me two days ago when I said, "How do I do a basic API call in PHP?" Last week I spoke at a PHP conference, so I got PHP on the brain again. I love PHP. PHP 8.3 is amazing and 8.4 is going to blow your

mind. Um, yeah, seriously, it can do everything Go can do, but better. And we have Laravel and the spaceship operator. Okay, enough nerdy [ __ ]. So, that's the real thing. Is it any wonder that they get really annoyed when people in security come over and say, "Hey, you did this wrong"? Because they didn't care. Not that they don't care about security; it's that they got told to do a thing by a date, and that's what there was. You were on their ass about this, and now they're mad at you. All they want to do is go home and play Monopoly with their family. Like, I don't know if it would've been Monopoly.
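The "insert your bearer token here" pattern the speaker describes is easy to sketch, along with the minimal fix. This is a hedged Python sketch, not the speaker's PHP snippet; the API URL and environment variable name are invented for illustration:

```python
import os
import urllib.request

# Anti-pattern: the kind of line an LLM happily suggests.
# TOKEN = "sk-live-abc123..."  # hard-coded bearer token: never commit this

# Minimal improvement: pull the token from the environment (or a vault
# client) so it never lands in source control. API_URL and API_TOKEN are
# placeholder names, not a real service.
API_URL = "https://api.example.com/v1/items"

def fetch_items() -> bytes:
    token = os.environ.get("API_TOKEN")
    if not token:
        # Fail loudly rather than falling back to a hard-coded value.
        raise RuntimeError("API_TOKEN not set")
    req = urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The point of the hard failure is that a missing secret surfaces at startup or in review, instead of a working-but-leaked literal surfacing on GitHub.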

I don't know what they were actually doing at their house, but I love Monopoly. That's why I love this picture. I looked up "family playing Monopoly," and I use this in a lot of talks, because this is the goal. If you think the goal of security is to pop boxes and be cool, do something else with your life. This is the point of security: to make sure that people can get home to their families and do their freaking jobs without having to stay up till 3 in the morning because their boxes got popped and their domain authority is now somebody else's problem. So this talk is about what we need to

tell them about securing not the human access to the world, but the horrible, horrible problem of the nonhuman. Because there's only two things in the world: a potato and not a potato. And for some reason we as an industry have recently settled on this concept that there are humans and nonhumans. That is about as helpful as "there are people and not people." That's literally what it says. Makes me mad. I'm Dwayne. I live here in Chicago, over by Wrigley a little bit. I help people figure stuff out. That's my entire mission in life. I am very happy to help you figure out some of this stuff. I warn you up front: some of this stuff is experimental

and out there, but it's also IETF stuff coming down the pipe that you really need to be aware of, about how authentication works and how we mix up authZ and authN so hard it hurts my brain. I'll talk about the company I work for at some point here, I promise, but I don't care about that right now. Right now, I care about talking to y'all. So, if you ever want to connect with me, there are my socials, and I'll put the slides' short URL up at the end. I always push all my stuff online; it's already online for you. Quick shout out to the sponsors. Without sponsors, we

wouldn't be able to afford to be here, and we couldn't have BSides, period. How much did anybody pay for this? Like 50 bucks? >> Yeah. I'm going to be in Denver tomorrow; the cheap tickets there are 1,200 bucks. It's ridiculous how cheap we can do this for. Thank you to all the sponsors out there, and thank the community sponsors, because we are truly a community event. That means something. So, back to the point: humans versus nonhumans. What's the difference? Well, hopefully I don't have to explain what a human is to anybody in this room. But in short, you have a fingerprint that you can use to unlock a passkey, or you can MFA.

From my point of view in the world, that's what I care about. You have a retina I could scan. There's some identifiable thing that's going to be true from moment A to moment B. IETF has this thing I'll talk about later; don't worry about WIMSE right now. I love the name, but we'll talk about it later. But a workload: a workload is a running piece of software executing for a specific purpose. Typically, it interacts with the other parts of a larger system. It can run for any amount of time: seconds, days, or years. OWASP says non-human identities are this. It's a long explanation, but basically,

because OWASP sees the whole world, and thank goodness they do, from the point of view of the application. Application security is right there in the name. We're not talking about your devices at this point. I mean, we are, because that's what's running the application. Not the network. Well, it is, because what else do you have a network for except to get to the application and back? But their whole thing is: look, these things have to authenticate, and most of the time they're using a secret. This report bothers me for a couple reasons, because what's an "identity-related breach"? I had to stop and think about this, and I've reread this report multiple times, and I love CyberArk, but

that's a problematic word in my opinion. Instead of going directly at a system, through the bits and bytes and the vulnerabilities, exploiting the shell or whatnot, you are acting as if you are something else or someone else; you are acting on behalf of an actual identity. Now, again, with humans, you're acting as a human. What this looks like is: I'm going to go in and pop or reset someone's authentication, including the administrator's, and reset their password on a WordPress site, because of a specific paid theme that had this vulnerability. This is from last week, by the way; this came out on BleepingComputer on the 20th. That's one kind of

identity breach. You have breached the person's identity, and now that's an identity attack, because you were acting as if you were the administrator. This is more common, though: things where people get an API key into a system and then figure out how to move around as the administrator, or as that identity. They're behaving as if they're the identity, but they're not doing what the identity is supposed to be doing. They're hacking the identity. Literally, it's an identity-based attack. So, why am I talking about non-humans when I'm talking about identity? Because in 2022, CyberArk also put out a study that said, for the enterprises they talked to, it's 45 to 1. That's what that is: 45

machine identities to every one human identity in the system. The guess now is closer to 100. Their number in the last report was 89. I think that's undercounting some stuff, and I think they talked to only a certain size of company, but that was also last year's data. We now live in the world of agentic AI. Agentic AI means, oh, it's building itself now. We're deploying APIs that human beings never write, that human beings never read. And it's happening right now. It's terrifying, because we haven't locked down this problem of how we actually guard against identity breaches with machines yet. Now, I'm not one of those people to come

up here and say a bunch of scary stuff, say there's no solution for this, and just worry. No, the whole rest of this talk is: what do we actually do about it? What do you actually talk to your developers about? How do we fix this together? Because we can't fix this from security alone. Operations can't fix this. Your exec team can't fix this. Your IAM team can't fix this. Together, you as an organization can fix this. But it's going to take the right people knowing the right things, using the right tools, following the right processes, having the right conversations, and learning the right levels of trust. OWASP, this year. This is the latest of

the top 10s: the OWASP Top 10 Non-Human Identity Risks. We're going to go through these, but not in the order you think, because I think it's three areas of concern, not 10. It comes down to these three: ownership, long-lived secrets, and technical complexity. I really debated which order to talk about these in, because I have a solution for one of them. The others I don't, but I have ideas. If I had solutions for those, I would be a billionaire and speaking at a different event. Who owns NHIs? Ownership is the first hardest problem of all of this. Seriously, who in your organization owns Active Directory? Yeah, go in your IT department and just

say, "Hey, just out of curiosity, who coordinates and consolidates our Active Directory stuff in Entra?" and they'll just stare at you. They might have an answer. It might be a single person. Who knows? Now, who owns the risk of that same thing for NHIs? The people that actually go in front of Congress and judges are the CISOs and security folks. Like SolarWinds: the CISO did not go to jail. Well, neither did anybody else, but that's because it got stayed at the last minute. But it was the CISO. Why isn't it the IAM person? Because there should be a person who owns identity and access management, because hopefully you're not doing all your own hiring per department, not talking

to each other; someone has to onboard people and give them rights and permissions to machines and services, right? Well, that works for humans, because we've been doing that literally since the beginning of computers. But who does that for the machines? IANS, the organization that does research into executives, says that the threshold before you have a 55% chance or higher that this person exists at an executive level, or reports directly to an executive level, is the five-billion-dollar revenue mark. So if your company makes less than $5 billion in revenue, this person probably doesn't exist in a way that's meaningful enough for the conversation where the CISO turns to them and says,

"What are we doing about this?" And then that person is organized very differently depending on who you talk to. Ford and GM do it very differently. Same role, same exact idea. But if we don't get everybody on the same page, everyone with the same buy-in that we have to do this, it's just going to be another case of screaming in the wilderness, and we're going to be in WAF rules from 2002 again. I think ownership comes down to this. I know it's hard to read from this distance; this is about the smallest font I'm comfortable using on a slide, and now it shows. This actually comes from Dear Abby. I love

Dear Abby. I read the Sun-Times cover to cover every day, and Dear Abby is one of my favorite sections. This is from years ago: golden rules for living. If you open it, close it. If you turn it on, turn it off. If you unlock it, lock it up. If you break it, admit it. If you can't fix it, call in someone who can. If you borrow it, return it. If you value it, take care of it. If you make a mess, clean it up. If you move it, put it back. If it belongs to someone else, get permission to use it. If you don't know how to operate it, leave it

alone. That one doesn't apply to you guys. And if it's none of your business, don't ask questions. I also don't agree with that one, but these are generally good rules. So, what do you actually tell your developers? I mislabeled this talk, because I thought it would trick people into expecting, "Here's the magic words you tell a developer, and they just magically get better at security." And those magic words do exist, but they all come in the form of questions. This is what to ask your developer so you can get on the same page with them and help do this together. Because if you don't do this together, you can't do it. Just give up now. Just

might as well go bag groceries. Ownership. This one is improper offboarding: you have an offboarding policy. At some point, you will not show up to work anymore, and your access rights will get cut off. I hope you have an offboarding policy; it's generally a good thing to have, and almost everybody has this. What about your non-human identities? You created a brand new API. Awesome. When do we turn it off? No one's asking them this. This is not a conversation they are commonly having. Do you know how many zombie APIs exist out there? I've heard plenty of offensive security talks about going in and finding old API endpoints

that just sit there. "Yeah, it's still active. I can still get into the root of this thing." No one thought to deescalate. This still runs as root, this Kubernetes cluster from a year ago. No one thinks about it. It's just still running, and the API is still up. Why didn't it go out of service? Oh, because there's no offboarding plan. Hey developer, what are we doing about that? What's our plan there? When do we sunset this thing? Overprivileged NHIs. This one bothers me so much, because this is so hard, and it's so pompous and arrogant to say, "Well, why did you give it so many permissions?" Again, they're getting

yelled at: "Do X by Y date," not "Make sure you did it right." Who has ever actually gone through the entire permissions list for GitHub apps? I gave up halfway through, because it is page after page after page. That's how we ended up with this; we'll get to this report later. I'm kind of jumping ahead of myself, but this is where it fits in the talk, in my opinion. Of the keys we found laying around in public on GitHub, 99% had full access or read-only access. Wait a minute, that's not okay. Read-only access is good, but full access was 58%. That's the number that scared me. 96% of

GitHub tokens had write access. GitLab people are generally safer with GitLab; I don't know why, but they are. 95% of GitLab tokens weren't locked down to a single repo; they were full org. You could just get in. And that's just GitLab. IAM on AWS, exact same boat. It's in the report; I'll throw the link up later and talk about it in a different section. But this is hard. Getting the right permissions for the API key is just hard. How do we make that better? We'll get to that, I promise. Next, NHI reuse. This is very tied to the next one, human use of NHIs.

But reuse: can anything else ever use this thing? Why is that? Why is this not a one-to-one ratio, like everything else should be, right? If this thing has to authenticate, that means it has a password. That means something else is reusing the password. This is the shared service account problem all over again. Should a human ever touch this thing? I could argue that these go later in the talk, when we talk about the complexity of the systems themselves and technical complexity, but these are the questions up front you need to be asking. If the answer is yes, it's like, why is that? Is anyone having this conversation with your developers at all? Again, we can't do this by ourselves. We can't

just say, "Here's the new policy. Here's the OWASP Top 10. Good luck." I know the developers I have met in my travels; I talk at a lot of different tech conferences. I spoke at DevNexus this year, the premier Java community conference. I spoke at php[tek] last week. And CodeMash, out in Sandusky, Ohio. I've asked a lot of developers, "Hey, what are you feeling about OWASP?" "Oh, yeah. I got told about the top 10 thing one time, in a training like two

years ago." They don't know. You can't just throw it over the fence at them. We need to have this conversation: wait, what are we doing about ownership of this stuff? What are we doing about long-lived secrets? Long-lived secrets: well, that's the heart of my talk. I talk about this stuff a lot, because I think about this stuff way too much. But long-lived secrets are at the heart of so much of our problem that they're their own entry, NHI7, on this list. What is the shortest amount of time we can allow access before it's going to be a penalty in your system to reauthenticate?
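That question has real numbers behind it. Here is the back-of-envelope version of the cost, using the talk's own illustrative figures (a billion transactions a month, about 3 ms of extra auth overhead per call); the exact values are examples, not measurements from any specific system:

```python
# Rough cost of forcing a reauthentication round-trip on every call.
calls_per_month = 1_000_000_000   # "a billion transactions over a month"
auth_overhead_s = 0.003           # ~3 milliseconds per auth call

total_s = calls_per_month * auth_overhead_s
days = total_s / 86_400           # seconds in a day
print(f"{days:.1f} days of cumulative machine time per month")  # ≈ 34.7 days
```

Which is why "just make every token expire instantly" is not a free answer: the TTL you pick is a negotiated trade-off between exposure window and compute cost.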

Larry Garfield, one of the authors of the PHP language itself, challenged me on stage last week when I was talking about ephemeral secrets and some stuff I'll talk about later. I'm getting ahead of myself, but: what's the shortest amount of time, in milliseconds, that's acceptable for the auth call to happen in this system? Is that something you've ever stopped to think about? Your developers do. If they have to do a billion transactions over a month, 3 milliseconds adds up quick. We're talking days of machine time at this point. So what's the shortest amount? What is acceptable? Because we can't completely eliminate the risk. Somebody did an exploit earlier this

year: they reused a token on a GitHub Actions runner. 1.3 seconds. They figured out they had a 1.3-second window to grab it from memory, reuse it, and escalate privilege. And they pulled it off. That seems like a reasonable amount of time to me, 1.3 seconds, to let something live. You can't avoid every single risk; that's my point. But what's acceptable? Because this is a question for you as much as it is for them. The heart of this whole talk, though, is right here: if you see a plain text credential, something is not right. There is one and only one actual exception I know to this, and that is upon the first creation of the token, when

you copied it into a vault system, and whatever that stupid system was threw it out in plain text onto the clipboard, and you accidentally pasted it somewhere. Okay, I'm not going to be mad at you about that. That is a very specific thing; that's why the asterisk is there. Other than that, if you see a plain text credential, something's gone wrong. Versus ChatGPT telling you specifically, "Put the plain text credential right here," because that's the example I just gave you. And all the vibe coders have no idea that you're not supposed to do that. Well, some of them do; they just don't care. So, what are they supposed to do instead,

if their plain text credential is not supposed to be there? Well, there are a lot of things you could do. You could go all the way to PKI for everything and digital certs, which I'll talk about a little bit later. Maybe tie it into Kerberos. OAuth 2.0, something. That'd be fun. Your cloud providers have something here with IAM roles. Secrets are the most common way we do it, because it's the easiest; it's the way we've been doing it. I have a whole other talk about how we got there. We've treated machines like humans for so long, we forgot they're not. And they shouldn't have passwords. But this is the actual problem. I use

the terms secret, credential, API key, and everything else you see on this list interchangeably, because I give talks like this 30, 40 times a year, plus webinars, and I live in this world and breathe it. So "secret," to me, is the all-encompassing term. Apologies, because I do know they're separate things. But again, this is the exact problem we're trying to solve: your access token, but that's not a call to a function. That's not a call to a variable. That's not a call to a vault. That's pasted in. Again, thank you, ChatGPT, from a week ago. We know this is a growing problem, not a shrinking problem, at my company, because we put out this report every year. You

can read, completely for free, the State of Secrets Sprawl report 2025. And this is for a prize, a game I built called Spot the Secrets. We're going to play. If you actually know this number, do not raise your hand. But if you want to take a guess: how many hard-coded plain text secrets do you think we found out on GitHub, just in public, just newly added in the year 2024, across the 1.4 billion commits that happened? I need some hands and some guesses. If you know this number, you can't say it. >> 250,000. >> 250,000. >> You said it was more than a billion

commits. >> Yeah, 1.4 billion commits. >> Half a million. >> Half a million. Anybody? >> I'll say like 2 million. >> 2 million. >> I meant half a billion. Sorry. >> Half a billion. Half a billion is a pretty high number. >> Over 10 billion. >> 10 million. Anybody else? It's 23.77 million. That's 4.6% of every repo that got touched last year — the math is right, but there are multiple commits against the same repo. So, talking just total repos, 69.96 million contained some kind of hard-coded credential. Two million was actually the number from the 2020 report we published, so good job there. If you said the highest number, I'll trust you; come up

and get your prize later. I've got a couple of them, so I can give them away; it's fine. That's why I brought them. Here's the one that hurts my feelings the most, though, because we send out an email to every single committer that commits in public. Every one of them. It's an automated system; we don't look at them. We just say, "Hey, we detected a secret, committer ID, committer email. Here you go. Here's exactly where we saw it. You should probably do something about it. By the way, here's a link to our product, and here's a page that explains this is our completely free service we provide for you. Don't do it again,

please." We try to validate every secret we find. Of the ones we found in 2022, we took a sample of the ones that tested as valid, meaning they worked: they returned a 200, or at least got us into the system with non-intrusive calls. 11,000 of these we tested from 2022. In January, 70% of them still came back as valid. Again, hurts my feelings. We email people and say, "Hey, please don't do this," and they're still out there. There are some Fortune 100s on that list. I'm not saying everybody should go and start dorking GitHub for pentesting money or bug bounty money, but it's there. 15% of all commit authors do

this. This isn't just a new-person problem. We can't just say this is vibe coders; it's not. This is everybody, because it's hard, and developers are going to hurry. We have a working theory at GitGuardian that the vast majority of these are people that were in a hurry, that were debugging something and forgot what they had changed, based on the usage of it. Some of it is just flat-out dumb: we did this the dumb way. I won't even tell you that story. Somebody that works in security, we found their repos at a very large company last week, just doing some OSINT stuff. All of his passwords

for his actual Gmail were out on GitHub, and it's like, how does this guy have a job? But he does. Now here's reality. We asked thousands of people to fill in our survey; over a thousand did. And 75% of respondents said, "We have strong confidence in our secrets management capabilities." And then the same people said, "Yeah, it takes us about 27 days to remediate a leaked secret once we know about it." It's like, yeah, that's pretty strong. That's pretty great. Only 44% of developers that responded to the survey and marked "Yeah, I'm also a developer" reported following

security best practices, because, again, security practice slows you down. This last one ties into something I'll talk about later, but keep it in your brain: they're reporting an average of six distinct secrets managers. That's not a free problem. If secrets managers were completely free to run, CyberArk wouldn't be a company, and HashiCorp Vault wouldn't be a thing. There's the community edition, not the free open source edition; it's free, just not open source anymore. I'm mad about that personally. BSL. Anyway, knowing that, this is why I lumped these problems in there. Vulnerable third-party NHIs: how are you connecting your third parties? A long-lived secret, guaranteed, because

that's the stumbling block. Everything I'm about to talk about next, this is the stumbling block. I almost put this slide later; I should have, in retrospect. And insecure authentication. If you're rolling your own authentication, developer: don't. That's what we should actually be telling them, but they're going to say, "Okay, how do I do it instead?" Well, what if we moved you to a better authentication mechanism for intersystem communications, and hopefully we can pick vendors who also do this? And this isn't the science fiction future. This is the stuff that gets me excited; I'm deep into protocols now, and I'm loving it, because this is the hope. This is the great

hope. SPIFFE's been around a while. I doubt anybody here actually knows what SPIFFE is. Anybody? Anybody know what SPIFFE is? You do? No? He's just shaking his head. Sorry, I thought the one person here was going to lead us. SPIFFE is the Secure Production Identity Framework For Everyone, and SPIRE is the runtime you can actually run right now. If you're a Kubernetes fan person, this is how you should be approaching the problem, and it has been for the last seven years, because this is basically exactly how Google has done it internally on Google Cloud for the last decade. Instead of giving a long-lived secret, I'm

going to say: at your time of birth, Kubernetes pod, I know enough information that you are at this IP, you are in this exact environment, you were born from this digest, you were born with these SLSA, supply-chain levels for software artifacts, attestations. You're born with a certain level of SLSA attestation. So I guarantee you that when you're born, I can give you, through an API, a certificate that says you're you, a driver's license for lack of a better term. It says, here's your birth certificate and driver's license. Hold on to this. We're going to reissue one of these every 60 minutes or every 24

hours. And what you can do is go through a trusted gateway and explain: hey, I am me, and I'm supposed to be here right now, and I need to pick up some stuff from that database over there. And the gateway says: yep, you are you. I trust that implicitly. Now go over and talk to the database. And the database says: yep, you're you, but I don't know what to do right now. So it takes your ID and runs to the certificate authority and says: is this real? What can they actually do in here? And it comes back with a workload certificate, for lack of a

better term. There's a proper term; my mind's going blank on it right now. But you get a new key that lets you do that thing for an ephemeral amount of time. That's a JWT. It's an X.509 cert or a JWT that has a time to live set in seconds or minutes, maybe hours. I wouldn't recommend that, but you can do it that way. That's the trade-off in Larry Garfield's question of how often you reauthenticate. But now we're not just authenticating, we're authorizing. We're starting to separate out that control plane: just because you're in doesn't mean you can do it. Just because you're in, you're you. But now what can you do?
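The time-to-live idea can be sketched with nothing but the standard library. This toy token is not a real JWT or a SPIFFE-issued credential — the signing key, claim names, and format here are invented for illustration — but it shows why a stolen credential with a short expiry window is worth so much less to an attacker:

```python
import base64
import hashlib
import hmac
import json
import time

# Placeholder signing key; in a real system the CA holds this, not the workload.
KEY = b"demo-signing-key"

def issue(subject: str, ttl_s: int) -> str:
    """Mint a signed token that carries its own expiry time."""
    claims = {"sub": subject, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str) -> dict:
    """Reject tampered or expired tokens; return the claims otherwise."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        raise ValueError("token expired")
    return claims
```

A token issued with a 60-second TTL verifies now and is rejected a minute later; the exposure window is the TTL, not "until someone remembers to rotate it."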

Let's go back to the certificate authority and ask: what can this entity do in this circumstance? Oh, and you can also federate the certificate authorities. So if one gets corrupted, it gets kicked out, and all of its client holders are kicked out, and then they start over. There's a lot of security built into this thing, built by some really smart people. There's a whole book; if you read no other books based on this talk, this is the one to read. It's 198 pages and you will love it. It's the SPIFFE book, called Solving the Bottom Turtle, because we'll talk about solving the bottom

turtle here in a minute. If you're like, I don't want to read a book — the best talk I've seen about it was at CloudNativeSecurityCon last year. Mattias is one of the people who worked on the book — I don't know if his name is on it as an author, but he's who taught me SPIFFE — and that talk uses cute Disney-like characters created by ChatGPT that are definitely not Disney characters, so Disney cannot sue us. So you're like, okay, but that's a weird cloud-native thing — who's actually going to do that? Because we haven't done it yet. It's been the hot new hotness for seven

years now. Why is that changing? Well, because WIMSE exists now — and this has existed for a couple of years. Does anybody not know what the IETF is? The Internet Engineering Task Force. Who here knows what HTTP is? That's how we found out about this event and how all of our internet traffic works. These are the people who write the protocols — the requests for comments, the RFCs. They're the people who write the RFCs. There's a working group called WIMSE, the Workload Identity in Multi System Environments group, and they're on draft four. We're almost — I say we, they're almost — finalizing draft

four. There's a new idea for a specific JWT profile that just came up on the mailing list. It's a fun mailing list — if you love nerd stuff, it's great, with almost daily updates. And everything I just described about SPIRE and SPIFFE, they're building into the standard. It's coming. It's not just coming — it's here. It's been here. So instead of telling developers, "Hey, go figure out this weird, crazy permission set on your own, and store the secret in a way that makes us happy in security even though it's going to make your life a nightmare in the short run," we can say: just namespace and allowlist the services, and the CA will just

magically do it from there. Oh, so this service should get to these databases and these resources? Cool. The CA will say, "All right, here's your authority that says you are you." And when you come back and someone asks, "Hey, can they do this?" I'll check the list and say, "Yep, let them in." This slide is really hard to read, but that's basically what I just described: an internal CA service. How do I know this is actually winning? Because in May of this year, Kubernetes v1.33 introduced service account tokens for image pulls. Now Kubernetes generates short-lived, automatically rotated tokens for service accounts, if the credential provider it communicates with has opted into

receiving a service account token for image pulls. For a Docker container to get its image, you have to authenticate to whatever service serves the image. The thing I just described — that eyes-glazed-over detail of why do I care about this — that's how Kubernetes works now, and that's how you're supposed to be using it. I guarantee your developers aren't using it that way, because this is from May 7th. We should be the ones enabling and empowering them: "Hey, there's a much better way to do this, and guess what, it's so much faster and so much easier." But until we do, they're going to keep doing this at scale, and it's going to get worse.
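A quick way to see whether your cluster's tokens look like this is to decode one's claims and check the lifetime. A rough audit sketch — the mount path in the comment is the conventional Kubernetes projection path, and the one-hour threshold in `flag_long_lived` is an assumed policy, not anything Kubernetes defines:

```python
import base64, json

def token_ttl_seconds(token: str) -> int:
    """Return exp - iat from a JWT's claims without verifying the signature.

    Fine for an audit script that flags long-lived tokens; actual validation
    is the API server's / gateway's job."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - claims["iat"]

def flag_long_lived(token: str, max_ttl: int = 3600) -> bool:
    """True when a token outlives what we'd expect of an auto-rotated credential."""
    return token_ttl_seconds(token) > max_ttl

# In-cluster you'd read the projected token, e.g.:
#   token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read().strip()
# Here, a fabricated 10-minute token stands in for it:
claims = {"iat": 0, "exp": 600, "aud": ["registry.example"]}
demo = "h." + base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode() + ".sig"
print(flag_long_lived(demo))  # False: a 10-minute token passes the audit
```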

The other path in the short term — because you're like, "Well, okay, that's great and all, but we don't live on the bleeding edge. We've still got a Windows XP box somewhere in the house." I met a poor soul who does operational technology security for a pretty large manufacturing company back at RSA. He said, "I can't buy what you guys have got. Not my thing." I said, "I'm not trying to sell you anything, I'm just having a conversation — but how many XP boxes have you got?" He said, "I've got these two machines I just can't get rid of." This is very real. So in that world, what can you actually

do? Well, what if we pretended — the same way you're not your password — that every password uniquely goes to one machine identity? Hopefully an identity doesn't have 15 API keys it can use to authenticate to a system. We can pretend this is true, and if we start pretending it's true, there's an awesome world we can open up. Or we can just find all the secrets, put them in the right spots, give developers better ways to access them, and then rotate them at scale. This comes from a different talk I give that's literally about what I just said out loud on this slide: secrets security end to end. I'm not going to give that

whole talk, but here's the two- or three-minute version. If you don't know about a thing, you can't protect the thing. That's the basis of threat modeling — that's 101. So know where your secrets pop up, because we've known this for years: it's in your code, it's in your config files, in your Jira, your Slack, your Confluence. It's in secrets.txt. It's in .env files. It's everywhere — all the things that happen around your code. We know this because I live in this world. There are tons of tools out there to help you do this. Again, tooling is an important part of the system. I have a favorite on this list,

but you all know how secret detection generally works. Can you textually tell whether this grants access programmatically in the system? It does? Cool, that's a secret. Now let's do the math to figure out what it actually is and what it goes to. Put them in the right place — that's secret providers. If you are all in on AWS, congratulations, you won the lottery, because they have an awesome system called AWS Secrets Manager. Not that Azure Key Vault or Google Secret Manager aren't great — I love them too — but AWS introduced the idea of secrets everywhere a couple of years ago, and now you can use their secrets manager across platforms. But if you're like, well, I

have a lot of on-prem, I've got a lot of everything — then you're just going to need an enterprise secrets manager, a Vault or an Akeyless or a CyberArk, to have a centralized way to reach all of your secrets programmatically. What I mean by programmatically is literally this: you give it a path. The secret lives in a vault encrypted at rest, and at runtime it gets pulled into memory when needed, across the wire. It should never be seen by a human being. It shouldn't be interceptable. If someone pops that box and actually dumps memory from that Kubernetes cluster — okay, but at that point you have bigger problems than that one key. Developers are kind of scared to death

of this. So you, again, as security people can tell them: hey, this is just another API call. All you're doing is making a namespaced API call into a service we're going to show you how to use. We'll give you a path, and instead of that hardcoded credential, you just swap in the path. The best talk I've seen about how to rearchitect your application so you don't just hold the secret in memory forever was one on blue-green deployments for credentials — zero-downtime credentials — from last year's BSides Las Vegas. Kenton's an awesome dude. I could give a whole other talk about long-lived secrets, and I'm happy to

talk to you about getting off long-lived secrets. But once we get to a world where we can make an API call for the authentication process, why does it need to be a long-lived secret? It doesn't. At that point we can make it be anything else. But once we get the developer on board — hey, we can just make a call to a thing instead of going down this stupid path of hardcoding long-lived secrets — how do we actually get them to do it? Because every security tool we throw at a developer — and this is why every developer I know hates shift left — has meant: here's more work for

you. Here's more stuff on your plate. Here's more stuff in your toolbox. Here's more alerts you're never going to look at. Here's more blockers. At the same time, the people who pay them say, "Go faster. I needed this yesterday, and we're eliminating 30% of your department because of AI. Go faster." I'm always going to be a developer advocate. It's in my heart, it's in my soul; no matter what my job is, I'm always going to advocate for the people who build code. And if we can meet them where they are, in their IDEs and their workflows, we're going to be in better shape. Git is the center of our universe — not just security's, but everyone's

universe is Git. How many things today referred to GitHub? How many talks have you seen that referenced GitHub? All of them. That's where we store our code, that's how we communicate, that's how these systems work, that's how DevOps works. So what if you gave your developer a tool that said: hey, go crazy, paste your hardcoded credential in, and when you go to commit, we're going to automagically stop that commit. But instead of just stopping you, let's go see if the secret is in the vault. If it's in the vault, let's return the path to you and just swap that in — maybe using sed, maybe using awk — swap it in

that line — those are command-line tools that let you manipulate text — or we're going to swap that line for you, then come back and ask, "Hey, does this look good to you?" and let you commit. This isn't science fiction; this works. The "is it stored in the vault" part is a little shaky right now, because of the way paths are derived in these vault systems — they're very long and very specific to what they do. But the part where, if it's in the vault, you return that path instead? Yeah, we've already built that. That works. We've productized that at my company, and it's not specific to my company. Like, we

built an entire reference architecture — I'll talk about it in a second. That's great if you can get devs to use it. But good luck getting a dev to even put in a git hook, if they even know what git hooks are. Hopefully they all know what git hooks are. But if we can't shift that far left, what if we shifted to the pull request? Because there, security should have some say. If a PR is opened that has a hardcoded credential, go look in the vault. See if it's in the vault. If it is, come back with a call into the vault. Upon accepting that PR with the call to the proper path in the

vault, rotate the secret behind the scenes, and no one knows what it is anymore. Again, I've seen that in action. It's not publicly available right now, but we're building it. A reference architecture — we called it Brimstone. CyberArk never called it Brimstone; they called it the "has my secret leaked" connector. A dumb miscommunication somewhere — not their fault, not our fault. But it's exactly everything I just described. If you get the slides, the speaker notes have the link, and you can see exactly how this was built with us and CyberArk. Why is it a submarine? That's a whole other talk. But you move from the

front to the back and you seal off compartments. Before 1972, submarines all worked in a way where you could seal off any compartment from another and stop the ship from sinking. They all work differently now — it's all pressurized; somebody explained it to me once. But in old submarines you'd start in the front, work your way back, and compartmentalize as you went. The point is: if you don't find all your secrets, you can't do the other pieces. Continual secret scanning — that makes sense. And if you give developers better tools for this stuff, rotation is a trivial trick. That's all it is: make the new one, test it, swap it in for the old one,

test again, make sure you didn't break anything, clean up after. I didn't write that list myself — it comes directly from the hundreds of actual things you can deploy from AWS that let you rotate at scale, all the rotation Lambdas. These already exist. You don't have to write them; you might need to modify them slightly, or want to do other things with them, but there it is. That's where that came from. Again, if you're all in on AWS, congratulations, you win the lottery on this one. Everything else does it too. Other platforms — Alibaba, I think, does it. I don't know; I've never actually looked at the docs.
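That make/test/swap/test/clean-up loop is simple enough to sketch. Here is an in-memory toy with a hypothetical `SecretStore` — the AWS rotation Lambdas mentioned above run the same steps against Secrets Manager, just with real credentials and real target systems:

```python
class SecretStore:
    """Toy stand-in for a secrets manager holding one current credential."""
    def __init__(self, value: str):
        self.current = value
        self.retired: list[str] = []

def rotate(store: SecretStore, mint_new, works) -> bool:
    """Make a new credential, test it, swap it in, re-test, clean up.

    mint_new() creates a candidate credential; works(value) verifies a
    credential against the target system. Returns True on success; on any
    failure the old credential stays in place."""
    candidate = mint_new()                            # 1. make the new one
    if not works(candidate):                          # 2. test it before touching prod
        return False
    old, store.current = store.current, candidate     # 3. swap it in
    if not works(store.current):                      # 4. test again after the swap
        store.current = old                           #    roll back, nothing broken
        return False
    store.retired.append(old)                         # 5. clean up / revoke the old one
    return True

store = SecretStore("old-key")
ok = rotate(store, mint_new=lambda: "new-key", works=lambda v: True)
# ok is True, store.current is "new-key", and "old-key" is queued for revocation
```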
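And the commit-time swap described earlier — detect the hard-coded credential, check whether the vault already knows it, and hand back a path reference instead — looks roughly like this. Everything here is a hypothetical stand-in (the `KNOWN_SECRETS` reverse index, the `vault:` reference syntax), not any vendor's product; in a real system the value-to-path lookup happens inside the secrets platform:

```python
import re

# Hypothetical reverse index from secret value -> vault path. The key below
# is AWS's documented example access key ID, not a real credential.
KNOWN_SECRETS = {
    "AKIAIOSFODNN7EXAMPLE": "secret/data/payments/aws-access-key",
}

# Deliberately narrow detector for the demo: AWS-style access key IDs.
SECRET_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def rewrite_line(line: str) -> tuple[str, bool]:
    """Replace any known hard-coded credential with a vault path reference.

    Returns (possibly rewritten line, whether the commit should be blocked
    because an unknown secret was found)."""
    block = False
    def swap(match: re.Match) -> str:
        nonlocal block
        path = KNOWN_SECRETS.get(match.group(0))
        if path is None:
            block = True            # unknown secret: stop the commit outright
            return match.group(0)
        return f"vault:{path}"      # known secret: offer the path instead
    return SECRET_PATTERN.sub(swap, line), block

line, blocked = rewrite_line('aws_key = "AKIAIOSFODNN7EXAMPLE"')
# line is now: aws_key = "vault:secret/data/payments/aws-access-key"
```

A pre-commit hook would run this over the staged diff and show the rewritten line for approval; the PR-level version runs the same check server-side and rotates the secret once the PR is accepted.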

Multicloud? Well, now we're starting to think: can we call that service and get a new key through an API call? It's 2025 — the answer had better be yes, or find another service. And if you're using a platform like CyberArk — I use CyberArk a lot, though I'm still kind of mad at HashiCorp for the BSL thing — if you're using any of those platforms, there's a way to do it within the platform itself. If you're self-hosted, go nuts: script all you want, cron-job it up, but call into those systems and get a new secret. Auto-rotation isn't hard, but we treat it like it's this golden thing

that's going to save us. It's a stepping stone toward when we get to WIMSE and do proper workload identity, with systems like WIMSE and SPIRE at scale. It's a stepping stone to get there — but we can't get there without doing this stuff first, because once we're at the point where a machine calls in and it auto-rotates, it doesn't matter what it's rotating or what it's calling. It could just keep reissuing that same workload identity. Almost done. The other bucket I almost called "other," but I realized that's not the problem — the problem is technical complexity. Insecure authentication I almost lumped in with technical complexity, but it isn't. Insecure auth is a

different problem. Technical complexity is because we are building the most insanely complex things human beings have ever abstracted in our brains, ever. And what we have to ask is: hey, can we do this in a non-insecure way? That sounds overly simplistic, I know. At the same time we say, hey, environment isolation — which means you're reusing that same NHI across multiple environments. The easiest example is production and staging both hitting the same database with the same API key. That's an environment isolation policy breach. It's not that much different from insecure cloud deployment configuration. According to Verizon's DBIR, the number one way breaches start: 22% started with a credential- or

identity-related attack, and 20% were misconfiguration, as of this past one. Anyone here not read the DBIR? Go read the DBIR. It's great — it's the best thing Verizon writes, and it's funny. Footnote 13 references footnote 11 from the previous year. The answer to both, I think, is a mixture of technology. The first is CSPMs. Hopefully I don't have to explain what a CSPM is. Go talk to Wiz if they're out there — go talk to those people, they'll tell you what a CSPM is. That's not my job; I don't make a CSPM. I think they're cool, I love them. But that's going to help with: hey,

did we do this right? Did we misconfigure anything along the way? It's also going to tell you that other piece: policy violations. Remember earlier when we asked, hey, should anything else be able to use this NHI? Should a human ever be able to use this? That's why you asked — because it goes in your policy. No? Then let's watch for that. It's not on here, but Falco is my favorite open source tool that can start looking for those anomalies — like, why is an IP address outside our ranges ever calling this API? That should send up some red flags. But if we're going to deal with

identity, specifically identity, then we've got to step back: how do we manage human identities, and should we be managing machine identities with the same level of rigor and fervor and all those other words? Could you just plug your NHIs into Entra or into Okta? Okta's working on it; they would really love you to, but they're not there yet. This is what I promised in the abstract that we would get to. I said we'd do it earlier, but I'm going to do it here at the end. Non-human identity has emerged in the last two years as the new hotness in what identity and access management cares about. It's the missing component in the IGA stack, the

identity governance and administration stack. And the Non-Human Identity Management Group is a nonprofit — one guy, he's awesome, with a team in the UK — who got together and said, we're going to go talk to every vendor that has NHI on their website, figure out what they do, and make you a giant map. My company's on there. I'm happy to talk to you in great detail about our version of this and why we think we're right. But we're all trying to answer the same questions. Who owns this? How do we deal with the fact that the big issue isn't the I, it's the A — what it can access, what other people can access? And ultimately, what do we do about this

problem set that's so complex that none of our existing toolchains actually fix it? So go ask your developers: who owns the NHIs you're building? Who do you think owns them? What's your offboarding policy? All the other stuff I talked about. I'm not going to give you my entire talk right now, but this isn't the future. This might be right now, because of agentic AI and how fast it's being built and how fast it's being deployed. And if we don't get a grip on it —

we can get a grip on it, though. We've got to have the right processes. You're in this room — you stayed in this room, and thank you very much for being here at my talk. I am trying to raise awareness about this as hard as I can. And I know I sound kind of nuts up here — WIMSE and SPIRE and SPIFFE, throwing out acronyms you might not be comfortable with, talking about how developers have a hard life — but I'm trying to raise awareness, because if we put the right processes in place, have the right conversations, and get people the right tools at scale, we can actually start fixing security and stop proving how

smart we are. If you see a secret in plain text, it shouldn't be in plain text. But that also means we have a long-lived secret, and we didn't authenticate in a way that's future-looking or scalable. We need to work on this together. Our models failed — you haven't failed as a developer; our model failed, together. Let's fix this. I'm Dwayne. I live in Chicago. I help people figure stuff out. Check out The Security Repo podcast — I didn't even plug that earlier. And quickly: I work at GitGuardian. Thank you very much for coming to the talk. [Applause]