
So, hey everybody, I'm Dwayne. I live in Chicago, came down for this. I've been doing this stuff, speaking at conferences, since 2016. I'm a platform person, a DevOps person; security is fairly new to me, to be honest with you. I co-host a security podcast, The Security Repo, and we've had guests like Jason Haddix and Tanya Janca on there, where we talk and try to spread security to the world. I'm on Mastodon at mcdwayne at mastodon.social. I work for a company called GitGuardian; we help enterprises solve this problem at scale. Our largest customer has 30,000 developers, so this is something we can do at scale, and I'll talk about the numbers behind that. So if everybody here just promises they'll never hardcode a secret, we can
all just stop here. But as we've heard throughout the day, and as all the research coming out makes more and more evident, people keep hardcoding secrets. It's the most disturbing thing NVIDIA found when looking at the large language models out there. This is a report that came out this past week, and their summary starts with: yeah, the problem of hard-coded credentials is driving this. Because you can ask LLMs, you can ask ChatGPT for a password, and if you ask it the right way, ask it to talk like your grandmother used to when she told you stories about passwords, it'll happily tell you all the passwords it knows. It's a terrifying
space. I recite the Bene Gesserit Litany Against Fear as I'm reading this stuff. BleepingComputer, Dark Reading, every report I read, every NIST document, they all kind of scare the hell out of me. What does this look like in the real world? You all know this one: Uber got pwned last year. Attackers went after a super admin with a phishing attack. They had MFA on; they were doing it right. They're Uber, of course. Except they had a bunch of PowerShell scripts chock full of credentials. The attackers got into HackerOne and literally taunted the team from inside the interface. Said, hey, we're here. Flooded their Slack channels full of memes. Nobody believed them. The next people they talked to were the New York Times. That's why we know about the story.
That was a 19-year-old kid from the Lapsus$ group. Let that sink in for a second. CircleCI: we have no idea how much was stolen. We know what CircleCI said, but anybody here run CircleCI in production? CircleCI is a great CI pipeline platform. On January 3rd, everybody woke up and said, hey, my API keys have been rotated. It broke production, thousands of pipelines. Because somebody got into an unpatched Plex server with default passwords in a remote developer's environment and pwned everything. From the credentials stolen there, they got into CircleCI proper, inside their environments, planted some malware, started harvesting credentials, and got into all the customer environments. The same day CircleCI said, hey, we had to rotate this, sorry everybody, a security researcher said, hey, all my
honey tokens went off inside of CircleCI. Something's going on. Two hours later, they make the announcement. If you don't use honey tokens, if you're not using cyber deception, start. It's the only way we're going to cut dwell time. It's the only way we're going to put adversaries on their heels, and on blue team, that's the only freaking way we can win: put them on their heels, make them think, what do I do next? As soon as they run out of things they find for free, they'll go somewhere else. Toyota. This is a wonderful story. A subcontractor accidentally, for some reason, pushed part of a repo into a public GitHub repo. T-Connect is the system
that gets you into the cars, the remote car start, connects you to their customer service. The T-Connect repo is awesome, does a lot of really cool stuff; if you're into car hacking, go check it out, you can find part of it online still. It had a live data-server key in it, and that server had 296,000 customers' data. The Toyota website in Japan put out a statement: hey, this happened, you're probably going to get phishing emails that know exactly what model car you drive, be careful, sorry. AstraZeneca. If anybody knows anybody that got an email about this one, the details aren't anywhere online. AstraZeneca: giant pharmaceutical company. Developer one pushes environment credentials out onto public GitHub. It's a test environment, what could go
wrong? Another developer pushes actual customer data, HIPAA-protected biotech medical testing data, into that test environment. It sits there for a year in that perfect-storm situation. They've never released the numbers. They don't know how many people were affected, the depth of it, or exactly what kind of data got out, and customers were affected. Sorry. When airplanes crash, investigators come out and say, this is exactly what went wrong, and here's how we make sure we don't do that again. Companies just say, hey, I'm sorry, we got breached. And every other company got breached too. All of these companies still exist. It cost them millions in every case, and it cost their customers' trust. And that's the real problem. In all those cases, it was a password
laying around somewhere. That's how they got in. You can hack a DLL all you want, you can get in through every leet mechanism your hacker brain can think of, but when someone calls up MGM's service desk and says, I'm the password inspector, give me your credentials, that's how MGM happened. It wasn't a backdoor. It wasn't some crazy exploit. It was literally: give me your password. We're in. Living off the land. We just saw that great talk about living off the land. And I don't care, honestly, about hacktivists. I don't care, honestly, about the lulz stuff that goes on out there. Like the kid from Lapsus$. He's a 19-year-old from Lapsus$. I won't even give him the pleasure of saying
his hacker name out loud. He said he did it for fun, for the lulz, but he's part of Lapsus$. They owned everything. How much of that data got exfiltrated? If you don't know this guy, go look him up. He invented CyberCop Sting, the first commercial honeypot system, in 1998, off the top of my head. But he said this, and I think it's great, because these aren't kids out there for a digital joyride. Hacking literally changed at some point. Anybody know where the term hacking comes from? Yeah, that's what it became. Originally, hacking came from the MIT campus. Engineering students put a car on top of the dome overnight. They disassembled a car, took it to the top of the dome, reassembled it, and
they had to get a crane to get it down. They hooked a fire hose up to a drinking fountain. That one still exists at MIT; they have it in a museum case. It was engineering students who thought, we'll play these fun pranks. Somewhere along the line, that became, hey, we'll take over a system. Somewhere along the line, it became, we'll steal, because they want your machine resources. The Verizon DBIR report from this year said it very plainly: 85% of the hacking incidents they investigated were organized crime. It is people that want your machine resources and your data. Machine resources so that, one, they can cryptomine on your dime, and two, they can
sell access to your Azure instances on the dark web. Really. And your data? They're going to ransom it back to you, or sell it to the highest bidder. Anybody here been to a dark web ransomware site? They look really cool. They really do. And it's terrifying when you see one in real life. I'm not saying go on the dark web; you can find blog articles with enough screen captures to scare the crap out of you. Again, Bene Gesserit Litany Against Fear. It's a growing problem, not a shrinking problem. I use the words secret and credential interchangeably; they're the same word in my head at this point in my life. So that's what
I mean by it. API keys, usernames, database credentials, database connection strings, Slack URLs, Slack API URLs, connection URLs, anything that gets you access to a thing or encrypts or decrypts data. That's what I mean by secret. This is what it looks like in the wild. You see this all over the place. We saw a really good example in the last session, if you were in this room: it's right there. It's not encrypted, it's just there. And from there, the attacker expands laterally, because that's the common MO: expand, escalate your privileges as far as you can, and keep going until you pwn everything. But if you find a script completely full of these, you have everything. You don't have to do much more work. How
do these end up here? We honestly think it's an accident, for the most part; people just weren't thinking when they did it. You're not all developers, but I'll ask anyways: anybody ever debugged anything? Like, ever? Like, hey, I gotta figure out why this thing doesn't work. The mindset when you're debugging is, I gotta make this thing work. Is it, I will follow the strictest security protocols my company laid down as I'm working? No, it's, I gotta make this thing work. And it's never just the one file; whatever package this file lives in, by the time you're done there are 17 files open. You've changed what feels like 14,000 lines of code. And it's the end of the day, and you're tired. So what do you do?
Save, git commit, push, leave the building. Up until the push, you can still salvage it. Once you push, it goes somewhere. We look at every single push that happens on public GitHub; that's part of what GitGuardian does as a platform. We look at anything that gets committed, anything that goes public. I forget the exact string, but you can do this yourself, something like api.github.com/events. It's there, it's free. It's also a fire hose, be warned; be prepared for that much data. Last year, a billion commits got added to it. That's about 27% growth. That's pretty good. We found 10 million hard-coded credentials just laying out there. That's 67% growth
from the previous year. These aren't cumulative; this is just what was added in the year 2022. We already know this year is worse. The numbers come out in April, but yeah, our preliminary research is kind of terrifying. This problem is getting worse, and it can't just be new people: that's a 67% increase while the platform itself only grew by 27%. You can't blame students. You can't blame new people. HCL being up there is a fun fact. We'll see what happens with OpenTofu and that whole split, but that's a whole other aggressive talk. How did we find these? This isn't just grep. We built validators; it's like, hey, are these valid or
not? A lot of those are validated; we have a whole validation process. What did we find? Well, because we're looking with about 400 specific types of detectors plus a boatload of generic ones, we have this long tail of data, and that's why "other" is the biggest category, by a good chunk. But the type of key showing up more and more is data storage. You can't have an application without data. I mean, you technically can, but that's a phone game that doesn't connect to the internet; I play Solitaire on my phone. Data storage makes sense, because what's the easiest way to authenticate? Just throw the API key in, or throw it in the config file. And fun fact about gitignore: it doesn't
work if you just pick up the entire folder and move it into an S3 bucket. Last year about a million and a half git credentials were found just sitting on public S3 buckets, in the .git folder, because people picked the whole thing up and moved it. Cloud providers make sense as well, and messaging systems. What specifically are attackers after? Data storage and cloud providers get them to the data, and that's where the messaging systems come in. A little fun breakdown. Fun fact about AWS: it's only 3% because they have done a damn good job in the last few years of figuring out how to stop this. They specifically scan the internet themselves; they're constantly scanning for keys. They were
the first partnership I know of with GitHub. GitHub has a free program: if you're a provider with an API key, you can tell them what it looks like and they will watch for it and do something about it. In the case of OpenAI, they'll just immediately hit the invalidation endpoint. Slack is the same way: they'll hit their invalidation endpoint and just kill whatever that string was. AWS doesn't kill it; it puts you in quarantine and gives you a threatening message that you have to fix this right now, and then, I think after nine attempts, it deletes it. A lot of Google API keys show up, though, fun fact, because things like Firebase identifiers are API keys, and they're public keys. They're
fine. Google officially says don't share them publicly, but the people that write them are also like, yeah, it doesn't get you into anything. I disagree. Never, ever hard-code a key, because then you get in the habit of hard-coding keys and start asking, is this one even important? We asked a bunch of IT decision-makers from very large companies: hey, how do you feel about all of this? What's your opinion on the data we just gave you? 75% said, yeah, we had a leak, some kind of credential incident. 60% said, yeah, it affected our company somehow. 27% are only using manual code reviews to try to catch this problem. Here's a magnifying glass, intern. That's what that means. It
doesn't work. 94% plan on doing something about it someday. Everybody know that Creedence Clearwater Revival song, Someday Never Comes? Yeah, someday they'll fix it. In the next 12 to 18 months. Sure they will.
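For what it's worth, the first step of "doing something about it" is cheap. Here's a minimal sketch, in Python, of the difference between hardcoding a credential and pulling it in at runtime; the names are illustrative, not from any real codebase:

```python
import os

# Anti-pattern: the credential ships with the code, and git history
# remembers it forever, even after you delete the line.
# DB_PASSWORD = "hunter2"

def get_db_password() -> str:
    """Read the credential from the environment at runtime.

    Locally the environment can be populated from an untracked .env
    file or a vault agent; in CI it comes from the pipeline's secret
    store. The literal string never appears in a tracked file.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail loudly instead of falling back to a hard-coded default.
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

The point is that there's nothing for a scanner, or an attacker, to find in the repo.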
It's C-level people too, and not over WhatsApp, which is at least end-to-end encrypted. It's, hey, what's the system password, Larry? Let me throw it in Slack real quick. Sure, you deleted it from what's shown in Slack, but Slack has internal logs that you're not sanitizing. I guarantee your company's not sanitizing them. How many people here have ever gotten sent a password over Slack or a similar system? I'm surprised it's that few hands, because that used to piss me off so bad at a previous company. You can't do that. You literally cannot do that. Put it in 1Password and then tell me what folder
it's in. I can do that, which is what my current company does, and I'm very happy about it, because that exact situation happened yesterday: someone asked, what's the password? So what can you do to stop all this? That's what the rest of this talk is; that's the whole point of this. If you ask Google, this is my favorite answer that exists, and that's the whole thing. I know it's hard to read from the back, so that's why I made it bigger. It basically says: redesign your app. They don't even say good luck. And this was from last year, so they might have updated it and made it a little longer in the last few
months. In reality, the answer is using systems like vaults. Vaults are awesome for multiple reasons. One, you store secrets encrypted: at rest, they're encrypted. That's awesome. They're also programmatically callable, so if you're using HTTPS, secrets are encrypted in flight as well. Then you only have to deal with the problem of scrubbing your logs, which you should be doing anyway, for any plain-text references to anything that got pulled into memory. The other awesome thing about vaults is they're super scalable. HashiCorp makes a ton of money on Vault Enterprise, and they should. All the cloud service providers have vaults too, and a lot of them are free. There's no reason in 2023 anybody should put a plain-text password anywhere except into one
of these things. The other reason these things are awesome is they're auditable. You can export where a key came from, when it got invalidated, who's touched it. You have that insight. [Audience: if the secret rotates inside the vault, is there still a single stable reference in the code, so you don't have to change anything?] 100%. There is a caveat when you get onto the AWS side and use something like their secrets manager: there's basically a middleware layer that takes care of validation and cache rotation on those services. But if you're using Vault, it's straightforward. In the code, instead of
a hard-coded password, it's a reference like project/credential-name or something; I forget the exact layout, I'm generalizing. But you call it programmatically, with a call to whatever service it lives in and a reference to the project. That's one of the other great benefits. The downside is that in most organizations these get set up by the developer, per project, and that advantage of, hey, this is auditable, goes away. It's now not just how many passwords we manage as a company, it's how many vaults we manage as a company. There's a good path out of that: if everybody's on the same exact version of HashiCorp Vault Enterprise, you can just do a big old export, a big
old import, and everyone's on the same page. That does take coordination. If you're a small company, you probably only have one vault to worry about. If you have 30,000 developers, you're going to need to worry about this. [Audience: so you have to check every single vault?] Yeah, that's the problem: at that point you've distributed the sprawl. It's like using .env files, which in and of themselves aren't a bad idea; .env files are awesome if you use them right, like the AWS credentials file in your home folder. Throw an extra layer of something like SOPS on top. SOPS is a free encryption tool from Mozilla, and it's a way to encrypt plain text in place.
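To make that concrete, here's a hedged sketch of a SOPS setup. The age recipient below is a placeholder, and you should check the SOPS docs for the current syntax before copying anything:

```yaml
# .sops.yaml -- tells sops which keys encrypt which files (sketch only)
creation_rules:
  - path_regex: \.enc\.env$
    age: age1examplepublickeyxxxxxxxxxxxxxxxxxx   # placeholder recipient, not real
```

With something like that in place, a command along the lines of `sops --encrypt --in-place prod.enc.env` rewrites the file so the values become ciphertext while the file itself stays diffable and committable.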
So you can pass around a plain-text file that is clearly encrypted, and anybody else that has SOPS in their IDE with the correct key will be able to see it. No one else will. And there's a laundry list of ways you can encrypt it. So if you're an attacker who gets hold of this, it's garbage. Like we saw last time: yeah, okay, here's the password, it's encrypted, I think I know what it's encrypted with, but I'm not even going to bother. So how do we fix this for real? It's this. Tools are part of the solution. They 100% are. I work for a company that sells stuff. Tools are a
part of this; you can't do this without them. But if you don't have the processes around those tools, don't buy them. If you don't have a way to train your people and disseminate that knowledge, if it's just, here's a tool, good luck, close the door, and hope it goes right, it's like throwing a grenade in: it's going to cause chaos, and you're not going to get the reaction you want. Actually, a grenade's a bad analogy. It's like throwing a box of cake ingredients into a room, closing the door, and hoping when you reopen the door there's a cake. It might happen, but I seriously doubt it. This is applicable to most things in life, not just this problem, but
most problems in tech. So how do we go about creating these processes, picking these tools, and training people to use them correctly? I personally feel it comes from shifting left really, really, really hard. I stole this slide from somewhere; I'm speaking to your slides. Who here is familiar with the term shift left? Who here likes the term shift left? Because it got abused. It's like DevOps: it got a bunch of marketing thrown at it until it was meaningless, and we ended up hating it. At DevOps Days Chicago, that's a common conversation in the DevOps community: do we even still like the word DevOps? Shift left got a bad rap. I think we didn't shift left far enough. We only got shifted
left far enough that it became, hey, I'm a security person; now, developer, this is your job. That's how it got interpreted: hey, we used to test later, let's test earlier, and we're just going to piss everybody off while we shift the thing left a little. I think this is where we have to shift to. This is a little bit of a personal rant: if you're not at a whiteboard, diagramming things and doing threat analysis as a discussion from day one, you're already not shifting left. You're already waiting. How many people on the security side have ever been called into an early design meeting? Like, this is what we want to build. That's awesome. That's only the first time anyone's ever
raised their hand to that question. No, the second time anyone's ever raised their hand to that question, and that's across the six or so times I've given this talk.
But it sounds like an audit was kind of driving that as well. Like, you're going to do this. Because if you draw it out, like, I've got this great idea, and someone comes in and says, where does the data come from? You can't use that data. How are you going to pass these compliance checks? It's like, oh, we didn't think about that. Well, how are you going to test this part here? What's the testing plan? At the start it feels like, oh man, you're shooting me down, and nobody likes that feeling. But we're one team. Security isn't a separate team that's fighting dev. And unfortunately, that's how most people see it. Well, we saw, yeah,
sure. Yes. Yes. A hundred, 110%. I'm going to get to that, so thank you very much for teeing me up; I'm going to drive that point down your throats for the next 10 minutes. So, we send millions of emails a year. You might have heard our name or seen us, because if you push a credential onto public GitHub, we will send you an email, automatically, to the committer email address on that commit. So always use an email address that works if you're committing things on GitHub, because there are people out there trying to do the right thing and help. We work with thousands of developers, hundreds
of companies, all sizes and shapes, and that's just our paying customers. And we asked, what are the trends we see in people that are really good at this secrets management stuff, versus people that aren't so advanced yet? What do those levels look like? Anybody read Accelerate, that great book from Jez Humble, Nicole Forsgren, and Gene Kim? If you haven't read Accelerate, it's one of the most important books of the last 10 years. It's kind of out of date, but the DORA project keeps updating it. Anyway, DORA says there are four main leading metrics, and if you can identify those in your DevOps organization, you can tell how good you're doing and kind of rate people: how fast you get code to production, mean time to
recovery in a failure event, the mean number of times things break when you push; there are four, but that's a different talk altogether. We said, hey, there are five levels we've identified in how organizations work. Back to your point: if you're working completely in silos, you've got this person over there selling stuff, saying, hey, this is what we're going to deliver to you, Mr. Customer. Then you have somebody on the exec team telling the press, next two quarters we've got this AI innovation that's going to knock your socks off. And the developers are over there like, "I'm just trying to figure out Kubernetes." Yeah, you're going to be working in these silos that don't talk to each other. And what that looks like from a secrets management perspective, you
know, across these levels, is: you're sharing credentials in plain text, because it just works and it's easy and I can move on with my day. What about the rest of it? Maybe I've got a local config file; it's probably checked in. Secrets just end up in our shared source control, all over the place, in a number of different ways. We're not thinking about scoping at all; it's not even on the list. And you're using third-party tools and libraries that have tokens in them. We're putting the numbers out very soon, but a disturbing number of accounts in the last six years have put valid credentials into their PyPI packages, which then got distributed to millions of
developers. That's probably sitting somewhere in your code base inside your company right now. It might be a small way in, but it's still a way in. The initial foothold is all that matters.
And you're embedding things in scripts because you're not thinking about it. And no one's ever thought to look at your server logs, because you're all working independently. Security's over there trying to make sure the NIST compliance standards and the OWASP Top 10 are getting dealt with; the developer on this end is just dumping everything into plain text. Level two: we're starting to realize this is happening. Maybe we need to work together a little more cohesively. Maybe we need to start having these conversations about what this looks like at scale.
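On that server-logs point, before we move on: scrubbing plain text out of logs doesn't have to depend on developer discipline. Here's a minimal sketch of a redacting log filter in Python's stdlib logging; the two patterns are my illustrations, and a real deployment would carry a much bigger list:

```python
import logging
import re

class RedactingFilter(logging.Filter):
    """Rewrite log records so credential-shaped strings never hit disk."""

    PATTERNS = [
        re.compile(r"\bAKIA[0-9A-Z]{16}\b"),         # AWS access key IDs
        re.compile(r"(?i)(password\s*[=:]\s*)\S+"),  # password=... pairs
    ]

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in self.PATTERNS:
            message = pattern.sub(
                lambda m: (m.group(1) if m.groups() else "") + "[REDACTED]",
                message,
            )
        record.msg, record.args = message, None  # freeze the scrubbed text
        return True                              # keep the record, redacted
```

Attach it with `logger.addFilter(RedactingFilter())` and every handler downstream only ever sees the scrubbed message.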
But we're still doing pretty much the same thing. Secrets are unencrypted. We're no longer hard-coding them inline in the code itself; we've stepped up from that, we're more mature than that, but we're still using config files all over the place. Maybe secrets are stored externally by the cloud service provider. On the detection side, now you're actually starting to pay attention. Maybe we should check our PRs before we close those pull requests. Maybe we should do a manual check from time to time. That's that 27% from earlier.
You're pretty sure your final artifacts don't contain any secrets, and you're scanning for that every now and again, when you think to do it, with manual tools. But hey, we gotta get this to production fast, so you might skip the tests. These are generalizations, a lot of generalizations, but the biggest thing we have seen that separates "I haven't really thought about this" from "we're starting to think about this a lot more" is this last one: how often do you rotate your secrets? If you only rotate a secret because you think it got compromised, that's not enough. That is not the right way to think about it; you're just setting yourself up for disappointment. At this level, you are starting to read
your logs and realize, hey, there's plain text in here, we should probably deal with that. I will say quickly, none of this is meant as a judgment. None of this is meant as, hey, you're here, shame on you. Every project in the world starts here. You open up an IDE and just start typing; I used to do a lot of PHP. You're here because you haven't done this stuff yet. But now you're starting to think, all right, we need to work together a little more cohesively. Let's get the same people in the same room at least sometimes, and let's start building this thing at scale, a little more sustainably. Now vaults enter the picture. Your dev teams are starting to be
like, yeah, we use HashiCorp Vault. Yeah, we use Doppler. Yeah, we use Akeyless. Those are all different ways to do the same thing. Developers are starting to scope things for the first time, because when you scope a credential, it can only do the specific little thing it's supposed to do and no more. So when someone gets their hands on it, it's, "Okay, this gives me read access to one S3 bucket." Maybe that bucket has something good in it; maybe it's cat pictures. Who knows? But you're starting to scope these things down so it's not just, "Hey, they have an S3 key" that's a root credential and can get to everything. On the
scanning side, before the code leaves the local developer machine, they're starting to run things like open source tools. git grep is a great one. A little sidetrack: most of the tools you love from Bash got included in Git, so if you've never tried things like git ls-files, or noticed git diff is basically diff, go look. But git grep is the one I like, because it solves the problem of where am I pointing grep: it just looks at the index. Anyway, that's a whole sidetrack. But yeah, developers now are starting to use tools to say, hey, I'm going to look for the secret before I push this important thing. I squashed it, therefore it's clean, right?
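To be clear about what those scanners are actually doing, here's a toy version of a pre-push secret scan in Python. The two specific patterns and the entropy threshold are my illustrations, not GitGuardian's actual detectors, which are far more precise and also check whether a candidate is still valid:

```python
import math
import re

# Two specific detectors, standing in for the ~400 mentioned above.
SPECIFIC = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Slack bot token": re.compile(r"\bxoxb-[0-9A-Za-z-]{10,}\b"),
}

def shannon_entropy(s: str) -> float:
    """Bits per character; random-looking strings score high."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan(text: str) -> list:
    """Flag known key formats, then fall back to a generic entropy check."""
    findings = [name for name, rx in SPECIFIC.items() if rx.search(text)]
    for token in re.findall(r"[A-Za-z0-9+/=_-]{20,}", text):
        if shannon_entropy(token) > 4.0:
            findings.append("generic high-entropy string")
            break
    return findings
```

A real pre-push hook would run something like this over the `git diff` output and exit nonzero on any finding, stopping the push before the secret leaves the machine.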
Now we're starting to use tools like GitHub Actions: hey, on every PR, let's actually run a tool to check this. There are a bunch out on the market. This is a good place to be, honestly. If you're a small company and you're not pushing out anything you consider critical, this is a good place; you should be able to say, hey, I'm at level two, and feel great, because you're doing a lot of stuff the right way. The big indicator, though, is still that secrets are not being rotated regularly. Once you start doing that, you've put together a comprehensive security plan across the company. You really have at this point.
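For reference, the GitHub Actions idea from a moment ago might look roughly like this. Treat it as a sketch: action versions, the ggshield subcommand, and flags change over time, so verify against the current ggshield docs before copying it:

```yaml
# .github/workflows/secret-scan.yml -- sketch, verify against current docs
name: secret-scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history, so new commits can be scanned
      - run: pip install ggshield
      - run: ggshield secret scan ci
        env:
          GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
```

The key stored in the repository's secret store is itself a nice example of the pattern: the workflow references it, but the literal value never appears in the file.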
Your business goals, your security team, and your developers should be working in concert at that whiteboard, saying, how are we going to do this? How are we going to do this at scale? And now secrets are being rotated on a regular basis, with a clear policy about it: yeah, we are going to rotate this at least every 30 days. That's a good place to start. Daily is even better, but let's not go nuts yet. You might think that sounds like a lot of rotation, but you can do this automatically on most platforms. For AWS there's a whole single-page document on how to turn this feature on. You do need to write a little script for it, but they provide the script. So you can just say, hey, AWS, I'm
using your secrets manager; every 24 hours, I want a new secret. You're using vaults, you're rotating things, everything's well-defined, and the current source code revision is clean. Now you're leveraging those local dev tools. I love Git because Git has an entire automation platform called hooks. If you're not using Git hooks in your development process, go look at what you can do with them. It's amazing. There's a great website called githooks.com; Matthew Hudson built it a few years ago. It's a little dated at the moment, at least the newer references, but the explanations are rock solid, and there are a bunch of links out to things people have built with Git hooks. Full disclosure, GitGuardian builds a command line tool, ggshield, which you can run as a
pre-commit hook or a pre-push hook to catch credentials before they ever leave your machine. If you stop it there, it's super easy to fix. The other thing I like about whiteboards is that whiteboarding is stupid cheap. The farther left of your IDE you can go, the cheaper it is; but fixing it at the IDE level is still really, really cheap and causes no real problems. At this level you're constantly scanning everything; the scanning is all set up, your security team owns that, and developers are part of the security discussions, not every time, but when it's serious enough. And that's the big differentiator between this level and being the A team: being one cohesive unit. Yes, you have teams you logically fit into. Yes, there are different
departments. I'm not trying to say we should have completely flat organizations. But your executive team, the sales team, security, and that crazy developer in the back who can fly the plane should all have the exact same plan and know exactly what it looks like internally when you do a thing. This is kind of our pie in the sky. And yes, we know companies that are at this level, that have learned the lessons the hard way and fought to say, all right, we have one vault system united across our orgs. We can audit the thing in literally 30 seconds. We know there are no hard-coded secrets, and we can prove it. Because if the NIST standards and CISA recommendations come to fruition in the next couple of years, you're
going to have to start attesting that you've solved this problem, because it is that big of a problem. You think SBOMs are bad? This is causing more breaches than anything else in the industry right now. The NIST standards from last year, 800-204 I think, and I'm bad with numbers, kind of point to: you should be able to prove you can deal with this if you're building microservices. If you're doing DevOps, you should have a plan for this stuff; there's a whole giant section about it. But so far it's not a regulation. There hasn't been an executive order saying you have to have attestation that you've tried to solve this. But these companies could prove it if they needed to, because they have gotten rid of secrets
wherever they possibly could. This one is the biggest one we see as a differentiator. Anywhere you can possibly replace a credential with role-based or rule-based access control, depending on what you're using or what you're building, the better off you are. The best passwords are the ones that do not exist. Total sidetrack: I only bank on my phone anymore because it's biometrics. I don't have a password. I do not know the password of my bank account. I could reset it if I needed to; I own the reset path, but that's behind biometrics as well. I'm not worried about anybody stealing my password. How many characters is it? I don't know, I just randomly typed it. But that's the biggest one. It's like,
if you're serious about solving this as an organization, you're going to root out every password you can. It's like, this shouldn't exist; this should be a control. Some still have to exist somewhere. You're still going to have to have a root password somewhere. It's unfortunately long-lived, but you can rotate those. There's even an argument to be made that if you've completely secured the recovery path for that root password, you shouldn't know your root password either, because you can just reset it every time. And then multi-factor authentication on that reset path is actually more secure than multi-factor authentication on top of a long-lived password. But that's philosophical. Back to this. You're doing scanning on literally everything. Your remediation
workflows are also automated. So, hey, someone pushed a secret and it shouldn't be there. Someone should get an alert that that happened, but they should also be able to say, all right, this process will have fired, let me go look... yep, it's all taken care of now, and go back to the rest of their life. That is entirely possible with a good platform. With the right API calls, you can make anything happen on the internet. Then you're down to a little fix-it script that just automatically fires off. My hope with this talk is to, one, scare the crap out of everybody into no longer hardcoding secrets. Hopefully I've done that. The bigger thing is that there are people out there actively trying to hurt us. Almost every story you
hear in the news is something like, "Hey, they had this crazy zero-day breach," and then they found a giant pile of passwords. Or somebody called up and said, hey, I'm the county password inspector. That's one of my favorite Saturday Morning Breakfast Cereal comics: it shows how Hollywood thinks hackers work, like, we're going to backdoor into their zero-day, yada, yada, versus how hackers actually work: "I'm the county password inspector, and I'm calling to inspect your password." We've got to stop making it easy on them. This is pie in the sky; if you try to get there tomorrow, it's just gonna suck, and everyone's gonna hate you for being the security person who
came back from a conference and said you're doing it wrong. What you can take back is that all of this is available: the Secret Management Maturity Model, on GitGuardian's website. It's a free download, not even behind an email sign-up. It's just there. You can take this back to your team and say, hey, here's a map. We're here. We're doing these things, but we can do this better. Have we thought about doing that? Have that conversation while you're in the room together: hey, how do we make security better here? If you're only having it in retros, because you're being reactive to an emergency you've lived through, yeah, that gets the ball moving. Yeah, that gets the business to actually care about this stuff. But
that's also too late. How do you have this conversation with the team? I'm completely open to suggestions on that, because I've still never heard a universal truth. But it starts with a conversation, just with the person next to you and your team leads. If you're a security person who's never talked to your lead developers, there you go: go Slack them and say, hey, you got 15 minutes next week? If you buy them donuts, they will talk to you for an hour. If you buy them Starbucks, they might give you 15 minutes. They're busy, but so are you. It goes the other way, too. If you're an exec who's in charge of a department and you've never had a conversation about security around here, there are
local organizations; are they on that sign? 2600, they're out in the hallway. Go talk to them. Because if you've never invited an exec, or anybody else that you have to interact with, to a local security function, why not? Why haven't you extended the olive branch? Say, hey, have a beer with us. Hang out. They might have a terrible time, but at least get the conversation started. I'm Dwayne. I live in Chicago; heading back there in a few hours. I've been a developer advocate since 2016 doing this stuff, mostly platform stuff. Go check out the Security Repo podcast. We've got a lot of really awesome guests and a lot of great stuff planned. Love rock and roll. Hit me up on the internet. With
that, I'll open it up for questions. Oh, yeah, CMMC. I was about to bring that up; thank you for raising it. But that is specifically if you're selling or bidding on DoD contracts, and it was never enforceable until, like, this year. And they're still fighting back on it, saying, hey, you can't really audit us for that. Before 2019, did anybody know what an SBOM was? Yeah, Sonatype knew what it was, and they've been trying to sell them for 10 years. Nobody else did. I've talked to Sonatype enough about that. They're like, yeah, we're actually really glad about that order, but it also caused so much havoc and chaos. I
did not know that. I haven't been following PCI advancements, so... valid and non-expired. Non-expired is an interesting one. Most things have a way to test for validity that's not intrusive. I say most; not everything does. Random passwords don't. But yeah, that's a good point. I did not know that, though. So, PCI 4. All right, any other questions? Enjoy the rest of the day.
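Earlier I mentioned catching credentials with a pre-commit or pre-push hook. Here is a minimal sketch of that idea, purely illustrative: the handful of regex patterns and the whole scan-the-staged-diff approach are toy stand-ins for what purpose-built scanners such as ggshield or gitleaks do with hundreds of detectors and validity checks.

```python
"""Toy pre-commit secret check: scans the staged diff for a few
high-signal patterns. Illustrative only; real scanners are far
more thorough and produce far fewer false positives."""
import re
import subprocess

# A couple of well-known shapes; real tools ship hundreds of detectors.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}"),
]

def staged_diff() -> str:
    """Return the unified diff of what is staged for commit."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def find_secrets(diff: str) -> list[str]:
    """Return added lines from the diff that look like secrets."""
    hits = []
    for line in diff.splitlines():
        # Only inspect added lines; skip the "+++ b/file" header lines.
        if line.startswith("+") and not line.startswith("+++"):
            if any(pat.search(line) for pat in PATTERNS):
                hits.append(line[1:].strip())
    return hits

# Installed as .git/hooks/pre-commit, the hook body would be roughly:
#   import sys
#   sys.exit(1 if find_secrets(staged_diff()) else 0)
# A non-zero exit is what blocks the commit, which is the whole trick.
```

The non-zero exit status is the entire mechanism: git refuses the commit, the developer fixes it locally, and the secret never reaches the remote where it becomes expensive.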
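And the "little fix-it script" I described for automated remediation can really be this small. Everything below is hypothetical: the incident fields, and the `revoke` and `notify` hooks, are stand-ins for whatever your scanner's webhook payload and your secret manager's actual API provide.

```python
"""Sketch of an automated remediation flow for a leaked-secret alert.
The incident shape and the revoke/notify callables are invented for
illustration; in practice they would be wired to a scanner's webhook
and to real APIs (cloud IAM, a vault, Slack)."""

def remediate(incident: dict, revoke, notify) -> list[str]:
    """Revoke the leaked credential first, then tell a human.

    `revoke` and `notify` are injected callables, so the same flow
    can point at production APIs or at plain stubs in a test.
    """
    steps = []
    revoke(incident["secret_id"])  # kill the credential before anything else
    steps.append(f"revoked {incident['secret_id']}")
    notify(
        f"Secret {incident['secret_id']} leaked in {incident['repo']} "
        "was revoked automatically; please review and rotate."
    )
    steps.append("notified owner")
    return steps
```

This is the "go look... yep, it's all taken care of" experience: the alert the human receives already says the credential is dead, so the remaining work is review, not firefighting.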