
Avoiding Credential Chaos: Authenticating With No Secrets

BSides Las Vegas · 42:20 · Published 2025-12
About this talk
Identifier: T7AHQT

Description:
- Advocates eliminating or reducing manually managed secrets.
- Demonstrates Kubernetes clusters authenticating without secrets.
- Explains AWS IRSA and Azure Workload Identity.
- Provides Terraform source code.
- Offers practical guidance.

Location & Metadata:
- Location: Ground Floor, Florentine E
- Date/Time: Monday, 14:00–14:45
- Speakers: Chitra Dharmarajan, Steve Jarvis
Transcript [en]

Good afternoon, everyone, and welcome to BSides Las Vegas, ground floor. You made it. This talk is titled "Avoiding Credential Chaos: Authenticating With No Secrets," with two speakers: Chitra Dharmarajan, and Steve Jarvis right there. Before we go ahead, we'd like to thank our sponsors, especially our diamond sponsors, Adobe and Aikido, and our gold sponsors, Formal and Dropzone AI. It's their support, along with our other sponsors, donors, and volunteers, that makes this event possible. A few other announcements: these talks are being streamed live, and as a courtesy to our speakers and audience, we ask that you check that your cell phones are set to silent.

If you have a question, use the audience microphone so the YouTube stream can also hear you; the speaker ops volunteer will point out where the mic is. As a reminder, the BSides LV photo policy prohibits taking pictures without the explicit permission of everyone in frame. These talks are all being recorded and will be available on YouTube in the future. And with that, let's get started. Please welcome your speakers. [applause]

>> Hello everyone. Good to see a room completely filled right after lunch. Let's share some thoughts on avoiding credential chaos: authenticating without secrets. That's the topic my colleague Steve Jarvis and I are here to discuss. A quick introduction: I'm Chitra Dharmarajan, VP of Security and Privacy Engineering at Okta. I love building high-performance teams across the globe and leading security transformations. Outside of my day job, I advise many startups in the Bay Area, and I love to grow along with the technology and innovation these startups bring to the floor. My security mantra is all about being an enabler, never a gatekeeper, in a security career. I've dropped my LinkedIn here, and now I'm calling on my colleague Steve Jarvis to give his introduction.

>> Hi everyone. My name is Steve Jarvis. I'm a security architect at Auth0, at Okta. Before this position, I spent a long time as a software engineer building networking and security software, and that still frames a lot of how I think about security now. Outside of work, I spend a lot of time cycling. I'm a dad to a four-year-old, and my favorite thing is actually cycling with that four-year-old. My security mantra is that the secure way has to be the simple way; if it isn't the easiest way, another way will appear. There's a link to my personal site at the bottom, and once we're done here, I'll put the resources up there too.

>> Thank you, Steve. A little-known fact about Steve: he has strong opinions about identity and even stronger opinions about bicycle tires. So if there are any cyclists or racers here, you have a subject matter expert to exchange thoughts with. I want to say your secrets are safe with us. This is a very safe space; let's come clean and share your secrets. Steve and I have taken a pact to keep them safe. In today's agenda, we are going to enable you to break up your relationship with shared credentials and secrets. We'll start with the question you may be wondering: what's wrong if I have a few secrets? We keep them safe. No offense, it's just a little old-fashioned, and it brings a lot of operational toil in secret rotations. Then you may come to realize there are too many secrets; even your secrets may have little secrets of their own. How do you actually get rid of them when it's too much to handle? That will be our second phase of discussion. Then we'll all make an executive decision together to get rid of them: time to break up. And as security practitioners, we only believe what we see, right? The proof is always in the pudding, so there are going to be some cool demos of cross-cloud authentication without secrets. Finally, we leave this room in peace knowing that our secrets are safe. Oh wait, there will not be any secrets to safeguard, so we will be even more at peace knowing that our secrets are cleaned out. That's the goal and agenda for today's discussion.

Enough fun. No tech talk can start without some scary data points and numbers, right? The global average cost of a data breach, as of 2024, citing Thomson Reuters, is about $4.88 million per breach. That's a global average, and I'm sure the real number is higher, but this is the industry's quoted figure. Within the US alone, the average cost of a breach is about $10 million. The cost increase comes from expenses like detection tooling, credit monitoring services, regulatory fines, and so on. Those are the tangible costs, but there is also a lot of intangible cost: loss of customer trust, employee anxiety, reduced productivity, a lot of security operations work, and, last but not least, rising cyber insurance premiums. Most data breaches start with lost or stolen credentials, so let's understand the attackers' tactics so that we can strengthen our defenses. Most data breaches start with compromised credentials, whether lost, stolen, or harvested from prior attacks, and with the aid of bots, victim services or the victims themselves are targeted for the attacker's initial access, and thereby the data breach. Before we go into the detailed discussion of this session, we all want to remember the golden rule: thou shalt not have the burden of any secrets. Let's say it in our heads one more time: thou shalt not have the burden of any secrets. Like any golden rule, there is always going to be a caveat, right? If you must have a secret, make it a point to secure it in an HSM or a KMS, and have automated rotations in place.

You may still need some secrets in your environment, whether for bootstrapping your infrastructure or for whatever other reason. If you still want to keep some secrets, then we need to choose which ones to keep, and we need to bake automated rotation into our everyday security operations, not something we do only when a bad day happens. Every four or six weeks, whenever you do blue-green deployments, whatever you can rotate, you should be rotating to maintain your security posture. Rain or shine, we rotate them on time. If you plan to keep a secret, that's a liability we own.
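The rotation cadence described above can be made concrete with a small helper. This is a minimal sketch, not from the talk's materials; the function name and the six-week policy are our own illustrative choices:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: rotate at least every six weeks, matching a
# blue-green deployment cadence like the one mentioned in the talk.
MAX_SECRET_AGE = timedelta(weeks=6)

def rotation_overdue(last_rotated, now=None, max_age=MAX_SECRET_AGE):
    """Return True if a secret is past its rotation deadline."""
    now = now or datetime.now(timezone.utc)
    return now - last_rotated > max_age

# A secret last rotated 50 days ago is overdue under a 6-week policy.
now = datetime(2025, 8, 4, tzinfo=timezone.utc)
print(rotation_overdue(datetime(2025, 6, 15, tzinfo=timezone.utc), now))  # True
print(rotation_overdue(datetime(2025, 7, 20, tzinfo=timezone.utc), now))  # False
```

A check like this can run in CI so rotation becomes routine rather than an incident-day scramble.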

So with that preamble, let's think about a regular workday at a technology company: employees logging into their SaaS applications like Confluence, Slack, and GitHub; the CI/CD pipeline deploying workloads into your cloud infrastructure; SREs accessing EC2 instances via SSH keys; and, of course, services accessing many resources via API tokens. Seems like a typical IT workday, right? But this infrastructure is infested with risky secrets: an employee using a password instead of corporate SSO; an SRE using an SSH key, maybe stored on their laptop, you never know, or in AWS config files; service-to-service communication between Kubernetes microservices via shared secrets; and those API tokens. There are credentials, credentials, credentials everywhere. Anything that gets written to logs, anything that gets harvested, puts your production infrastructure at risk. So Steve, what do we do about this? How do we go from here to a clean state, the target state of secure infrastructure?

>> Yeah, thank you, Chitra. You might have to mute, because I think we're getting feedback. So, thanks, Chitra. What are we going to do about it? The goal here is to move one piece at a time. We're starting with the picture on the left, which is the current state, how we're operating right now in our imaginary company, and we're going to try to get to the picture on the right. The general idea is: red bad, blue good. We're going to see how we can change these different components, the way they authenticate with each other and how we manage their secrets, and dramatically reduce the burden we're feeling. We'll go about this in four areas. First, we'll look at engineers' access to SaaS and servers. Second, our CI/CD, which lives in GitHub; in this picture, GitHub serves as both our version control and our continuous deployment, and we'll look at how it accesses the cloud infrastructure to deploy resources. Third, API access from our services, which need to talk to the API we're operating. And lastly, the communication between those services across Kubernetes clusters. At each step, we'll suggest a different design, something different we could do to improve the situation.

First, user auth. This initial one is probably familiar to just about everybody: passwords are so yesterday. Instead of a password, we can use public/private key technology like WebAuthn, and add an IdP to consolidate identity management, so we don't have those static, persistent credentials around anymore. In WebAuthn, your device generates a private key, and that private key is used to sign proof that you are you. Requiring a biometric to use that key, to sign the assertion or challenge, is like a built-in second factor, so we get MFA automatically in this flow. One of the big pain points originally was that the private key was strictly bound to your device: if you made an account on your phone, you couldn't log in on your laptop, and that was really not workable. Back to "the secure way has to be the easy way": this never caught on, because that was a huge pain. Passkeys addressed this by syncing keys across devices, so that burden is largely gone now. We can use passwordless authentication with built-in MFA, and it's a huge improvement, because in a login flow that key never leaves your device. You can't leak it, it can't get phished, it can't leave. And even if there is a data breach at the provider, which, like Chitra mentioned, is fuel for a lot of this security incident cycle, the database that gets lost holds only your public key, and nobody can impersonate you with the public key. So even in the worst-case breach, it's still relatively all right. This is a quick win for our golden rule of not carrying the burden of secrets: we just offloaded a bunch of passwords to be the device's responsibility.

The second thing we'll look at is still engineer access. Right now, if we have an incident and need to access some servers, we're relying on SSH keys. That really means every engineer who could possibly respond to an incident, which is typically all of them, has a key on their laptop that probably lives just about forever. So we're going to build on the same authentication flow we already have, leveraging that IdP, but now we'll use it to assume roles in AWS, temporary credentials that live for a short time, and combine that with a native AWS service called Session Manager. Now we don't need a secure shell session here at all: Session Manager uses AWS's APIs to establish a shell the engineer can use in that environment. So not only are we eliminating some of the risk we were carrying from those static credentials, we actually get to harden the host at the same time, because now we don't need any ports open to the internet. This is a very AWS-specific picture, but the things we're painting here are just for example purposes; the other major cloud providers have the same concepts available.

Quick recap: we're moving through this picture really well. We've already made a happy story out of all of the engineers' access; there are no secrets we have to worry about in that part of the picture anymore. Now we'll move a little to the right, over to the CI/CD. Again, that's GitHub workflows, which is what we use to actually deploy our services to these clouds. What we've been doing so far in GitHub workflows is provisioning static secrets: for deploying the AWS services, we have IAM users with access keys; to get stuff into Azure, we have the client ID and secret for a service principal. Those workflows pull those secrets out of GitHub Secrets and deploy the infrastructure, and that all works all right, but we would love to not need to provision these. So what else can we do? Well, GitHub can actually act as an OIDC issuer: it will sign tokens for your workflow as a built-in feature. We can use that as the identity provider to assume IAM roles and Entra service principals, and we don't need to provision any secrets in GitHub anymore. All we need is configuration: the workflow needs to know what IAM role ARN and what service principal to use, and we need to establish the trust in these clouds, saying "we have an IAM role here, and if I get an ID token from GitHub, it's allowed to do whatever this IAM role is allowed to do." It just becomes configuration; there are no more secrets in the picture. And we can, and should, lock that down to the specific GitHub repository and branch; you can very easily get that level of specificity when you're defining those trust rules.
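As a sketch of what that trust configuration can look like on the AWS side (the account ID, repository, and branch below are hypothetical placeholders), an IAM trust policy for GitHub's OIDC issuer typically pins the token's `aud` and `sub` claims to a specific repo and branch:

```python
import json

# Hypothetical values -- substitute your own account, org, repo, and branch.
ACCOUNT_ID = "123456789012"
REPO = "example-org/example-repo"
BRANCH = "main"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # GitHub's OIDC provider, registered as an IAM identity provider.
        "Principal": {
            "Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/token.actions.githubusercontent.com"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            # Lock the role down to tokens minted for this repo and branch.
            "StringEquals": {
                "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                "token.actions.githubusercontent.com:sub": f"repo:{REPO}:ref:refs/heads/{BRANCH}",
            }
        },
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The same document would normally live in Terraform, per the talk's demo repository; it is rendered here as a Python dict only to show the claim conditions.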

More quick progress: we've got a great mechanism for our human users, no more secrets there, and our CI/CD has eliminated the secrets from those workflows. Now we should check out what our actual services are doing, the things we're building, and how they're communicating with each other. If we go all the way to the right side of the diagram, we have a service running in Kubernetes in Azure, connecting to an API with an API token. That API token is a preconfigured shared secret, probably with an indefinite lifespan, and there's a lot of risk here, because it's transmitted as-is across the wire: it exists in the same state on the client as it does on the server. If we ever have a software bug or a logging error on either end, we're going to put that API token in a log somewhere; any software vulnerability could leak it. We would really love to change this. So what else could we do? We can use some more private key technology and do something called a private key JWT. What that looks like: we generate a key pair, our service signs attestations with the private key and submits them to our IdP, and the IdP knows our public key, so it can verify that we are the client holding that private key: "I trust this; I'm going to give you a token." It issues an access token, and that access token is what we then pass to our API. The access token is short-lived: it can live for hours or minutes, however long you need, and we can very easily run through the flow again to refresh it. So now, if we have a problem and the access token we're actually submitting to the API ends up in a log or in a breach, there's a very good chance it has expired by the time anyone can do something with it. And that's not all. There's another really cool little win here besides the secrets-management benefit: because this is fully OAuth 2 and we're getting a JWT issued by an IdP, we can also leverage custom claims and scopes at the IdP, and those end up embedded in that access token. Before, we had a really opaque, generic API key being passed back and forth; now we can define authorization values, scopes and claims, in the token at the IdP. We've given our IT administrators more power to control exactly what this service is allowed to do at the API, not just whether it's allowed to access it. We get more knobs to turn: what ends up in that access token, and what do we want to enforce?
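To make the token shape concrete: in the real private_key_jwt flow the client signs with an asymmetric key (e.g. RS256) and the IdP verifies with the registered public key. The sketch below uses HMAC (HS256) purely so it runs on the standard library alone; what it shows accurately is the structure the talk relies on, a short `exp` and a custom `scope` claim. All names and values are illustrative.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(key: bytes, scope: str, lifetime_s: int = 300) -> str:
    """Mint a short-lived HS256 JWT carrying a custom scope claim."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({
        "scope": scope,
        "exp": int(time.time()) + lifetime_s,  # short-lived by construction
    }).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify_token(key: bytes, token: str) -> dict:
    """Check the signature and expiry; return the claims or raise."""
    header, claims, sig = token.split(".")
    signing_input = f"{header}.{claims}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(claims + "=" * (-len(claims) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload

key = b"idp-signing-key"  # hypothetical; in the real flow the IdP signs
token = issue_token(key, scope="inventory:read")
print(verify_token(key, token)["scope"])  # inventory:read
```

Note how expiry enforcement is part of verification itself: even a leaked token fails the `exp` check shortly after issuance, which is the "good chance it's expired" property described above.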

Cool. With that change, we're three-quarters of the way through the specific topics we wanted to fix. We've removed all the credentials the engineers had for directly accessing SaaS and the servers in the cloud; our CI/CD is happily humming along with no provisioned secrets; and the services we have running in Kubernetes and Azure no longer need a static API token to interact. The last thing we want to look at is the communication between these two services. We have clusters running in each cloud, and we want them to be able to talk to each other. Right now, those two services are using pre-shared secrets to communicate, and it's working; there's nothing magic about it. We generate some 30-character magic string, make sure everyone knows the same one, and then they can talk to each other, because no one else would know the same thing if we didn't put it there on purpose. That works, but it's actually a real pain on redeployments and failovers, because this is a distributed system: if we have to rotate, we have to update this pre-shared secret at a bunch of disparate clusters. It's difficult, and it's causing us a lot of heartache. So we would love to change this to rely on PKI and mutual TLS, using the certificates issued to these workloads to establish identity and a root of trust. Then the real anchor of trust, the long-lived key, is no longer the pre-shared secrets; it's the root CA private key, which we can store in something like an HSM. In real life there would probably be more levels here: it's not going to be the root directly issuing certs to the cluster, so say we have an intermediate. But the point is that the root is locked away in an HSM, or even something physical like a bank vault; we have intermediates; and the leaf certificates used as identity for the services in the cluster, we can rotate on every deployment. Every time we ship a new cluster, we just change them, so rotation doesn't become an event anymore. It just happens. It's a normal day: we're shipping an update, and we get new certificates issued.

>> So Steve, not every infrastructure is mature enough to do mTLS authentication, right? This requires a lot of service mesh configuration. What other mechanisms can secure the service-to-service communications?

>> Yeah, good question, and that's fair. To establish PKI we have to do device attestations, and, like you mentioned, there's probably a service mesh to actually control the access policies. There are a lot of prerequisites for something like this. So instead — sometimes we don't really need a BMW; a Camry is perfect. We can improve the situation a little using the pre-shared key that's already present. Instead of passing that pre-shared key directly, we can use it to build our own JWT: we put the values we need into our own token, and we can use something like the hapi Iron module with that pre-shared key to provide both integrity and secrecy. We'll authenticate and encrypt that token, and since each side has the pre-shared key, we can seal it on the sender and unseal it on the receiver. It's admittedly a modest improvement compared to something like full PKI and mTLS, but it comes at quite low cost.
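As a rough illustration of the sealing idea: Iron derives keys from the shared string and both encrypts and integrity-protects the payload. This standard-library sketch shows only the key-derivation and integrity half (the standard library has no AES), so it is a simplified stand-in, not a substitute for the real module, and all names are ours:

```python
import base64, hashlib, hmac, json, os

def derive_key(password: str, salt: bytes) -> bytes:
    # Password-based key derivation, as Iron does, so the shared string
    # itself is never used directly as a cryptographic key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def seal(payload: dict, password: str) -> str:
    """Serialize and integrity-protect a payload with a key derived
    from the pre-shared string. (Real Iron additionally encrypts.)"""
    salt = os.urandom(16)
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    mac = hmac.new(derive_key(password, salt), body.encode(),
                   hashlib.sha256).hexdigest()
    return f"{salt.hex()}.{body}.{mac}"

def unseal(token: str, password: str) -> dict:
    salt_hex, body, mac = token.split(".")
    key = derive_key(password, bytes.fromhex(salt_hex))
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("tampered, or wrong pre-shared key")
    return json.loads(base64.urlsafe_b64decode(body))

shared = "thirty-character-magic-string!"  # the existing pre-shared secret
token = seal({"svc": "inventory", "op": "sync"}, shared)
print(unseal(token, shared)["svc"])  # inventory
```

The key property matching the talk: the pre-shared string stays on each host, and only the sealed, verifiable token crosses the wire.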

Yeah, so it's a simple improvement; we do improve the picture here by using something like an Iron token.

>> So Steve, as part of this conversation, we're supposed to walk the audience through a very cool demo, right? Cloud-to-cloud resource access, authenticating with no secrets.

>> Yeah. First I want to touch on why this is still an improvement, because we're still burdened by these secrets, right? We'll get to the demo in a second. When we started talking, we said we were going to remove these secrets from our system, and now, in a couple of places, like here with this pre-shared key, we didn't actually remove it. And before, if you remember, when we were talking about access to the client API, we just exchanged an API token for a private key. We still have stuff to take care of. Are we really reducing the burden? We actually are, because in situations like this, those keys never leave the host. The private key, or this pre-shared key now, never leaves the service it's provisioned for. It doesn't go over the wire; it doesn't end up on a server. And the token that does get sent is short-lived, so if there is an issue, it'll expire quickly.

We can rotate without worrying about a broader, longer-lived impact, and private keys we can lock away relatively more securely: an HSM, a KMS, something like that. So even though we do still have secrets, we're making better choices about the types of secrets we have, how we rotate them as a regular part of deployments, and how we store and manage them. Some secrets will always exist in our system, but we can make better choices about what they are and how we treat them, and still significantly reduce our risk. In the end, we get a happy, all-blue picture: we have all the connections we had originally, all the services are performing the same functions with the same authentication going on, but in this state all the secrets are managed by the IdP or stored in an HSM. The credentials that remain in the system are rotated as a regular course of, say, our weekly deployments. It's a much better spot to be in. Feels good, right?

>> Yeah, but we still need to walk our audience through that cool demo, right?

>> We do.

>> So, everything we've talked about is securing user access, CI/CD, and service-to-service authentication. But much of tech infrastructure is a multi-cloud ecosystem, right? A service running in one cloud accessing a resource in another cloud, and vice versa. So do you think we can federate identity in inter-cloud communications?

>> Yeah, good question. I do, so let's look at that. I think we can extend this a little, because it's a common use case. Say we have services running in one cloud; how do we authenticate to APIs in the other? This is a little different from the inter-service communication we just had. Now say we have something like a management plane running in one cloud, and we need to reach out to the APIs of the other. How can we federate across clouds without secrets here? There are two different mechanisms we're going to talk about, because this goes in both directions. On this side, we have Azure running Kubernetes on AKS with a single container, and we want it to be able to assume an IAM role in AWS, gaining all the rights and privileges that IAM role has. On the other side, we're running Kubernetes on EKS in AWS, and it starts out the same: Kubernetes issues a service account token to the container, natively, as a feature of Kubernetes. That part is the same on both sides, but on the AWS side we have more components. The reason is that in both of these situations, when Kubernetes issues that service account token, the issuer, the trusted identity, is tied directly to the cluster ID. That might be fine, but if we're talking about hundreds or thousands of clusters, or redeploying them regularly, now we have a new problem: we need to know which issuer ID we're trusting, and it's changing all the time. The point of these extra components is that we ultimately get an ID token issued from Cognito. Think about this in layers of the infrastructure: in Terraform we have the infrastructure that defines things like the identity providers, the clusters themselves, and the IAM roles; then we have Kubernetes, the actual manifests, where we find what services are running and how those AKS and EKS clusters are configured; and then we have the actual applications. Part of the challenge is that when we change things at the Kubernetes layer, like redeploying the cluster, it has ties to the infrastructure-level configuration. By introducing Cognito as an issuer, we get a stable, trusted issuer in AWS that is separated from the EKS issuer. Does that make sense? We've broken the intrinsic tie between the cluster and what Entra ultimately has to trust; Cognito sticks around across cluster redeployments.

>> Right, so in this case, Cognito is helping us persist the identity that's being federated across clouds.

>> Exactly. Yes. So this is all set up and running, and we have three planted problems at key points. The first is on the Azure side, just trying to assume that IAM role; we have a bug with the role assumption. The nice thing is that the Azure-to-AWS story is pretty simple: there's only one hop, so we know where the problem is. On the AWS side, we have two more problems. One: we get the service account token issued, but we're failing to assume the IAM role with IRSA, IAM Roles for Service Accounts, an AWS service feature. And the third is actually getting that ID token from Cognito. So give me one moment; I'm going to switch to mirror mode, because I'm going to actually do this live, but I can't see what I'm typing in this mode.

Heavens, where'd my stuff go? Okay. Okay, great. So this is running: in the top-left pane we have the live output from the application running in the cluster in AWS, and on the right we have the output from the app running in Azure. We're going to start with the one on the right. [clears throat] We see it has failed to assume the role: there's no OpenID Connect provider found for this issuer. Like I said, this is the issue of the cluster ID changing. In this format, that's the subscription ID and that's the cluster ID, and every time we redeploy the cluster, fail over, or roll back, it changes. Let's look at the source code for this. It's split up in the same hierarchy I described earlier: Terraform defines the infrastructure; Kubernetes manifests define what Kubernetes is running; and then there are two Python apps, one running in each cloud, whose goal is simply to assume the role and dump something about the identity in the environment, just to show we actually have the desired access. The first issue is that this issuer was basically the last version of our Azure cluster: we redeployed it and forgot to change this. So I'm going to comment that out, and in this case I have the correct value provided as a variable, because this is all defined in one repository and I'm just deploying it here, so I already know the correct value. Once we redeploy, I'd expect the logs on the top right to correct themselves automatically. I have make targets so I don't have to think too hard here live, but it's just going to run a terraform apply, and that will fix the IAM identity provider, specifically the identity provider that's allowed to assume this role, to match the actual cluster ID we have running over there.

>> So with every blue-green deployment, if we reconfigure the issuer ID in the other cloud, federation keeps working?

>> Yeah. Yep. And this is something that could be automated, like I'm doing here; there's a solution here. I don't mean to overcomplicate it; AWS probably has a fix for this one too. There are just going to be trade-offs whichever way you go. Okay, so that'll turn green, if it doesn't make me a liar.

I should have made it sleep less. Yay, green. Now we see we have the issuer we changed it to, and the proof in this pudding is that we assumed the role in AWS; that's just an STS "who am I" call. Okay, so let's keep going to the second issue, over on the AWS side now, because Azure is working and we're shifting our focus. Here we see that we are not authorized to perform AssumeRoleWithWebIdentity: we have a Kubernetes service account token, we want to exchange it for an IAM role, and we're not authorized to make that call. This one's a little more opaque, but I get the advantage of having written these bugs on purpose, so I know to look at the service account annotation we put on Kubernetes, because that's how you assume a role with IAM Roles for Service Accounts, with IRSA: you add an annotation for a role your Kubernetes pod is allowed to assume. In this case, we have the wrong ARN, and we know that because "wrong" is in the name. [snorts] Looking at the EKS config, this is another relatively simple mistake to make, but we're going to change that annotation to the correct ARN for the role we're actually allowed to assume, and deploy the apps. "Make deploy apps" is going to apply the Kubernetes manifest we just updated, and then a refresh simply scales the deployment down and back up, so I force it to deploy a new pod and pick up the new config.

While that's thinking, I'll start talking about the third problem; we'll come back to the second in a moment, because it should just take a minute. The third problem: once we fix this one, we'll actually have an IRSA role that's allowed to use Cognito. Then we want to call Cognito and say, "Hey, I'd love to get an ID token from you, with that nice stable issuer," and use that to assume a service principal in Entra. For this final problem, we move up to the application level, the couple of Python apps we have here, and the problem is pretty simple. We have the service account token issued by Kubernetes, and Kubernetes records the issuer prefixed with the protocol, https://. Cognito doesn't want that; it just wants the domain and the rest of the URI. So we're just going to get rid of the prefix. Once we do, Cognito should happily issue us an ID token, because we're then presenting the issuer it expects, the one we've configured it to accept before issuing an ID token. But first, let's check whether we got past that last problem.
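The fix described is a one-line normalization. A sketch using the standard library (the helper name and the example issuer URL are ours, not from the demo code, though the URL is shaped like a real EKS OIDC issuer):

```python
from urllib.parse import urlparse

def issuer_without_scheme(issuer: str) -> str:
    """Drop the https:// prefix from a token issuer, keeping the
    host and the rest of the URI, as the identity pool expects."""
    parsed = urlparse(issuer)
    return parsed.netloc + parsed.path

# Hypothetical EKS OIDC issuer URL.
iss = "https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE1234"
print(issuer_without_scheme(iss))  # oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE1234
```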

Yeah. So we still have a problem, but before, we were not allowed to call AssumeRoleWithWebIdentity at all; now we have a role, and we just have an invalid login token. And if I check the service account annotation again, we should see it updated to the actual EKS workload role that's allowed to be here. So: one more bug fix for this final step, and it should light up all green. Since I changed the application, I need to rebuild the containers and push them to the registries; this is going to build an image locally and push it to ACR and ECR, and we should be set there.

Okay, since this is going to take a second — pushing was quicker before there were a lot of people in here, so that's all right — why don't we move on to some of our takeaways, and we'll come back and check on that in a second.

>> So the key takeaways we want to reiterate for the audience: thou shalt not have the burden of secrets, the golden rule. It's incumbent upon us to design in a way that removes static credentials wherever possible. If there is an opportunity to leverage passwordless solutions for people logging in, be it SSM or passkeys, go for it.

Utilize managed identities and trusted IdPs for service-to-API and service-to-service communications, whether that's mTLS or something like Iron tokens, where the shared secret is never sent between the communicating entities. Coming to the caveat: if you must have a secret somewhere, thou shalt secure the secrets. Think about an HSM or a KMS, securing them with hardware protection. And if you must keep some secrets in some environment for whatever reason, it's incumbent upon us to rotate them frequently. Choose the right secrets to keep, and bake their rotation into your regular operations. Let's keep these golden rules in mind when we build our systems and our infrastructure. Yeah.

>> Thank you.

>> Awesome. Yeah, thank you. [applause]

And I'm definitely going to show this all green in a second, too. I'm not going to leave it broken.

>> Live demos are always tricky. You did awesome, Steve.

Yeah. Any questions or thoughts? Or I'll just talk for another minute. Yeah.

>> Hi. Could you explain a bit more about those Iron tokens and how they work? I'm guessing they're just like a certificate?

>> No, the Iron token does a password-based key derivation to generate keys: you give it just a random string that you share on each side, and it generates the keys necessary to do the crypto for the authentication and the secrecy, kind of on the fly.

>> Okay, thanks.

>> Yeah — screen, sorry. So anyway — all right, thanks, everybody.

>> Thank you.

>> Thank you, guys.