
Welcome everyone. Before we get started, I'd like to introduce you to a friend of mine. His name is Bob. Bob here is a red teamer, and he's currently on a red team engagement. He's targeting his client's customer data within Azure. To begin the engagement, he tried to phish several users and successfully obtained credentials for an Azure account. Using those credentials, he logged into Azure and saw this: a pretty vanilla-looking Azure portal, where he's logged in as a user literally named "generic user." For the next 20 or so minutes, we'll be following along with Bob as he traverses this cloud environment and describes exactly how attackers, or red teamers like him, navigate Azure using key vault lateral movement.

Just a short introduction: my name is Cristiano Biankeet, and I am a senior red team operator at Microsoft. I've held roles as a software engineer, a site reliability engineer, and an AppSec engineer. So if you're in any of those roles and are interested in pivoting to red teaming, please come see me later and we can have a chat. We'll be covering three main topics today: Entra applications and service principals, Azure Key Vault access controls, and finally, recommendations on how to prevent Bob and people like him from accessing your Azure environment.

So what are Entra applications? For those who don't know, they're basically just objects within Azure, and for our intents and purposes, these objects are mostly used for authentication and authorization. They actually serve as templates for creating service principals. Service principals are basically like local copies of an application: think of it as one global app, and in each local environment you'll have a service principal. To get a better understanding of what that looks like, here we have an HR tenant. A tenant is just an environment within Azure. In this environment you have an application called HR app. This application has a client ID, a unique object ID, and a tenant ID; that tenant ID corresponds to this environment. In order to actually use this application and interact with it in any way, we need a service principal. As you can see, the service principal has an app ID that corresponds to the HR app, but its object ID is different. For most things we'll be doing, you'll always be interacting with the service principal. Now, let's say you're an external company. You say, "Hey, this HR app has some useful functionality. We want it to be able to do things within our tenant." Easily enough, you can provision another service principal for that same application in your tenant.

Okay, cool. So let's pick back up with Bob and see how he's doing. He was able to log in as this generic user and is now doing some reconnaissance within Azure. The first thing he checks is the app registrations. Unfortunately, this account doesn't own any apps, but that's okay, because by default in Azure you can view all app registrations regardless. You can see that there is a "sample test app," and its supported account type is "my organization only." This means it can only exist within this tenant, unlike our HR app, which could exist in multiple tenants. There's also a second app called MCorp PPE. This one can exist in multiple organizations, like our HR app. All right, so we've learned that there are two main apps within this tenant. Great.

This brings us to our first method of accessing key vaults: legacy access policies. They look like this: you have a user, such as admin, who has various permissions within your tenant, permissions such as Get and List on keys, Delete on secrets, or Update on certificates. These are controlled at a very granular level, where each permission can be selected or deselected. Let's see if this is at all relevant for Bob. Now that he's taken a look at the app registrations, he'll start diving into what subscriptions he has access to. It seems there is a dev subscription he can view. Let's see if he has any roles on it. We can see that there are seven role assignments on this subscription; however, he does not have permission to read what they actually are. That's okay. Let's continue looking into the resource groups to see if we can view any of them. It seems there is a test resource group we can access, and within it, a "testing only" vault.

Okay, let's take what we just learned and apply it. Let's look at the access policies. We can see that our generic user has Get and List permissions on secrets. Keys and certificates are unfortunately unauthorized, but that's okay, because within the secrets there's one called "sample test app." That sounds awfully familiar; I think we saw an application with that name earlier. And just like that, we're able to copy and exfiltrate our first secret out of this environment. What can we do with that secret? Well, it turns out the first way of authenticating to an Entra application is via application secrets. So we can take that application secret and store it in a variable within PowerShell. On a real red team engagement, I do not suggest taking client secrets and just pasting them into PowerShell, but for our demo it's okay. Now that we have it, let's take a look at sample test app, since we saw it earlier. We can see that it has a client secret called "testing in dev." It is not expired, and it starts with 4KO. Well, our secret also starts with 4KO.
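As a minimal sketch, the flow of storing that secret and authenticating as the app with the Az PowerShell module looks roughly like this (the GUIDs and the secret value below are placeholders, not the real ones from the demo):

```powershell
# Sketch only: authenticate as a service principal using a stolen client secret.
# The GUIDs and secret value below are placeholders.
$appId    = '00000000-0000-0000-0000-000000000001'   # sample test app's application (client) ID
$tenantId = '00000000-0000-0000-0000-000000000002'   # tenant we are authenticating into
$secret   = ConvertTo-SecureString '4KO<rest-of-secret>' -AsPlainText -Force

# A PSCredential pairs the app ID (as the "username") with the client secret.
$cred = [pscredential]::new($appId, $secret)
Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant $tenantId
```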
So there's a high probability that this will let us authenticate as sample test app. Let's take the information we need, such as the app ID and the tenant we're trying to authenticate into, and use PowerShell to connect. For the rest of this demo, we'll be using PowerShell because it's commonplace; everyone can use it and follow along. We can run Connect-AzAccount, pass in that credential and our tenant, and just like that, we're now authenticated as our sample test app. A quick recap: we started off as a generic user, used a pre-existing access policy of Get and List on secrets to steal an app secret from a testing vault, and then used that to authenticate as the sample test app service principal.

This brings us to our second method of accessing key vaults: Azure role-based access control, or RBAC. Like access policies, this can be applied to any service principal or user. In this case, we see an admin user has an Owner role on a subscription, but Key Vault has its own specific roles, things such as Key Vault Administrator, Key Vault Certificate User, and so on. Key Vault Administrator is the most privileged role; it actually has 52 different permissions, which include deleting certificates, getting secrets, updating keys, etc. If there's one role to go after, this is it.

Okay, now that we understand a bit about how Azure RBAC works on key vaults, let's check back in with Bob. He now has access to sample test app, so he's taking a look around the environment to do some recon. If you're authenticating as a service principal, you can only do this via the command line; you will never be able to use the UI like we did previously. He's now checking what role assignments there are, and it looks like our sample test app has Reader on a vault. More interestingly, it has Key Vault Certificate User on a PPE vault, one we did not have access to before. There's also some other admin user.
Now, in a real tenant, there will be hundreds of role assignments, and you wouldn't really be able to parse all of that, so you can filter for the object ID of the service principal you care about. Okay, great. Since we have Certificate User on this vault, why don't we go ahead and try to list what secrets are in it. As we can see, there is a PPE app certificate in there. It is not expired, and it was recently created. And hey, there's a tag called "access to" with some GUID. Hm, maybe that'll come in handy. Just in case, we'll save that to a variable, as we might use it later.

Now we can dump the certificate out of the vault using Get-AzKeyVaultSecret. Taking a look at it, it's a base64-encoded string; looks like a PFX to me. So we can write it to our local file system as ppecert.pfx: we just decode that base64 string into bytes, and we'll be good to go. But what can you really do with a certificate? Well, it turns out the second method of authenticating to an Entra application is via public-key-based certificate auth. So let's try that. First, we can load it into memory just to make sure that it does in fact have a public key, or sorry, a private key. It does. Great, so we can use this. Next, let's look at its thumbprint. It starts with DC25. Since we still have access to generic user, we can go look at that PPE app registration and see that the same thumbprint, DC25, exists on this app, so we can use it to authenticate. We take the app ID and tenant ID, and similar to what we did with sample test app, we run a Connect-AzAccount command, this time passing in the cert path. And just like that, why don't we try accessing tenant number two.

It looks like that worked, and we've now successfully pivoted into some unknown tenant with a subscription called MCorp prod. Just recapping: our sample test app had Certificate User on a PPE key vault. That key vault had a certificate in it, and we used that certificate to authenticate as MCorp PPE. Note that we didn't authenticate in our existing tenant. We know there's a service principal there, because that's where the app is homed, but there isn't just one service principal; there are clearly multiple, in other tenants, and by using the certificate we were able to pivot into a completely new environment where we can do additional reconnaissance and see if we can continue moving laterally.

I like to call this next access control method "2.5" for Key Vault, because it is still Azure RBAC, but instead of the role being on the key vault directly, the role is on the subscription or resource group where the key vault exists. Let's see how that applies to Bob, doing recon as usual. We're checking what roles exist within this tenant for our app, and it looks like our PPE app is Owner of a prod resource group. Cool. Let's store our object ID, because that might come in handy later. We can then list what resources are within this resource group, and after running that on our prod resource group, we see there is one resource, called vault-prod. Okay, let's try to list all the secrets in there like we did previously. Uh-oh, a Forbidden error. Looks like we're not authorized to view the secrets in here; we don't have the appropriate permission, which would be read metadata. Well, that's okay, because this key vault exists in a resource group that we're Owner on, so we can just assign ourselves Key Vault Administrator, and that gives us all the access we need. We run New-AzRoleAssignment on the specific resource we're trying to access and provide our object ID. We are now Key Vault Administrator on the vault, and we can rerun the command and see all the secrets in there. There seem to be two of them: a prod app ID and a signing certificate.

Okay, a signing certificate. That sounds promising. What can a signing certificate be used for? Well, potentially federated identity credentials, which is our third and final method of authenticating to an Entra application. Please bear with me, as this is the most complex way of authenticating. In essence, if you look at this diagram, you'll see that there is an external workload; in this case, think of that as your service, or a user, or Bob here, who is trying to access Azure resources, which are all the way on the right. He simply talks to an external identity provider. This is something that the application trusts, and if the application trusts it, then it tells Entra: "Hey Entra, you should trust this identity provider. If I have a token from it, you should exchange it for one of your own tokens, because it proves that I am who I say I am." So Bob, or a service, can request a token from that external IdP and get a token back with all its claims, then take that and send it to Entra. Entra will say, "Okay, let me just double-check that. It's signed properly. Looks good to me. Let me send you back an Entra token." Then the service can use that Entra token to access things within Azure.

But wait a second: we have what looks to be a signing certificate. If we have that, then we can skip this whole first part. We don't need to request a token; we can just create our own, because we have the private key. So let's create our own token. First things first, we need to dump all the secrets we saw in here. We'll take the prod app ID, since that might be what we're targeting next. Now that we have that saved, we can see it's a GUID, and not an app ID we've seen so far. So, that's promising. Secondly, we'll dump our signing certificate and save it to our local file system, just like we did before.
Now that we've done that, we can go look at some information about that application. We can see it has the app ID we saw earlier, it was created recently, and it's called MCorp prod. We can also see that the federated identity credentials field is blank, there are no key credentials, and there are no password credentials. So how are we going to authenticate to this if all three are blank? Well, that's actually misleading. There's a separate command, Get-AzADAppFederatedCredential, and this is the source of truth: it will tell you whether there is a federated credential. Here we can see there is one, and that there is trust on MCorp IDP. So, MCorp has its own identity provider.

Now that we have all that information, we can create our token. First, we load our signing certificate, which we just saved to our local file system, into memory and create signing credentials with a SHA-256 hash. Then we create our claims: we need our issuer and our audience, which we saw earlier through reconnaissance; our subject, which is MCorp IDP; and lastly our x5t claim, which is just a base64 string of the cert hash. With all of that, we can create our token using everything we just talked about. Now, with that token, let's try to connect to Azure. We provide our tenant ID and our new prod app, which we just did some reconnaissance on, and just like that, we were able to get into the prod app.
Don't clap just yet, because we're at the final stretch. Bob wasn't really interested in the prod app, or the PPE app, or this test app, or this key vault or that key vault. He had an objective, and that objective was customer data. So let's see if Bob can get there. Like we've done before, some reconnaissance to see what roles we have access to. We are Owner of a resource group called "data." Would you look at that. All right, let's see what resources are within this group. We can see there is "MCorp secret data" within that group, and it is a storage account. So we can set our current context to that storage account so we can query it and see what's in it. Great, we're able to do that. Now we can look at what containers there are: "customer content." Okay, let's see if there's any customer content in there: data.txt. Let's take that data and save it to the local file system, just like a responsible hacker would do. Let's see what that content is: customer data that Bob shouldn't see, and some GUID. Looks like we did it, guys. [Applause]

So, just recapping what we and Bob just did. We started off with a generic user that was phished. That generic user had pre-existing access to a testing vault, where we stole an application secret. From there, we authenticated as the sample test app service principal. That service principal had a Key Vault Certificate User role on a PPE vault. That vault contained a certificate, which allowed us to pivot into the production tenant as the PPE service principal. That service principal was Owner of a prod resource group, and there was a key vault within that resource group. We didn't have access to it, but that's okay; we gave ourselves access as Key Vault Administrator. We stole the federated identity credential signing certificate, allowing us to get access to the prod app. And finally, we were Owner of the customer data.

So, what did we learn today? We learned that there are three different ways to authenticate to Entra applications: app secrets, public-key-based certificate authentication, and federated identity credentials. Also, there are two types of access control for Azure Key Vault: the legacy access policies, and Azure RBAC, both on the vault itself and on the resource group or subscription the vault is in.

So, how do we prevent this from happening? Firstly, you can use managed identities. This was a topic I unfortunately didn't have time to cover, but they allow for short-lived tokens instead of something like an app secret, which is long-lived and can be leaked in source code or in other ways. If you must use app secrets and credentials, we highly recommend you keep an up-to-date inventory of them and make sure that they do not cross security boundaries: do not have a prod secret in dev, for example. Also avoid cross-tenant pivots. You should not have some application called MCorp PPE that exists in your dev tenant but is, for some reason, Owner of a resource group in your prod tenant. That should never happen. Lastly, switch away from legacy access policies on key vaults. Use Azure RBAC: it is centralized, every other Azure resource uses RBAC, and using it for Key Vault as well makes things such as logging and detection much easier. Speaking of logging, enable logging on your key vaults. That makes audit logs available and easier to alert on. And if you are an enterprise, use Microsoft Defender for Cloud. It has great anomaly detections, for things like an anomalous IP accessing your key vaults, or anomalous secret Get and List operations, and so on. All of these would have caught Bob pretty easily, and he never would have gotten to see that awesome GUID. With that, here are some resources from Microsoft about how to enable monitoring. These will be available with the slides, which I think will be shared later. And that's it. Bob and I would just like to thank you for sitting with us through getting customer data.
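As a sketch, the logging recommendation might look like this in PowerShell, assuming a recent Az.Monitor module; the resource IDs are placeholders:

```powershell
# Sketch: route Key Vault audit events to a Log Analytics workspace so that
# secret Get/List operations show up in audit logs. IDs below are placeholders.
$vaultId     = '/subscriptions/<sub-id>/resourceGroups/prod/providers/Microsoft.KeyVault/vaults/vault-prod'
$workspaceId = '/subscriptions/<sub-id>/resourceGroups/prod/providers/Microsoft.OperationalInsights/workspaces/prod-logs'

$log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category 'AuditEvent'
New-AzDiagnosticSetting -Name 'kv-audit' -ResourceId $vaultId `
    -WorkspaceId $workspaceId -Log $log
```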
>> Yeah, I don't know if we have time for questions, but if anyone has any...
>> This question may be ancient and out of date, but how much, if any, of this would a tool like BloodHound be able to find?
>> Well, BloodHound, as far as I'm aware, doesn't work for Azure AD. That would be on an on-prem system.
>> It does now.
>> It does?
>> AzureHound.
>> Okay. Well, there you go.
>> Did you see this in the real world, or is it just a PoC?
>> It's a PoC based on things I've seen in real-world scenarios.
>> So how did you handle that? Did you have to inform the customer about it happening, or would the customer inform you that something was happening with their client's account?
>> For example, if I was doing a red team engagement and I saw some of these things, I would inform the customer, and they would make stricter access policies or RBAC, and so on, and lower the permissions that exist.
>> Was it Midnight Blizzard?
>> This was not Midnight Blizzard, no.
>> Another question I had: in order to authenticate, you mentioned we use the app ID, tenant ID, all those things. When we build an application, these things are often hardcoded, right? App ID, tenant ID, whatever. Then we call the Microsoft Entra API to get the credential, we pass all three or four of those things in, and we get the token. And then with the token, the application actually authenticates. So what is your take on hardcoding these things, like app ID and tenant ID, in the code?
>> Yeah. So firstly, doing reconnaissance isn't always the easiest thing to do. In our case, we had an application that could read the directory and see these things. To be fair, any generic user could see this information; a real user in Azure would see, you know, what the federated identity provider is and what the subject needs to be. But getting the signing certificate should not be trivial; that should really be an odd situation, because if you have a signing certificate, you win. You can just do whatever. Yeah. [Applause]