
Thanks, everyone, for coming. This is my first Charm. Looks like an awesome crowd; hope you're having a great morning kickoff. My name is Jenko Wong. As you can see from the title, I want to explore some new perspectives on defensive deception, or using deception as part of our defensive strategy. I've been involved with threat research for the past six years or so, and a lot of products before that, across a variety of security domains, so I want to share that perspective with everyone. We use a lot of analogies when it comes to defenses, and here's my handcrafted art putting them all in one slide, right? Whether it's canary tokens, razor blades, tripwires, landmines, we've got it. Honey of various sorts. I want to go through with you why we should even talk about this topic. Is it useful? Let's define it a little, go through some examples. And what I really want to do is stretch our thinking a bit. It's a bit of a research focus, and I think we need to do that, especially defensively, or else how are we going to move the security needle? That's part of my premise here: if we're not willing to think beyond the day's set of alerts and tasks, we're not really going to make progress. So here's my attempt to add my two bits to that discussion and hopefully stir some thinking on your side as well. There's no shortage of headlines, right? And this is part of why I think we need to entertain new ideas. To take a few examples: this happens to be Microsoft, but clearly many vendors and many end-user organizations are being attacked and losing. We've got Midnight Blizzard from '24. You don't have to go through all the details, but maybe you have. The things to pick out, I think, are that the attackers are getting stealthier and more advanced with their techniques. So if you're a Microsoft
customer, or familiar with or supporting a Microsoft environment, you probably know a bit about OAuth apps. They're everywhere: third-party app access, with users often the ones granting it. Imagine if 20 years ago we'd said users could punch holes in the firewall and grant an external app access to, say, a database. It'd be ridiculous. But that's the world we live in. And we see, in fact, that OAuth apps were not just used for initial access; there's privilege escalation in there too. Some legacy OAuth app existed. It gets complicated when you actually look underneath at what's going on. Let's keep picking on Microsoft. If you've got Microsoft people in the audience, sorry, it's not you personally, it's the rest of your organization. The first of many Storms, right? This one's the '23 one, Storm-0558, with one of our favorite threat actor countries, China. Again, this one's pretty complicated, but we're getting into forged tokens. They started with email; it's a misnomer to say this was business email compromise, but that's part of where it started. MSA tokens: it gets complicated really fast. Again, if you're in that world: consumer-side email crossing over to the business side. I'm pointing out that attacks are ever increasing. If you look at Microsoft's own analysis, and I will give kudos to Microsoft here, this was one of the better reveal-your-dirty-laundry analyses, you can see there's something like a 10-step chain. Pretty advanced, how the attackers crossed environments and took advantage of multiple mistakes and breakdowns. So yes, an advanced attack, but it also shows how hard it is to play defense, how hard it is to know how your environment is interconnected. Defense is hard. Now let's come forward to a month or so ago: another Storm, Microsoft again. This time they're advising people; they're not actually breached. Russian threat actors. This one's interesting. I'll point out that, again, if you know Microsoft, PRTs are like the golden über-token. Primary refresh tokens are hard to get, but if you get them, you can mint OAuth tokens. They're used in SSO and device
registration, in the trust established when you first connect a device to a Microsoft domain. If you can get one, and they did, the impact is huge: they were spoofing Microsoft apps like the authentication broker, and we don't even know what else is under the surface. We're waiting for the next headline, or reacting, going down checklists. So what can we do better? Well, look at the research community: at summer camp and other conferences there's a whole list of research, and I want to point that out too. Some of it you might know; if you're in AWS, you know about a lot of the good research that's been going on for a while. There's some Microsoft research here from the last couple of years. I highlighted Storm-2372, from a month ago, on the last slide. We knew the phishing part of that, device code phishing, almost five years ago; that was outlined by Nestori Syynimaa, the first entry. The bottom, most recent one, where they switched to grabbing the primary refresh token, was over a year old and known by the security community. Dirk-jan is an awesome researcher on the Microsoft side. All right, enough setup. You can find 50 of these examples. The point is, I think we need to assume breach. That term's been used before; I'm not the inventor of it. But seriously, assume breach
when you think about defenses, not just say it as an excuse: hey, we can't be perfect. It's: what do you do differently if you know every layer is breached? You still try to protect, still lock the door, but you assume people get through the door, or the window, or the back door. And what we shouldn't do, my premise here today, is keep doing the same stuff. We're all busy, but let's figure out whether there's something else worth doing. So my premise today is: can we start to use stealth against the attacker? Where can it help? I'm going to focus on post-breach detection. Now, I want to make a nod to MITRE, and there are a lot of other resources that have talked about deception, in case you're involved with MITRE Engage. That's out there, and I think there's some value in it from a planning-process perspective. They talk about denial and deception. I'm going to talk about it a little differently. I fit within deception, but notice the words they use: they talk a lot about how to mislead and confuse the adversary. There's value in that, but that's not what I'm talking about today. I'd rather be hidden. My analogy is that I want to be a tripwire. The attacker doesn't know it's there; they trip over something, and it helps me detect after they've breached my defenses. Detect after breach, right? I want to take that nine-month average, or whatever the enterprise average is, for figuring out you've been breached, and shrink it. How about a day? That'd be good. An hour would be great, but I'll take a day. I'll take a week. At least a fast reaction time may save me if I'm the organization. Lures within Engage, that's the area we're in. And actually, at this point I want to make a nod to Matt Masel, who just spoke about GCP. Awesome tooling there. If you haven't checked that out, you should, because between the tooling and some of the ML incorporation, there's a lot of practical stuff that's going to move the needle. I'm coming at it with a bit of theory, which I also think is important, but shout-out there. Lures itself, though, is to me a little bit active. I don't want to hang a honeypot in front of somebody; I don't want to be that obvious. I want an invisible tripwire. That's my little focus area today. This is some of the rest of Engage. I'm just pointing out that it's a good framework if you're thinking about how to apply deceptive defenses. MITRE has obviously done some successful things internally, but there are far fewer practical implementations being shared of how people are doing Engage. That's
the problem: the material is still high-level. So let's see if we can move to some examples. I'm going to start with an old-school story, because it's a Unix story, so you actually have to know shell. I know that's going out of fashion. This story came from a CTO about 20 years ago at a vulnerability management company. These were the Solaris days, before Linux ate everyone; we used to have Solaris and other flavors. The scene was: what do you do if you're protecting your inner-sanctum server, a ring-zero server? The attacker is going to get there one way or another, and it has all the secrets. So the premise of the story is: if you're a sysadmin, one thing you can do is change the ls command. On Unix or Linux, ls is a choke point, a very common path that any attacker is going to take; even if they come in through a buffer-overflow CVE, they're probably going after a root shell. So ls also becomes our tripwire. Everyone's going to run ls, including the good guys, but we'll get to the good guys in a minute. Let's replace it with shutdown now. It takes the server offline, but at least you saved the server; they probably didn't exfil, and you can probably do some simple monitoring. Worst case, do a ping and you'll know, at the same time as you take the remediation action. That's about the shortest shell script that would still be somewhat intelligible. The point of that funny story is: look, maybe we can think about this as part of deception. And I'll pick these terms; they've been used before, but they fit my narrative. A choke point, a common path: let's figure out, in the cloud, the common pathways that an attacker absolutely is going to cross. Then the tripwires: what's going to be triggered by an attacker and is less likely to be touched by a normal user? And then, what's the reaction? In military terms, what's the landmine, what's the grenade? You could say a landmine is also a tripwire: you step on it and the action happens. And at the same time, we have to worry about FPs, false positives; in colorful terms, don't kill the good guys. To finish that story: what does a real admin do on that inner-sanctum server? They use echo *, which is a poor man's ls: it lists the files, though you don't get all the functionality. And of course, the admin has to remember to do it. I'm sure everyone's got ten better ways to implement this, but it's a story, right? So, what do we do in the cloud? That's also the
field I'm looking at. I'm not looking at on-prem, full-stack OS, although the cloud obviously has compute. So I want to go back to the attack chain. Part of the premise is that we have to start there and design specifically for the attack chain, the TTPs. Everyone has a version of this, and there are more complicated ones, from Lockheed Martin's kill chain to MITRE. I'm just going to rejigger it: instead of having the nodes be the actions, I want the edges to be the actions and the nodes to be states. That makes me think about monitoring or detecting the actions of an attacker as they move from point to point. This helps me: the objectives are on the bottom, and more of the TTPs are on top in red, if that makes sense to you. Now I'm going to get into the details. I think there are a couple of areas we can rethink. Some of them have been thought about before, but let's rethink them deceptively and try to make them really practical. Canary tokens, honey tokens, identity-type tokens, resource tokens: those have been talked about, and there's a lot of good material people should look at. I'll talk a bit about them, but I'm not going to regurgitate it. I want to look at other tactics and techniques, like enumeration. Why? Post-breach, there's always some kind of discovery you do to get to the next step, especially in the cloud, where you're restricted to APIs. Old school, yes, there were a lot of ways to discover the IP network, move, and find your target. But what are you going to do to move laterally or privesc in the cloud? You're going to do a bit of discovery: things like, what permissions do I have, and permissions to what resources? Pretty obvious stuff, but we don't pay enough attention to discovery. I want to put some traps there. Of course, we also want to look at identity and access, at security principals. In AWS, we'd look at accounts, user accounts in that sense, including API keys, permanent keys.
We'll also look at roles, because those are super important in AWS. For Azure, we'd look at service principals right away; they're key to its identity and access. Resources: of course, there are a gazillion resources. We'll start with the common ones, which everyone picks up: S3, compute. But we have to be aware that there are a lot of targets for the attacker. So we should look at all compute, including clusters. We should look at images. We should look at interactive compute that's hidden from us, like cloud shells. All of those get utilized with tokens or access, and we need to cover them, because the attackers aren't saying, "Oh, I'll just take the most obvious stuff like S3." Of course they will, and then they'll look at every little way to get inside and escalate from there. And you could keep going: databases, analytics, and so on. There's a lot of stuff. If you want to try to capture access to resources, you want to look at it along a couple of dimensions; at least, that's what I'm proposing. The location of your detection is important. What do I mean? Take compute. You think about EC2 in AWS running internally; that's the obvious one. How about the cloud shell version of it? Still internal, fewer security controls, often given away for free. Users click in the browser and they get a shell, but it's an EC2 instance. It's got the CLI pre-installed, and it still caches tokens. There's a whole bunch of stuff there. Are you paying the same attention to it? Attackers are; defensively, are we? How about external versions? Your endpoints have admins running the CLI. Boom: credentials. Our perimeter sort of moves, which means our defenses should be more flexible. It's not just authentication to your cloud; that's the obvious part. It's: wait, my home was breached, and my users work from home. That's my perimeter now. And it's lazy for us to think, oh, we've got EDR there, antivirus, and a VPN. Of course those are all going to be breached too, but maybe you can detect things that far out, that early. Then there's access versus use. The simple version is: did you access this bucket? Did you try to access the EC2 compute? You could be more nuanced and look at usage: they not only accessed EC2 compute, or the top level of a bucket; they enumerated, they went further. Why have that level? Because you may need it to distinguish good from bad more reliably: signal fidelity. And rotation: if you put in traps of any kind, or tripwires, or whatever we call them, they have to be rotated, both to fool an attacker who may know some of your methods and to address malicious insiders, who are threats as well. So there are some issues to go
through. So let's take some examples. I'm going to use AWS for now, since it's pretty broadly common knowledge, but the point is that we have to be effective by designing specifically for our environment; there's no way around it. We can think abstractly with all these frameworks and all these metaphors, but if I can't give concrete examples in AWS, it's not really going to be effective. For GCP, Matt Masel had great examples; Azure, and so on. So let's talk about enumeration, a very common choke point. That's what I propose. One of the first things you do if you get access is check your policies: managed and inline policies. You'd probably enumerate them all; you don't want to leave anything on the table as an attacker. If you were to do that fully, there's actually a bunch of API calls you have to make, because AWS does not consolidate it into one top-level piece. I haven't done anything defensively yet, but even this might be a good detection, and SIEM vendors and others have gotten this; I'm not saying anything new here. The sequence of these calls to fully enumerate your policies is suspicious, because you usually don't do that as a normal user. You only check your permissions when you're blocked by something. You don't go in every day thinking, I wonder what I can do and see; what admin privileges do I have, let me check every single policy. You just don't do that. Now let's bring policies down to an example. Even if you're not in AWS, these things translate to similar roles in other environments. Here's what the attacker needs to understand: what API calls can I make, and what resources can I access? And by the way, there are additional things they may do if they see something pretty sensitive in their policy, sorry to flip back and forth, like sts:AssumeRole, which usually really means you can privilege-escalate. The attacker is going to follow that, but there's a little more involved in knowing whether you can actually do it. So they may actually enumerate roles, to confirm they're actually allowed to do it.
Right? So again, there are a bunch of things we could pick off, and we haven't even done anything differently, defensively. So let's talk about what we could do. Now we're getting into the meat of it, which is that the policy, I propose, could be a choke point, because almost assuredly an adversary is going to check it. But what could we place there that might trigger on something suspicious? A certain API permission: you could purposefully put in an sts:AssumeRole. You could also wildcard things: sts:* would look very inviting, and I want to invite that adversary to actually make an API call. I could also list resources; of course, that's the idea behind canary or honey resources like buckets. Very enticing. The adversary looks at this and thinks: oh, I can do these things. I can assume that Administrator role. I can also run some wildcarded read operations, especially on that finance bucket. And I could be very specific if I wanted to, and say: ah, maybe you can roll back the version on a policy. All in an effort to entice an adversary, in a hidden manner, to try those. So here's the choke point, and then there are the tripwires. There are what, dozens, hundreds of these we could probably brainstorm if we're in AWS. The point is that this is an example of a place I think we should really look at. Think about it from the red-team viewpoint: you're the adversary, and you actually know there might be deceptive methods in place. It's tough in AWS; there's no way around it. You don't brute-force a bucket; I mean, you can, with domain names and such, but usually, if the bucket's there, you're going to see what you can get. If sts:AssumeRole is there, even though Administrator is, you know, a custom role name, it sounds enticing: I've got privilege escalation.
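As a sketch, a decoy policy with those lures might look like the following. Every name here (the Administrator role, the acme-finance-reports bucket, the account ID) is a made-up example, and the statements illustrate the idea rather than a vetted policy; the actions themselves (sts:*, s3:Get*/List*, iam:SetDefaultPolicyVersion) are real IAM actions:

```python
import json

# A decoy identity policy in the spirit described above: each permission is
# a lure whose *use* you monitor in CloudTrail. All ARNs and names are
# fabricated examples.
DECOY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {   # inviting privilege-escalation path: trips when sts:AssumeRole is called
            "Sid": "LureAssumeRole",
            "Effect": "Allow",
            "Action": "sts:*",
            "Resource": "arn:aws:iam::123456789012:role/Administrator",
        },
        {   # enticing data target: trips on reads of the honey bucket
            "Sid": "LureFinanceBucket",
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": ["arn:aws:s3:::acme-finance-reports",
                         "arn:aws:s3:::acme-finance-reports/*"],
        },
        {   # very specific lure: rolling back a policy to an older version
            "Sid": "LurePolicyRollback",
            "Effect": "Allow",
            "Action": "iam:SetDefaultPolicyVersion",
            "Resource": "arn:aws:iam::123456789012:policy/*",
        },
    ],
}

print(json.dumps(DECOY_POLICY, indent=2))
```

The point of each Sid is that no legitimate workflow exercises it, so a matching CloudTrail event is high-signal.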
So let me switch to a different example: tools. I think we have to be specific to tools; again, not a new concept. We do have detections based on tools, and there's a lot of good open-source research on exploitation tools. So let's take Pacu, since we're in AWS. It has many modules, and one of them discovers roles by brute-forcing them, in a very clever way, in order to find cross-account access, which is part of how AWS uses roles. Their dictionary list is like this, just the first few entries shown. However they came up with it, they know various people have created administrator-named roles; names can be anything, and some are very specific. So why don't I go back, take some of those names, and plant them, just in case someone uses that tool? They'll get a hit, and maybe, through a different means, not by looking at the policy but by using a tool, they'll actually discover that role, try to come in and use it, and I'll be monitoring that role. It's a role that's not really used, and it may not even have permissions, so I can make it a low-impact kind of risk.
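A sketch of that idea: seed roles whose names sit in common brute-force wordlists, then alert on any AssumeRole attempt against them. The names below are typical wordlist-style entries, not copied from Pacu, and the CloudTrail field shape is simplified for illustration:

```python
# Decoy role names chosen to collide with brute-force dictionaries.
# Illustrative entries only; in practice you would seed names from the
# wordlists the tools you worry about actually ship.
DECOY_ROLE_NAMES = {"admin", "administrator", "Admin", "backup", "jenkins"}

def assume_role_tripwire(event):
    """True for an AssumeRole attempt against a decoy role.

    Failed attempts (AccessDenied) matter just as much as successes here,
    since a brute-forcing tool mostly generates failures.
    """
    if event.get("eventName") != "AssumeRole":
        return False
    arn = event.get("requestParameters", {}).get("roleArn", "")
    role_name = arn.rsplit("/", 1)[-1]  # role name is the last ARN segment
    return role_name in DECOY_ROLE_NAMES
```

Because the decoy role is unused and can carry no permissions, the blast radius of the lure itself stays near zero.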
Okay, let me move to a third example; again, stretch your brains for a moment, and let's keep moving. I don't want to gloss over and ignore canary and honey tokens. Here in the AWS world, I want to give a nod to the Netflix security team. Besides being huge on BSD, they're pretty huge on AWS, so when they do publish things, it's worth looking at. More recently, though still a few years old, they talked about their version of canary tokens. Here we're getting very specific to how AWS manages credentials and temporary credentials: you can use user accounts, or you can use roles and temporary session tokens, and they list some of the trade-offs. With user accounts you can have permanent API keys, but there's a limit on user accounts, so that limits how many canary tokens you can have. So they went with session tokens, which makes sense; they're worried about their EC2 compromises. Basically, they play some tricks: they have an EC2 instance manufacture some canary tokens, and, if you look at the details in this link, they then plant those canary tokens; they actually suggested going back to the CLI cache in AWS. So their version is: I can have an EC2 instance manufacture canary tokens, I'll use the ID of the EC2 instance that's manufacturing them, and then I'll spread them around, and of course I'll be monitoring for them. An attacker who comes across one of those, whether in the CLI cache or some other reasonable place, will find it very hard not to try that token. You've discovered a credential; you're going to try it, right? You can avoid it, but then the defender still wins.
They had a different version, to protect against EC2 compromise, and I want to point it out because it's a very different technique. It's a little specific to AWS, but all of the cloud providers have a metadata service that running services use to grab a token, usually at a local URL like 169.254.169.254; you just curl it and do a GET. So the normal flow is what's in the picture: some application running on the EC2 box, maybe a script, needs credentials. You have a role, you hit the metadata service, and magically you get back a token that works. What they did is man-in-the-middle it: they futzed with iptables and routed that 169.254 call to their own proxy service. The proxy service knows the IP address of the EC2 instance at boot time, and it does its own AssumeRole to grab a token. With AWS, one of the cool things you can do when you call AssumeRole, if you control that call, is inject a policy that restricts what the resulting token can be used for. It's under the caller's control, and because they're the man in the middle, they're controlling the call. So they get a token, and they used that feature to IP-bind it. If an attacker on that EC2 box did the normal thing, what they'd get back is a token that only works from that IP address; that's the injected policy. So they used this feature of AWS to constrain the token, but in a subtle, hidden way, because if you're on that box, you'd have to do some hunting to know it's in place. And that's usually not what attackers do; attackers go straight for the token. They know it, everyone knows it: you do a curl call to 169.254 and so on. The effect is that the token can't be taken off-box and used as a stolen credential; they'll detect it right away, and it won't work. I'm suggesting you could play with that in a couple of ways to detect token theft.
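A sketch of the injected session policy. `Policy` is a real parameter of `sts:AssumeRole`; the deny-everything-from-other-IPs shape below is my illustration of the IP-binding idea, not the exact policy from the write-up:

```python
import json

def ip_bound_session_policy(instance_ip):
    """Session policy the proxy injects when it calls AssumeRole on the
    instance's behalf: deny all actions unless the request comes from the
    instance's own IP. An exfiltrated copy of the token then fails, and
    those failures are what you alert on."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            # NotIpAddress + aws:SourceIp are standard IAM condition elements
            "Condition": {"NotIpAddress": {"aws:SourceIp": f"{instance_ip}/32"}},
        }],
    })

# With boto3 the proxy would pass this along roughly as:
#   sts.assume_role(RoleArn=..., RoleSessionName=...,
#                   Policy=ip_bound_session_policy(ip))
```

A session policy can only ever narrow the role's permissions, which is what makes this safe to inject silently.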
It's a very specific play on the AssumeRole functionality in AWS. Let me move to another example. I'm flying through this, guys, I know, but I want to get through some examples and stretch your brain in a couple of different directions. So, I talked a bit about having to worry about cached credentials. Your perimeter is not a perfect moat, so you have to look at where the CLI is used, among other places, mainly because, for whatever reason, the CLI tools of all the cloud providers still store cached credentials in plain text. All these CLIs terminate by default; they can't even keep a session while you're working with them interactively, meaning that when they terminate, they still want to cache your session so that the next command you run doesn't have to authenticate. This big example is GCP; it's a past example I'd written about. But clearly, the directories in the top right, everyone knows about them: you jump in there and you find session keys, permanent keys, OAuth tokens, whatever is used. And where do these environments exist? On EC2 instances, in cloud shells running on EC2 instances, on the endpoints of admins running the CLI. You have to think it out to make sure you didn't miss any of those. Those are attacker points, but they're also opportunities to be better about defense: if you're going to go the honey token route, hit all these places. Is it hard? Of course; it's going to take a little bit of work. But the good thing is that your config is cached and will survive an upgrade. You just have to get on the box, and maybe refresh it now and then, to be fair. Your config persists, and even if people upgrade the CLI, you're okay. With cloud shells, sometimes those disks persist; you have to play that game, but guess what: attackers know they can still get good stuff even with ephemeral cloud shell disks. EC2s are the big target. So part of it is: let's push back against the attackers by placing deceptive things in these spots, because attackers will check them.
Just like SSH keys get checked if you're on an endpoint: if an attacker finds a credential, it will be tested. Maybe not immediately, it depends, but it will be tested. That's the kind of thinking I think we need to do. Now let's take resource access to an EC2 instance, which is a little different; let's move off credentials and on to resources. There are a couple of ways you could look at buckets, data services, and compute; I just want us to think differently again. So, how do you get onto an EC2 instance? A lot of ways, unfortunately; it's old school. There could be a CVE, because it's running a web server or has some port listening. It could be a management interface. It could be because there's still a ton of EC2 instances with port 22 or RDP open to the public. But we also know the recommended best practice, which some customers follow, is to use something like Session Manager; all of the cloud providers have a different, better way to use your IAM identity. I want to cover that path too, because if an adversary saw in their policy access to an EC2 instance via Session Manager, which is a way to do SSH, I think they're going to try it. They're thinking: ah, I'm inside the environment. Oh, they didn't protect that; they thought once you're authenticated, you're safe. I'm going to go at it. For this one, you could run a full EC2 instance and play a high-interaction kind of game, but you don't have to; if it's not running, the attacker still gets an error that you can look for. No one should be connecting to that EC2 instance through a Session Manager StartSession. Would an admin, a good user, get confused? Maybe. But I'd say your chances are pretty good that you could train that out, because it was never in their job role. Why are you mucking around in that policy? We told you there's some deceptive stuff; if you don't know what it is, don't touch it. And if you did, we'll slap your hand. You're an AWS admin; that's part of the job. You're not a general user.
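A sketch of the tripwire on that path: a decoy instance nobody should ever open a session to, plus a check for `StartSession` calls against it. The instance ID is made up, and the event shape is simplified from typical CloudTrail records:

```python
# A decoy instance reachable only via Session Manager. Nobody has a
# legitimate reason to open a session to it, so a single StartSession call,
# even one that fails because the instance is stopped, is high-signal.
# The instance id is a fabricated example.
DECOY_INSTANCE_IDS = {"i-0decoy1234567890a"}

def decoy_session_attempt(event):
    """True when a CloudTrail event is a Session Manager StartSession
    aimed at one of our decoy instances."""
    if event.get("eventSource") != "ssm.amazonaws.com":
        return False
    if event.get("eventName") != "StartSession":
        return False
    target = event.get("requestParameters", {}).get("target", "")
    return target in DECOY_INSTANCE_IDS
```

Keeping the decoy stopped makes this a low-cost, low-interaction trap; spinning it up turns the same trigger into a high-interaction game.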
So there are a lot of practical issues, but theoretically I want to look at all the avenues: if you're getting into access, look at all of that. Let's talk about common resources like S3, though this has been done a lot. The only point I want to make is that, when it comes to false positives, this could be pretty noisy. Listing buckets, enumerating buckets, or even accessing objects in a bucket: that's hard to call on its own. So here's where I think we have to be more involved. Not everything is going to be one signal, one field in CloudTrail or the log, and wow, we detected you. It's not that easy, or else we wouldn't be here trying to get better. Maybe we have multiple levels, which are multiple signals. It's enumeration of all the buckets in the policy we planted. It's also going into one of the attractively named buckets and iterating down it, where we've put additional honey resources, files and paths that are attractive, to see if they go through successive rounds of enumeration and discovery. Three chances, say; then I have higher confidence. What's the analogy in the real world? In the movies, where the bad guys are in the back of the casino, there's always a hallway to that back area. Instead of locking the door, we keep it open; it's an open hallway, with just a sign that says employees only, and they ignore it. At the turn of the hallway, there's a "really, this is restricted" sign. And at the third stage, there's a sleeping bodyguard. So an attacker could actually walk right by; we're leaving it open, but we detect at every stage, and we know this is very unlikely to be a good guy. The question is how you implement that in the cloud. It's sort of like old school, where you have a message of the day on a sensitive server: hey, this is restricted and confidential; if you don't have access, you should immediately log off and go tell someone. But by then you obviously already have access. I want three of those signposts. We can figure it out: you can put one at the top level, and there's a little bit of leeway. You might iterate a bucket differently; you could try to grab the whole thing, but we'll catch that differently, with volume detections. I'm trying to say we can be more than one trigger gets us the answer. We have to be; it's never that easy. So resource access forces us to think about it. It could be just a simple rule in a SIEM saying this, this, and this. If we were really sophisticated, with ML running, this could be a feature, an indicator feeding into, we don't know what, but it's suspicious: this person's really going deep into our finance bucket.
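The three-signpost idea can be sketched as a simple confidence ladder. The stage names and alert threshold are illustrative labels; in practice each stage would be matched from CloudTrail or S3 server access log events:

```python
# Escalating honey-bucket signposts: each one alone is weak evidence,
# together they are damning. Stage names are illustrative labels for the
# detections you would actually build from your logs.
SIGNPOSTS = (
    "listed_planted_buckets",   # the hallway: enumerated the decoy buckets
    "entered_honey_bucket",     # ignored "employees only": listed inside one
    "read_honey_object",        # woke the bodyguard: fetched a planted file
)

def confidence(observed):
    """How many signposts this principal has crossed, 0 through 3."""
    return sum(1 for s in SIGNPOSTS if s in observed)

def should_page_soc(observed, threshold=3):
    # one trigger is a hint; all three is very unlikely to be a good guy
    return confidence(observed) >= threshold
```

The same score could just as easily feed an ML model as a feature instead of paging directly.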
Right. And then, combined with some other indicators, we flash it in the SOC. This is the thinking I believe we have to do, even when we're talking about deceptive defenses and hoping they can do better. If we don't, I don't think we make progress. Let's continue a bit on how to reduce false positives. Even though I'm talking about deceptive defenses and identifying the bad guys, if we don't also talk about not killing the good guys, we're missing the practical half of the picture. That's why I keep using that phrase, so we remember it: let's not kill the good guys, let's not fire off false positives. So let's take something sensitive in AWS. I know I'm playing deep into AWS, but if we were talking Azure, I'd say let's look at service principals, escalated privileges, and all the ways people get access to service principals. If we were talking GCP, I'd say let's talk about service accounts and all the ways people get to them, plus any impersonation features, which you find in other SaaS apps too. Well, in AWS it's AssumeRole. So how can we separate the good guys? We've said maybe we can put out fake roles and other lures for the bad guys, but we have to get specific with AssumeRole. You see this with cross-account access: there's usually a secret external ID set up in the trust relationship. It doesn't solve everything, but you shouldn't have zero of it, or else we're talking about GitHub Actions abuse and all the stuff we read about every other week. You're supposed to require the AssumeRole call to pass in, effectively, a tag. Can we use that defensively? Yes, I think we can. That's just how you might do it on the command line. The way I think you can is with an optional tag. Basically, I don't want to block them; I'm deceptively trying to detect people. I want to actually let them through the door and just see whether they do something outright suspicious. If I have a policy like this, the net of it is that I'll get different results in the CloudTrail log depending on whether they set that environment tag equal to prod: whether they include it with the right value, or omit it. Valid users, yes, either have to be trained or their workflow and tooling should include it automatically. I get it, that still leaves room for false positives. But we have to look at it: if they do it right, they pass through. Everyone else who tries this, for whatever reason, gets to this point and says, "Ah, I can assume the role." They'll get through. They think they're through, but instead we're monitoring CloudTrail for that tag. It's wrong or it's missing, and we fire off the alert. Could it be a mistake? Sure, it could be a false positive. And we're still calling that AWS admin and saying, "What's the problem? We trained you, just like we do every year on phishing, yet you didn't do it. You're supposed to be technical." It's still worthwhile, right? And then you figure out efficacy, of course; practical matters. We have to see what the FP rate is. But my premise goes back to this: if we're not thinking about this, how do you validate valid actions?
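As a sketch of the CloudTrail side of that tag trip wire: after letting the AssumeRole call through, we check whether the expected session tag came along. The tag key and value (`environment=prod`) and the location of session tags in the event (`requestParameters.tags`) are assumptions you'd verify against your own CloudTrail records before relying on this.

```python
# Trip-wire check: AssumeRole succeeded, but the expected (optional)
# session tag is missing or wrong -- likely not one of our trained users.
EXPECTED_TAG_KEY = "environment"   # assumed tag name
EXPECTED_TAG_VALUE = "prod"        # assumed tag value

def is_suspicious_assume_role(event: dict) -> bool:
    """Return True if a CloudTrail AssumeRole event should raise an alert."""
    if event.get("eventName") != "AssumeRole":
        return False
    params = event.get("requestParameters") or {}
    # Session tags assumed to appear as a list of {"key": ..., "value": ...}.
    tags = {t.get("key"): t.get("value") for t in params.get("tags", [])}
    return tags.get(EXPECTED_TAG_KEY) != EXPECTED_TAG_VALUE

# A trained insider includes the tag; an intruder typically omits it.
good = {"eventName": "AssumeRole",
        "requestParameters": {
            "roleArn": "arn:aws:iam::111111111111:role/finance-audit",
            "tags": [{"key": "environment", "value": "prod"}]}}
bad = {"eventName": "AssumeRole",
       "requestParameters": {
           "roleArn": "arn:aws:iam::111111111111:role/finance-audit"}}
```

The matching role trust policy would leave the tag optional rather than required, so nothing is ever blocked; the detection lives purely on the log side, which is the deceptive part.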
You can't just do one half of it, right? That's a different talk, but it's worth pointing out. So, in the end, I want to talk a little about choke points and trip wires, at least in this AWS-oriented talk, and then a bit of other things. This is not complete, and I'll get to how we can feel good that we're covering something beyond a few cherry-picked examples. But if I were to summarize: we're going to look at access, for sure. That's resource access, but also identity and IAM, object access, accounts, and roles. Those are the big ones. I'd say resources take most of our thinking, but there's already a lot of canary-token work and other things out there, so we're not behind; we just have to keep those as priorities one and two. Then what I want to add is other parts of the TTP attack chain, especially discovery. In cloud environments there are API calls and sequences that are unlikely to be done every day by a valid user: enumeration of policies, enumeration of resources. Look at those, because if that holds water, then even if you're breached and someone gets authenticated and escalates privilege, you still have a chance. Even after privilege escalation, people still enumerate and discover to see what else they can do with their new privileges. After an assumed role, even one with administrator access, you're going to ask, "What else can I see now?" So that's my premise; challenge it, I'm not saying you have to accept it. That's what's behind the choke-point thinking: where are they likely to go, API-wise, reflecting their attack chain? The trip wires take some thought, and I gave some examples. Figuring out what in those choke points would be enticing enough for an attacker to trigger, but not a valid user, takes some work. I'm not guaranteeing zero false positives; I'm just saying that if we don't think about this, we will definitely not make progress. In terms of the actions, I
didn't spend much time on actions; those are more straightforward. Once you know there's a problem and you have confidence, you can follow your remediation playbook. I will point out that, yes, ideally it's: I know there's a bad guy with a compromised account or credential, let's revoke access. And maybe we're there. It could be more subtle, though. You could raise an alert for investigation: we're not sure, but it's worth looking at. We might increase our visibility in some way dynamically, turn on extra logging, especially if it's an admin. Okay, not sure; maybe they forgot the password, or the tag. Well, I have a funny story, an old-school one, I'll tell it quickly. It's the one with the internal secure FTP server, and it got back to my boss's boss. One of my guys, I'm in professional services, is trying to deliver a file to a customer. We have a secure FTP server, and I have access to it because I'm up the chain. My guy comes to me and says, "Hey, what do I do?" I say, "Well, just use my access." So that's violation one. He does what we always used to do: you start changing file permissions, because the server is restricted so customers can't enumerate the files. He chmods the file, makes it world-readable, but still no one can enumerate the directory, so you'd have to know the file name. Me, I figure it's a fair trade-off. But it's my guy using a shared account, plus he's futzing with a production server, plus there's multiple customers' data on it. Meanwhile, what I don't know is that our IT guy gets alerted as soon as someone logs in, and he's watching every command. So he comes back with, "What the hell are you doing? Why'd you do this? Why'd you chmod that?" I'm like, "Oh, there's a story." And I had to go through all of it. The point is, sometimes increased logging and visibility is the right action, and we do have to think it through appropriately. In some gray areas, for certain select users, maybe admin users or sensitive
resources, maybe that helps, and that would be the right action. It's not a grenade; it's just heightened visibility. "Adjust risk" is really part of our model: we have to track risk as more than a good/bad binary, at least as a spectrum, because everything is grayscale. That's hard to do if you've just got rules firing in the SIEM, because a rule is always a boolean expression, and boom. If you have ML, it gives you a little more leeway, but I'll suggest you can still use thresholds, even if they're just counts: one suspicious thing, two suspicious things, and after three we start taking more action. So there are things within our reach, even for organizations that aren't running full-blown ML models with probabilistic distributions. It's the correlation part. Maybe we see a failed login, what looks like a brute-force password spray. It's not enough to block or do anything, I don't even want to spend time on it, but I record a little tick against the source IP address. Later, post-breach, we see some other activity from the same IP address: another tick. Third, an admin action originates from it, and now we have enough to really do something. I'm just suggesting counts aren't beyond us in whatever tooling we have. And on the action side, it's worth thinking about not killing the good guys. I talked about some of that, and it's important: we all know that if we have too many FPs, forget it, none of this moves forward. We're wasting time and energy, and somewhere up the management chain someone is going to ask hard questions. So where does this bring us? The TTP attack-chain orientation has to be there. Look, there's MITRE ATT&CK, and there are other frameworks. I think they're useful to an extent, and I don't want to say they're bad; it's just that for this purpose, they're not enough. To me, this is a good starting point, but I
listed the cloud IaaS matrix specifically because that's what I'm talking about. If you look at the sub-technique level, which is the lowest you can get, it's generalized. Why? Because there isn't one for AWS versus GCP; it abstracts up. That gives us nice categorization and a common language, which is useful: I can talk to you about an enumeration technique. But then if you're an AWS customer and you're an Azure customer, you still have to ask, well, what does it look like in AWS or in Azure? And you'll say, well, here it's a little different, it's this API call; no, in Azure it's hard to prevent enumeration of the directory, darn that MS Graph, and so on. You're going to have different conversations. And if you're trying to implement defenses, you'll end up saying, well, that's fine, you implement for you and I'll implement for me. That's not helping. So I'm suggesting that to do practical things, I have to dive below this level into the environment stack, and there's no way around it. The framework saves some time and gives us patterns to track, but in the end, if we have tools, it's because someone did the work of thinking through what makes sense for this environment, down to: is it a policy? Is it this API call? And so on. We have to do that work. Guess what? The attackers are doing that work. So what makes us think we have a shortcut? An attacker says, I can impersonate in AWS, we all know that, we've done it for years; what's the equivalent in Microsoft? And they do the work, or someone's already built a tool that does it. If we don't accept that, I don't think we move forward. So what are some key elements at a higher level, on top of the examples I just talked through? Design has to be specific: to the cloud provider, to the tools, to the TTPs, if we're going to make this practical. I think we have to focus on other parts of the attack chain besides initial access and privilege escalation; those rightfully get attention, but there's also enumeration, lateral movement, and data exfil, and they need the same thought process from a detection viewpoint. Access, of course, is key; we're not going to ignore that. At a bigger level, we want to look at the common attack vectors. Compute will always be there. Buckets will always be there. But compromised credentials are so common, and where do those credentials live? Those are the places we think about pushing defenses out to, going along with the perimeter. The integration point is that we have to work together across our silos. Attackers don't care if they hit the
help desk, move to a corporate user, then back to an admin, and phish from there. They don't care; they traverse the organization. They just have to win once; unfortunately, we have to be perfect. So whatever the IAM admin teams pick up, even if they're not in operations, has to be used and communicated to operations. We have to cut across the silos, and the complementary work of identifying the good has to be done, not just identifying the bad, or else we'll still have too much noise. So let's get into practical considerations. I think I've got about three minutes, but this is actually the important stuff, right? The preceding material was the theory and the research; maybe you think it could work. The question is, if the cost is too high and the benefit too low, it's not going to work. We always have the ignore/buy/build decision. Ignore is: "Good talk. Nope. A million reasons. Not a priority. Resources. It's difficult. This is a headache. This makes me sad about my job." That's me, right? Buy is: is there an existing solution? Of course we should look at that first, and your choices are out there. There are open-source tools; again, I'll point back to the earlier talk, because Matt's work is part of an open-source effort. You should be looking at all of those. There are commercial offerings; I would look at those too. Commercial offerings tend to be specific to what the vendor is selling. The startups innovate, which you do need here, I believe. The big companies have the presence to actually implement it, they have the platform and they're on everything, endpoint, middle, and server side, but to be frank, they're also the dumbest: they think buying a bunch of things and sending it all into the SIEM is integration. The pieces don't work together. No easy answer. Build is if you're large enough to have staff who can even think about deception: yes, I'm going to spend time researching it, then
filters. That's the best time to deploy them, and the admins can manage them, but the cost can be high. You don't want two or three extra things to remember every time you provision a user; that doesn't help. So, hook into your admin flow. With CLI and scripting, you can; if you're using a tool, that's a harder choice. But the good thing, and I've got 32 seconds, which really means 32 seconds over, is that you can run some of this out of band. You could decide: once a week I'll deal with this, I'll provision things, and next week I'll check on them, like an audit, because I can check the logs at any time. Yes, you'll be slow to react, but it's better than zero. It's how we used to do vulnerability scanning with Nessus on a laptop back in the day: we didn't do continuous anything, and we were happy to see how badly patched we were once a week. We don't have to boil the ocean; we can start small. The problem is that this could introduce more risk, more attack surface. Any tooling can be attacked. Did you make a mistake configuring something, so that you actually left an opening? What about malicious insiders, like admins on your team who leave the company: do they know the lures? So rotation has to be part of the functionality; that is, I have a fake bucket today and tomorrow it's a different fake bucket, so good luck knowing which is the fake bucket of the day. Coverage and confidence are super important, because in today's world we have too many lists that we blindly follow. Look, we have to do stuff, and if it's a good list, it's a good list; but if you blindly follow it, you don't actually know it's a good list. That's everything: there's a breach, detect this IOC, which is what, an IP address, a hash? Is that good for that breach tomorrow, always? No. If we're going to move beyond "is this really helping me," I think we need a bit of the research side, and this is on the researchers: what is the classification of your attack landscape, whether you use MITRE or attack chains? We need some confidence that, okay, there may be 52,000 things to worry about, but they classify into, say, 36 techniques in Azure. No, we don't have 52,000 defenses, but at least you know what you've got, whether that's 1% or 0.1% coverage, and you can make a plan out of that. You have to know the universe. But again, there's some good news in that. It sounds like, oh god, we don't even know what the
target is. The good news is, if there's value, maybe you can roll it out incrementally. Even if you only cover 0.1%, at least you have a thousandth done, and you're a thousandth better than yesterday. So part of this ties together: how do you get confidence? Do you know the coverage of everything you're trying to do, when the target is moving? We do not know all the attacks, but if we abstract to the attack chain, we can still have some confidence. What I mean, to use an AWS example: Nick Frichette does a lot of great research on enumeration attacks, and a lot of them don't get logged in CloudTrail. Okay, everyone depends on logs; that means we're blind. How are we going to stop that? You can't. And even if those holes are plugged, you think Nick isn't going to find something this coming year? Of course he is. He's too good; he's been doing it for two years straight. But if our deceptive defenses don't rely on stopping that one technique, then the next step, the enumeration step, will still be taken, and we can still have confidence in our coverage. Theoretically, we need to get there; we have to think ahead. I can't just give you a list, even with a tool, and have you feel good about it. That would be wrong. You need to know where the target is. Is there
even an end to this? Which brings me to testing and efficacy. We have to do better, and more creatively. What I mean is, look, you can run some of this on historical logs; that gives you a start. Would this have picked anything up? Let me look. Are there false positives? You don't have to wait for the future; you can look at the past and actually get some comfort. More importantly, how do you test for something that may not happen? We deal with this with vendors who try to sell you something magical: "You can detect an APT." "Well, I'm not sure an APT is going to happen." And they'll probably say, "Good, you're safe, but you still need us." No. The answer, in my mind, is that if you're big enough to do a red-team exercise of any kind, work this into it, but change the scope to include more persistence and lateral movement; the objectives take some thinking. It can be done with bug bounties, even with internally run CTFs. That's what I aim to do with the Cloud Village CTF at DEF CON, which I volunteer for: I'm thinking of incorporating challenges where it's not just capture the flag, it's do it without triggering the SIEM. Work that into your red-team exercise, where you have an uncontrolled population. The red team could be an external bug bounty. Put in your deceptive defenses and make sure they know they'll be rewarded: you get double the bounty if you're not detected after initial access. Look, it's a start. Think about it, because it might give you some confidence that you have coverage. You may not know everything you need to cover, but against an untethered red team: ah, we detected that they did enumeration, got them; they went for a fake this or a fake that, a canary. Work it into measuring off the red team. If you're spending money on pentesting, think about incorporating defensive measurements of something new, like deceptive defenses. Even to the point of: please plant a back door, and if you're not discovered for seven days and can show communication, you get more money, because that tests whether you can detect persistence. So, thanks for sitting through that. I appreciate it. [Applause] I know there's lunch. You can catch me after, or tonight, if there are any questions. And thanks.