
So, I realize the practice reference might be a miss in this room, but hopefully I'll explain it in a bit. I know there's a lot of talk about red team, blue team, purple team, especially at conferences like this, where we love to talk about the red team, the offensive side, the hacking. I'm definitely biased: I love the blue team aspect of things. And while, yes, it's really important to have a very skilled red team, I think what's often missed is that we do a lot of those engagements for fun and not necessarily for the business. What we really need to start thinking about is how to get into that business mindset. How do we start applying the results of those tests, not just as "hey, look what I can do," but as "how can I defend my business?" At the end of the day, we're collecting paychecks for a reason, and if we can't defend our business, if we can't justify our existence, we're not going to be around for much longer.
So, I had a little bit of an intro already, but to go a little deeper into my background, and maybe offer some career advice in general: when I started in college, I was actually accepted as a mechanical engineering student. I had zero cyber background. None. I was fortunate, though, to get a lot of hands-on experience. When everyone talks about home labs, about getting your hands dirty, I cannot stress that enough. If you don't know something, go figure it out. Don't wait for someone to teach it to you.

I was really fortunate to have a great education, and during that program they forced us to do internships, to get out there and work full time for organizations. I worked in energy and I worked in academia, and I experienced two complete ends of the employment spectrum. Energy: red tape, bureaucratic, couldn't get anything done. Academia: wild west. If we were doing vulnerability management, we could run exploits, take down servers at other colleges, and then basically extort them into patching. Ultimately, when I graduated, I landed with ReliaQuest and found a perfect blend: a little bit of that fun, wild-west side, but inside a corporate environment with a purpose, a goal, and a mission.

Over the years I've probably held as many different roles as I have years at the company. I started off as an analyst, went through security engineering, doing patching and upgrades to the SIEM, and ultimately found a love for the threat detection and detection engineering piece, along with some threat intel and detection research. As I grew up through leadership, that led me to ask: how are we going to do that for hundreds of organizations? As you've probably seen in our talks, detection engineering is tough. It is tougher when you're trying to do it for hundreds of organizations across every industry and every size. So we realized we needed something more scalable. We can't do this manually with humans. We needed to build what we now call detection-as-code into our technology and our process so that we can scale and ultimately get better for our customers.

So where does that leave me today? I'm a big fan of the blue team aspect. Don't get me wrong, red team has its purpose, but I'm a leader and advocate of all things blue team. How can we get better? How can we document? How can we improve our detection and our containment and response capabilities? And more importantly, how do we shift left and start preventing some of this activity?
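To make the detection-as-code idea concrete, here's a minimal sketch: detection logic lives in version control as ordinary code, with test fixtures that run before a rule ever ships. The rule, field names, and event shapes below are hypothetical illustrations, not ReliaQuest's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Detection:
    """A detection rule tracked in version control like any other code."""
    rule_id: str
    title: str
    predicate: Callable[[Dict], bool]  # runs over a normalized event dict
    mitre_techniques: List[str] = field(default_factory=list)

# Hypothetical rule: flag Base64-encoded PowerShell command lines.
encoded_powershell = Detection(
    rule_id="DET-0042",
    title="Encoded PowerShell command line",
    predicate=lambda e: e.get("process") == "powershell.exe"
                        and "-enc" in e.get("cmdline", "").lower(),
    mitre_techniques=["T1059.001"],
)

def run_detection(rule: Detection, events: List[Dict]) -> List[Dict]:
    """Apply one rule to a batch of events; in CI this runs against fixtures."""
    return [e for e in events if rule.predicate(e)]

# Test fixtures: one true positive, one benign event.
fixtures = [
    {"process": "powershell.exe", "cmdline": "powershell -EncodedCommand aQBlAHgA"},
    {"process": "powershell.exe", "cmdline": "powershell Get-Process"},
]
hits = run_detection(encoded_powershell, fixtures)
assert [h["cmdline"] for h in hits] == ["powershell -EncodedCommand aQBlAHgA"]
```

Because the rule is code, changes go through review, tests gate deployment, and the same rule can be rolled out to many environments programmatically, which is the scaling property I'm describing.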
So, I'm really going to try to cover three things; three takeaways I want you to have today. First, why is testing important? Like I said, it's not just about hacking a box and proving something works or doesn't work, but the bigger picture of why we actually do this. Second, what you should do if you're on the blue team side: how you should approach these red team engagements, what you should be asking for, what you should be looking for, and what you yourself should be doing. And lastly, I think stories and trends are important, so I'll try to weave as many of those through the presentation as I can, and maybe give a few nuggets: if you're working on a blue team today, take these back to your organization and check these five or ten things real quick.

Who in here is on the red team? We got one. Who here is on the blue team? Love that. For the blue team folks, all currently employed, or students? Employed. Okay. And what industries? Just shout some out. Finance. Education. Academia. Finance. Network engineering. Aviation. Love it. You'll see a lot of those differences, and hopefully I'll tailor some of this to those different organization sizes.

All right, so where did the name of the talk come from? Who here knows who Allen Iverson is? All right, we got a few. For those that don't know, Allen Iverson is regarded as one of the better basketball players of all time; I believe he's a Hall of Famer, if not already. But he was once captured in a press conference coming off a bad game. Reporters were asking him about missing practices, not showing up to team events, really just underperforming. He had just come off his best year ever: he had a scoring title, and his team, the Philadelphia 76ers, had basically made a championship run. They were a legendary team. To see the stark contrast of going from one of the best to poor performance, missing the playoffs, doing horribly, was a shocker. So they started asking him: do you think if you had shown up to practice, you would have had a different outcome? And he went on a pretty legendary rant for about two minutes. "We're not talking about a game. We're talking about practice. Why is practice important? We're talking about practice." I think he ended up saying "practice" something like 24 or 25 times in a one- or two-minute span. And the whole point is that practice is important. Anyone who's ever played a team sport knows you can't just show up to the game, and you can't just show up to the incident. Practice is how you gel as a team; it's how you operate. A lot of his bad season, I think, you could attribute to that. So yes, we are talking about practice. Practice is very important.
So if we talk about practice, what's the game? The game for us is the real incidents. And the most important thing I'd throw out there: our research from last year, 2024, suggests that from initial access to lateral movement you have about 48 minutes as an organization. That is incredibly fast. Ironically, it's also how many minutes are in a basketball game; I just found that out yesterday, so it plays in pretty well. But I swear that number is real. That is the average time you have to respond, and by the time the attacker has moved within the organization, if you're starting at minute 48, you're going to have an incredibly tough time responding. So you have to be prepared to operate under those criteria.

We know it's important, so we have to act. How do we emulate some of that to see if we can withstand that 48-minute window? When we're thinking about these engagements, and a lot of different people have talked about this today, it's really important to judge what style of test fits and what you're testing in the environment: where you think you have gaps, what the scope is, how deep you want to go. That matters because all of these things cost money. If you're on the blue team side, no one is just generously handing you a red team report, a purple team report, or a pentest. You have to figure out how much you're willing to pay to get a certain benefit. That's the ugly truth of the business side: it's not all fun and rainbows, and you can't bring a team in for a month of full testing all the time.

Historically, the pentest is where this all started, before "red team" or "purple team" as terms even existed. A lot of times it was simply people coming in, throwing a Nessus scan against the environment, reading the report, validating a few findings to make sure the scan was accurate, and handing that report over. And if you're anything like me and the customers I've seen, they'd come do that report every year, you'd check the box saying "look, I did my pentest," and you'd never improve on any of those findings. Then we started seeing red teams and purple teams. Red teams come in much more adversarially: they're not just running a scan and being as noisy as possible, they come in with a plan, an adversary they're emulating or techniques and targets they're after. It's much more hands-on-keyboard, and hopefully a little quieter in the environment. And then there's the blend, the purple team approach we like to talk about: yes, you have a red team in there, but when our customers talk about purple team it's much more tit for tat. Run a procedure, we'll test and validate visibility, detection, things like that, and keep operating back and forth until we move through the attack kill chain. So figuring out how you want to run these tests for your organization is really important.

Second, scope. You can't boil the ocean. If you start with a full scope on your first test ever, there's just going to be too much to focus on. You may want to start full scope just to show the extent of help you need, but it may make more sense to say: we just stood up a new web application, so let's test just that application and mock real-world threat-modeling scenarios against it. Or: we're worried about insider threat, so act like you're a real user and target our Microsoft 365. What files can you access? What messages can you see? Do we have HR's SharePoint documents locked down or not? A solid, well-scoped test is really important.

Why I say that is that in all of these tests we really want to think through everything we're actually trying to test. A lot of times on the blue team we're thinking, "I just want to make sure my detections are firing." To me it's a lot more than that; it's a full lifecycle approach. In the perfect world, you're not running a blue team exercise or a red team exercise; this is what a lot of your days probably look like, working top-down or left-to-right. Focus first on what security controls you can put in place. If you can prevent something out of the gate and never have to detect and respond to it, great. If not, how do I detect and respond to it? Maybe it's not high enough fidelity to take an immediate action, but I need to be able to detect, respond, and triage in real life. So we need logging, and once I have logging, we go through the detection engineering process: write your rules and narrow down true positives and false positives. Then, and this is probably a newer concept that more companies are starting to take under their wing: contain first. Historically we've said, "here's this alert, let me run all my pivot searches, and if I agree, let me go contain." The data nowadays shows you can't wait that long. You can tolerate some mistakes in that containment phase; then investigate, then respond. With a purple team or red team engagement it flips on its head a little. You're not starting off thinking about what you can prevent; more often than not you're going in blind, reacting to something, and that something is a detection firing and an alert being triaged. So you go through the process: this fired; did I analyze it right? Did I contain it correctly? Am I responding to it long term? And when that test is done, that's when the full hunting, analysis, and review happens: do I have the right logs for all the things I didn't see? Do I have the right preventive controls in place? We all know those change boards take forever. That's the flow we'll talk about for the rest of the talk, digging into these with real examples and hopefully a few stories.

Last thing I'll say here, and this is my own personal opinion, so mileage may vary: there are three parts to every test. Pre-engagement, mid-engagement, and post-engagement. To truly test your abilities, I think you go into pre-engagement blind. There are different use cases; if you're looking to test your visibility or your controls, maybe you go in knowing what they're trying to test and check those boxes in real time. But to make sure we can operate in that 48-minute world, we need to go in blind and unprepared, where "prepared" means biased toward knowing something is going to happen and knowing you have to act quickly. You want to test the real scenario where you don't know if something is real and you have to react. My second opinion: don't weaken security controls. I'm good with starting from assumed breach, that makes sense, but don't turn off your EDR agent just because the pentester is having a hard time. Yes, eventually you may have to. But what we've seen in tests is they'll turn off all the controls, then write the report saying "look how easily we compromised your domain controller."
Well, yeah, of course; that makes sense. But then that report goes up to the board and you potentially lose your job over it. So: go in blind, and go in real. Mid-test, respond and contain like it's real. Act like it's a real threat, take the steps necessary, figure it out. Eventually you'll find out it's a pentester. At some point they'll cry uncle and say, "hey, can you weaken those controls? Can I disable this so I can go test another aspect, test something else?" And at that point I think you should ease those controls, because if you're paying all this money and you stop them at step one or step two, you don't really get a full report.

After the fact, you should do full analysis and hunting. This is where the collaborative part really comes in: can we replay certain procedures? What does the full report look like? Timing is of the essence here, because a lot of tools have data that ages out. SIEMs definitely have retention, but when it comes to an EDR, those logs sometimes expire after seven or 14 days just because of the sheer volume. So time really matters there. And this is probably the most underrated part: you have to report. You have to show your failures, show progress, and iterate. All too often I see organizations get the report, read it, go "yep, I agree, I agree, I agree," tackle maybe one or two items, and never come back the next year to iterate.

So one of the things we did at ReliaQuest was build an app in house. We looked at a bunch of different tools for tracking this. There's the one Security Risk Advisors put out, VECTR, and we looked at a couple of others that do similar things. Ultimately we realized we had a unique case on our side: we wanted a tool to track these procedures from the offensive side but map in all of the controls and detections we have in place. What we built lets us enter these procedures, and when we find something that doesn't have a parent, say a detection that fires in the middle of the kill chain, our goal is to trace the roots back to the start of the test and chase its tail down to see how far they got. Eventually we get to a point where we can build trees and views to chart this out and get a little better. And as we log different tests, we can start mapping success and failure across our customer base, for individual customers and for ourselves: how are we getting better, and what are the most common techniques being used? This platform isn't just for pentests; it also tracks incidents, real-time data of what's actually being used and where you need to focus your efforts.

All right, so on the blue team side, I've highlighted a lot of these already, but I want to dig into each of them more deeply and give you some actionable intel. To me there are five main areas where I typically see gaps. One, you're not containing your threats. Two, you're not investigating them correctly: the alert fires, you don't triage it properly, you come to a bad judgment, a bad verdict, or you fail to look left and right to see what else happened in the attack. Three, you didn't detect it, because the logic is bad or something like that. Four, you don't have adequate logging; logs are expensive, we get that. And five, you didn't prevent it. We're going to go through each of these in a little more depth.

Who here, and I know a lot of you say you work on the blue team, believes they have the full capability as a SOC analyst to take action and contain, without having to reach out to the help desk? Is there anyone in this room who can reset a user's password themselves? How easy is it? Someone tell me about their interaction with the help desk to get a password reset. How long does it take to submit that ticket, get a response, get it acknowledged?
"They respond quick." All right. "If I'm calling, they respond. They know me and why I'm calling." Okay. Anyone have more than ten seconds? Are we talking tier one or tier three here? You've confirmed a compromised account: you see the phishing email, you see them logging into your Office 365. Okay, that makes sense. Does anyone's security team in this room also have access to run containment plays themselves? Some, okay. I think this is really important: we have to start justifying to our boards that we need this capability, that we need to act fast. That 48 minutes is why we have to operate faster, and submitting a ticket to the help desk, waiting for it to get picked up, actioned, questioned, and verified: we just can't do it.

And second, even when we do have those capabilities, a lot of the time in the containment phase we're just too late. No one is getting a phone call immediately on every alert, right? It's probably going to a dashboard. You check that dashboard every ten minutes, you see an alert pop up, and you think, "eh, it's a medium, but my lunch just came out of the microwave." So we see this taking a little too long.

So here's where we really start advising our customers, and my advice to you: these are the core capabilities your security team should have direct authority to execute. You should be able to isolate a host from your EDR; you should have that permission from your board. Disable a user: if you know that user is compromised, disable them. If you know they're compromised and you see them logging in, you should be able to terminate that session through Office 365 and yank that token. You should be able to ban a file, again through the EDR. You should be able to block a domain (a little riskier) and block an IP (a little riskier). And lastly, reset passwords. Why I put these top-down, and what I really want to stress: what is the worst-case scenario if you isolate a single machine? What's really going to happen? I'll take a mad user over a million-dollar compromise any day. And the best part is that almost all of these actions are very reversible. If you isolate a host in the EDR by mistake, put it back on the network; it's not like you have to send someone out to manually reimage the machine. If you disable a user, you can just as easily re-enable them. I think we've let this boogeyman of "oh no, I'm going to cause harm to the organization" overcome our ability to respond. Again, I would stress to our boards: we need these capabilities. The worst we'll do is disrupt someone for maybe an hour. Best case, I'm stopping our company from hitting the news.

One thing I'd throw in here, going back to the pentest world: you should run these containment plays during a test to test your reaction. We had one organization that was completely on board with us running these plays, and we just kept playing whack-a-mole. Their organization was beautifully architected; we had alerts at almost every phase of the kill chain. Every time we saw the testers log into a box, we'd isolate that host and disable that user. They'd say, "hey, we just got knocked off." We'd put them back on the network, they'd run the next phase, and we'd knock them off again. I think we made the tester pretty annoyed, but at the end of the day, that report came back showing a pretty successful test.

All right, let's say you didn't contain it because you didn't even know to contain it. The second gap is that you didn't investigate it. And this is normal. There are a lot of alerts in an organization, and sometimes that creates fatigue, burnout, being numb to the alerts. So it's not abnormal to make an incorrect judgment; it happens from time to time, which is why defense in depth is really important. Two other pieces I'd throw in: you didn't find the activity before, and you didn't find the activity after. That's really important. If you find one thing, sure, but it probably came from somewhere else, it might have the jump on you, and it might be doing something beyond what you've currently seen.

So let's go through a couple of real examples, thankfully from pentests, with some of the real responses we got from customers. In the first case, there was a request out for SharpHound, and the response from the customer was, "oh, it's GitHub; we allow our developers to use it." And I get it. I don't want to mock that person; they made a quick judgment and probably had a hundred other things on their plate. But those are real scenarios we have to watch out for. The second is very similar. The previous speaker talked about PrintNightmare; we saw a bunch of Kerberos requests for a print user. Not that the testers were using that exploit, but they were using Impacket to get a ticket they could use to pivot through the rest of the network. It was on a print server, with a print username, so it made complete sense to the analyst that the username was associated with that box. I just don't think they understood the detection, what that alert was telling them, so they didn't understand the impact. They wrote it off as legitimate activity, and it kept going.

In both cases, if they had taken a quick breather and asked, "what happened before they downloaded SharpHound, and what happened after?" they would have seen it. In the SharpHound example, the login came from a host named "kali": they're coming from a pentest box. And immediately afterward, you see tons of network traffic out to the domain controllers doing enumeration. Quick checks would have made clear this wasn't normal. The second one is very similar: they used Impacket, started pivoting, ran DCSync, pulled all the users and all the creds, and ended up exfiltrating them with Rclone out to a private server. Again, quick checks we can do.
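That "look before, look after" habit is easy to automate. Here's a rough sketch; the `search` callable stands in for whatever query interface your SIEM or EDR exposes, and the field names are placeholders, not any specific product's API.

```python
from datetime import datetime, timedelta

def pivot_window(search, alert, minutes=15):
    """Fetch events around an alert on the same host and split them into
    'before' and 'after' buckets so the analyst sees the surrounding story."""
    t = alert["timestamp"]
    events = search(host=alert["host"],
                    start=t - timedelta(minutes=minutes),
                    end=t + timedelta(minutes=minutes))
    before = [e for e in events if e["timestamp"] < t]
    after = [e for e in events if e["timestamp"] > t]
    return before, after

# Demo with canned events: in the SharpHound case, "before" would surface a
# logon from a host literally named "kali", and "after" the DC enumeration.
t0 = datetime(2024, 6, 1, 12, 0)
canned = [
    {"timestamp": t0 - timedelta(minutes=5), "host": "ws01", "msg": "logon from kali"},
    {"timestamp": t0 + timedelta(minutes=2), "host": "ws01", "msg": "ldap queries to DCs"},
]
before, after = pivot_window(lambda **kw: canned,
                             {"timestamp": t0, "host": "ws01"})
assert before[0]["msg"] == "logon from kali"
assert after[0]["msg"] == "ldap queries to DCs"
```

Even a fixed 15-minute window like this, run automatically on every alert, turns the "quick breather" into something the tooling does for you.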
My advice here: don't be afraid of AI. I really do think that if we all lean in on it, we're going to see good results. Humans make mistakes; AI makes mistakes. I'm not saying let it run completely by itself, unchecked, all the time, but use it as a fast tier-one triage. Ask it: what does this Impacket alert with these flags mean? Is this normal or not? It might be able to guide you, if nothing else, and the speed will definitely help you get where you need to be.

So let's say the alert never even fired. As a blue teamer, what are the typical reasons? The top one is a tale as old as time. Who here thinks they know exactly what their environment looks like, exactly where their endpoints are, with a complete asset inventory? If anyone says that, I would love to learn your secrets, because that's just not the case. Second: you know about the source, but, again, the ugly truth of business is that logging is an expensive game. Who here has recently renewed a SIEM contract? Anyone? Sometimes millions of dollars, especially at enterprise scale. To then say, "hey, I need to log this additional event code," is painful. You might be doubling your volume just to get telemetry on one thing, to pull web application logs or proxy logs or every single firewall log. I might get one nugget out of a million logs; how do we justify that to the business? So using these tests to triage and figure out what you actually need is important. And last: you overtuned the detection. How many times have we seen, "oh, it's this admin running this behavior; I don't want to see any activity from admins; they're authorized to do this"? Well, those accounts are exactly the accounts getting targeted. You really have to be careful with that.

It's funny, I think I heard these two exact references in the prior talk. A lot of times we may not be able to deploy a detection; it just doesn't make sense in that environment. I have the rule, but I can't create a baseline for it. If I use AnyDesk legitimately, how am I going to detect when AnyDesk is used maliciously? It becomes really difficult. In this case, our customer didn't even have these rules deployed. The second case is a good example of overtuning. How many people here look at all blocked alerts from their EDR, all activity that was blocked? I'd assume most of you, like many of our customers, say "my tool did its job." But here's where I'd raise a concern: if you're seeing Sliver, a C2 framework, on an endpoint, it didn't just magically appear. Something happened before that, and sometimes that overtuning leads us to miss it. Again, in the earlier talk about purple teaming, they mentioned that EDRs used to block PsExec, and then it was, "hey, this is just too necessary, let's turn that back off." These are the kinds of places where we've overtuned and need to find the right balance as an organization. So getting detections in, and making sure you're doing your baselining, is really important there.
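One way to baseline without writing blanket exclusions for admins or blocked events is to score rarity: count what's normal per host, then surface only what falls outside that. A toy sketch; the thresholds, fields, and learning window are illustrative, not a prescription.

```python
from collections import Counter

def build_baseline(history):
    """Count how often each (host, process) pair appeared during a
    learning window, e.g. the last 30 days of telemetry."""
    return Counter((e["host"], e["process"]) for e in history)

def worth_a_look(event, baseline, min_seen=5):
    """Surface anything seen fewer than min_seen times in the baseline,
    instead of suppressing whole accounts or tool categories outright."""
    return baseline[(event["host"], event["process"])] < min_seen

# 30 days of history where PsExec on dc01 is routine admin behavior.
history = [{"host": "dc01", "process": "PsExec.exe"}] * 20
baseline = build_baseline(history)

# Routine activity stays quiet; a never-before-seen binary gets flagged.
assert not worth_a_look({"host": "dc01", "process": "PsExec.exe"}, baseline)
assert worth_a_look({"host": "dc01", "process": "implant.exe"}, baseline)
```

The design point: the exclusion is earned from observed history per host, so when an attacker runs the "excluded" tool from a new box, it still fires.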
I think I jumped ahead a little and already touched on this one, but next: you may not have a detection deployed because you don't have the logging. As these red teams and purple teams run tests, we try to baseline that logging and make sure we have all the logs that are critical for the activity we need to see. A lot of the time it's limited; it's tough. So what we've done across our customers' environments is document all those tests and all the event codes that are high priority, and start to identify the key bottlenecks everything passes through. Nowadays, in my opinion, that's identity. Everyone has to go through a domain controller, so having every relevant domain controller log is going to help you, because attackers have to authenticate to other systems and that shows up in those logs. A quick example of what we found, for Windows logging: these are some, not all, of the key audit requirements you need to enable on your Windows boxes, specifically your domain controllers. The absolutely shocking part is that only the top three or four are enabled by default on a Windows box.
It's shocking that this default policy hasn't changed. If you haven't already, go enable this logging in your environment. It will cost some money, and some of these events are noisy, which is why they're off by default, but I guarantee every single one of them is critical to catching different techniques.

Here's a good example: shadow credentials. It's a technique that's been around for about four years now. Essentially, an account has an ACL that's a little loose; in most environments, there are cases where everyone has the ability to modify computer accounts. The attacker adds a value to the msDS-KeyCredentialLink attribute on an account; essentially, they put a certificate onto the account and can use that to authenticate instead of the password. All they have to do is edit that attribute over LDAP, and now they can authenticate as that user and pivot around as if they were them. The problem is that this shows up as event code 5136, which is extremely noisy: every object modified in directory services on the domain controller generates one of these logs. Extremely noisy, but in this case hyper-critical for detecting these types of techniques.
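Filtering that 5136 noise down is mostly attribute matching. A sketch of the idea: the events here are plain dicts using field names from the Windows 5136 event schema, and the operation-type resource string `%%14674` ("Value Added") is to the best of my knowledge how Windows encodes an attribute addition, though you should verify the exact values against your own log pipeline.

```python
TARGET_ATTRIBUTE = "msDS-KeyCredentialLink"
VALUE_ADDED = "%%14674"  # Windows resource string for "Value Added" in 5136

def shadow_credential_writes(events):
    """From Directory Service Changes events (ID 5136), keep only writes
    that add a value to msDS-KeyCredentialLink (the shadow-credentials
    primitive) and drop the thousands of routine object modifications."""
    return [
        e for e in events
        if e.get("EventID") == 5136
        and e.get("AttributeLDAPDisplayName") == TARGET_ATTRIBUTE
        and e.get("OperationType") == VALUE_ADDED
    ]

sample = [
    {"EventID": 5136, "AttributeLDAPDisplayName": "servicePrincipalName",
     "OperationType": "%%14674", "ObjectDN": "CN=WS01,CN=Computers,DC=corp"},
    {"EventID": 5136, "AttributeLDAPDisplayName": "msDS-KeyCredentialLink",
     "OperationType": "%%14674", "ObjectDN": "CN=WS01,CN=Computers,DC=corp"},
]
hits = shadow_credential_writes(sample)
assert len(hits) == 1 and hits[0]["AttributeLDAPDisplayName"] == TARGET_ATTRIBUTE
```

This is why the noisy event is worth paying for: one exact-match filter on the attribute name turns a firehose into a near-zero-false-positive detection.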
And last, I think this is probably the most controversial: you didn't prevent it. A lot of times as security folks, similar to containment, we just haven't had enough buy-in historically to turn on some of these policies. There's always the age-old "my developer needs to be able to do everything: they need local admin, they need to be able to compile code, they need to be able to do this." And I get it. We don't have jobs if our business can't operate. But we do have to be very well thought out in our approach to whether we turn these things on or
off. And second, this might be more on us in the room: we just don't have the time. I mean, to get in front of a change board and prep all that documentation and stage it and test the baseline and justify the impacts? I hate to say it, but some of us might just be like, "Tomorrow. Tomorrow I'll do it." So I really think we've got to start focusing in on this, because the more we can prevent, the less we have to detect and respond to. There are really three key ones I'd call out here. Shocking, but these are probably the three I constantly see showing up in red team engagements and getting abused.
The first is just turning on EDR enforcement. Probably one of the easiest things you can do; often just the flip of a toggle. But there are impacts, right? It's going to stop some business software. It may cause some disruptions, and it may not play nice with all the software you purchase and use in the organization. So it can be tricky, but you're paying for the tool: use its capabilities. Second, disabling LLMNR. This is a classic. Responder, a red team tool, has been around for at least a decade, maybe longer, and it just sits on the
network and basically says, "hey, give me your credentials." People log in, and the attacker can use those credentials to pivot around the network. Things like this are hyper-important because logging at that level just isn't feasible. If you're logging from a client, nothing on that client or its EDR agent is going to say, "hey, I just gave up my credentials at the network level." This protocol works at the subnet level, so it's not typically traversing east-west in an environment, and I guarantee most of you do not segment your networks and have complete subnet visibility. So unfortunately, you're completely blind to this, and they could have domain admin right out of the gate. And so
these are the things where it's definitely more difficult to go turn this off on your Windows hosts, but sometimes it's the only way to prevent this type of technique. And last, I would say this is more of a trendy one. Again, these techniques have been around for at least a few years, but abusing AD CS has become really popular in a lot of the engagements we see. A lot of times they're just abusing certificate templates with weak permissions, where basically any user can pull that certificate template, modify it, and then use it to authenticate throughout the environment. And so these things are really critical to
go chase down and get done. So, I guess, in summary: red team engagements and purple team engagements are hyper-critical. I think every single test anyone's ever done has at least a few critical findings on it. The biggest thing I would say is, one, we have to measure and improve. If you get a test done once a year, you're not doing it enough. You should operate at least quarterly with these tests, testing different parts of the network and tracking progress. If you're fixing something, have them rerun it, retest it, and revalidate what you put in. And again, a lot of times if you
need a change board for preventions or visibility or logging, and it requires another team to do it, you may not be able to get that done within the same test cycle. And lastly, if you're going the true red team approach and you're truly treating this like practice for the game, operate with speed. Don't just sit back and say, "Well, they're testing. My environment's going to be dirty this week. I don't need to do anything." No. Be an active participant. Contain, respond. Don't get cocky, because they're going to beat you, but play it like it's real. And last, you know, I'm a big proponent, and ReliaQuest is a big proponent, of this being a team sport. A
lot of the stuff I've talked about today came from a lot of talented individuals, even some in this room. Even the whole idea for that custom application to track these things: we were doing them in Excel spreadsheets prior. The whole idea came from Harley Quinn, one of our product managers: "hey, wouldn't it be cool if, or could I use this tool?" We just started throwing that idea around: does it do XYZ? If not, how can we modify it? Is it an open-source tool we could tweak? No, it was closed source. Let's just whip up a quick web app and a quick
database with it, and be able to track all this for our customers. Grayson Wagstaff, DevOps genius, genius in general. I don't think he's here, but wildly, wildly intelligent guy, super helpful with getting all this done and standing it all up. And then Joe, Ryan, Chris, Tristan, and Jaylen were early adopters of the software, adding all that data in and pulling together a lot of the trends we have throughout this presentation. It just goes to show, myself included for this presentation, that for every presentation you saw here today, there's probably an army behind that individual that made it possible. It should be no different in your
organizations. You cannot do it alone. Work with a team that's much smarter than you. So, with that said, I appreciate you all coming in. Happy to field any questions or talk about any trends we're seeing from pentests. [Applause]
Yeah.
You didn't see it. Yeah. What are you looking for in terms of collaboration? What should I be providing to you guys that will make your life easier and get you to fix that issue? Yeah. Look, I think pain and suffering is the best way to grow, unfortunately. So I'd say give us as little as possible to start, and then incrementally give us more. So: "hey, I ran this technique." Okay, as a defender, I should know what to look for in that technique. Give me an example. We dealt with this earlier this year: hey, do you see anything requesting any sort of certificate templates, things like that, in your organization?
Do you see any authentications with certificates? Are you not a certificate shop? As a blue teamer, they should know that. Now, the problem is they often don't have the logging there for it. Then it's: okay, well, do you see anything from this IP address doing things against the domain controller? They may not have that event code in particular, but they may see other activities that come along with default tool settings, things like that, that can lead them down that rabbit hole. Eventually you may say, "hey, I'm running this." And I guess, just as a personal question: are you on the blue team? Are you tracking those
event codes, or are you much more "I can do this, I just don't necessarily know the defense side of it"? Oh, no, I'm an attacker. Okay, yeah. So sometimes you may not know the event codes to look for, which can be difficult. In those cases, I hate to say it: use AI, ask a question. "Hey, I just ran this attack. Are there event logs?" Get an idea there to go turn on some switches. If you're completely in the dark and you're getting smoked on these tests, it is actually a really good resource. Even if it's not entirely accurate, it's going to put you down the right path. But I'd say ease it out. Make them earn it a little bit, because anyone who earns that success, earns that containment, it's going to stick with them a lot more than if you just hand them the advice. Appreciate the question. Right here. You mentioned the difficulties around logging for things that are very noisy but could potentially give you something very important, like if someone actually did something. Have you seen any success with ways of filtering down to only the important parts of the logs and getting the noise out, so you don't even log it and it wouldn't be a problem? Yeah, I mean, there are definitely
ways to minimize logging and search for certain things. Again, I always try to talk about default or open capabilities that everyone can implement, but there are definitely commercial options: Cribl, Edge Delta, and similar data pipeline and log collection layers. But even down at the Windows level, with Windows Event Forwarding and Windows Event Collection you can specify certain criteria within each event code, to say, "hey, I only want this event code where I see this." It's almost like pre-filtering for detection, versus shipping everything to a SIEM and handling it that way. Then I'll come back there. Is it better to use external red teams, or is it
possible for a blue team to serve that purpose and kind of test themselves, or is it best? So, I think there are benefits to both, right? With an internal blue team, you're aware of it, so for testing containment and investigation you're not going to see as much benefit, but you can do it much cheaper and still test visibility and prevention. Where I think external comes in is when I want to be blind. I don't want to know what they're going to do or how they're going to go about it. I want to get an alert and then have to work back, put together that entire attack path, and really piece together
that whole investigation. When I first started at ReliaQuest, we actually offered red team services, and what we found is customers don't want the watcher testing themselves. Especially in larger organizations, your board probably won't accept an internal red team report as a full audit; of course I'm going to say I did great. So there's a little bit of a conflict of interest there. I think there are benefits in doing both approaches. Again, choose your style, choose what you need to solve for. Use your progression as you map from prior tests, internal or external: where am I still weak? What style do I have to go for?
What part of my network do I still have to test? And keep progressing that way. [Audience question, partially inaudible:] ...alerts that
I might look to some of my team here. What are they? Yeah, living off the land, so those types of alerts are pretty common and commonly overlooked. A lot of BloodHound detections are commonly overlooked as well. Impacket detections: a lot of the ones we send to customers, they have no idea what it is, and they move on. But it can be real. [Audience question, partially inaudible, comparing a newer analyst's handling of these alerts versus one of your top analysts.] Yeah, mostly it would come down to, I guess, a history of working alerts, and whether you have the knowledge yourself and are able to perform the tests on your own. Like, if we don't know what logging something generates, we'll go back and do it in the lab, see what it generates, and get familiar with it. We'll dig through the code of the tools and understand exactly the events they generate. That's the way we ingrain it in ourselves over time. Whereas a new SOC analyst may not have that experience or knowledge, and may not know where to
look. And I would throw on top of his answer: the types of alerts are the living-off-the-land ones. It's the "of course" analysis: of course it's normal, of course it's expected, of course a Kerberos request is a person authenticating. They just don't understand the smaller details of why those ticket options are different, or something like that. And I think, yeah, labs are definitely one way, for those who have practiced. But secondarily, I would say those with business context, those who can say, "hey, this is a software company, so yes, it makes sense they're compiling code, but HR shouldn't be doing that." And so a
lot of times I almost think overly technical individuals look at just the technical aspect of it. There's an individual at our company who came in, I think, right out of school as a lawyer; he went to law school, and he's probably one of the sharpest analysts we have, just because he understands the business and he understands the impacts. He's applying the technical knowledge he's learned, and is now an expert in, with that business context. That's why I think the business knowledge part is so underrated on a security team: you have to understand the financial cost of "if I can do this or if
I can't do this," the business cost of "if I contain this, what might I disrupt, what might I not disrupt," and in an investigation, "is this normal, is this not?" That was the toughest thing for me. Like, is it really costly for this company, even though it took me a week or two to make this report? Yep. Yeah. And a lot of times I see these red team reports and they're overly ambitious: "you should know every user script running in the environment and baseline those." No. Impossible. I don't know what organizations they're pentesting for, but I guarantee you some of our customers, and I guess I can name-drop
a few from our website, but Southwest Airlines has hundreds of thousands of employees. Yeah, we do our best to help them navigate some of that, hunt, and baseline: "hey, this is normal, we can start to exclude this; this is bad, let's go respond to that." There's always going to be a gray area that shrinks and grows as you hunt and baseline for them. And I'd love to say a tool like a UBA is the solution. I just don't think it is. Statistical baselines are near impossible to create and maintain. Especially in the healthcare world, you have nurses that are on
shift, off shift, very weird hours, and it's like, yeah, of course this person's on overnight; they got called in for emergency surgery. I'm not going to go respond to that in a critical manner. So I just think some of that perfect world just doesn't exist in business:
all the API calls for credentials, the shared workstations for doctors and nurses, who's logging in, the generic accounts. I'd love to say we could live in a perfect world; we just don't. So I think we've got to ground ourselves in reality with these tests: acknowledging gaps, and countermeasuring them with defense in depth or prevention or containment. That's the only way we can be as good as we are. Question? Yeah. Do you see in the future maybe, like, isolating a...
...worst case, you knock a user offline who's not working anyway. A lot of times it's as simple as that. I'd rather beg for forgiveness than ask for permission, to some degree. And I'd say you can use AI but still put safeguards in place. I still think you could say, "hey, I want to make sure I'm blocking this domain if it shows up as a phishing domain," use the AI to make the judgment that this email meets the criteria of a phishing email with this domain, and then use straight-up automation to ask:
does this domain have an SSL certificate? Is it registered to Google? And then just say nope. Or, hey, it's registered to Google, but it's a form, like Google Forms or something like that. So I think there's absolutely a place for that. To me, the only way you will operate with that much speed is by leveraging technology. You cannot have a human do containment; they will not be faster than the adversary. Yeah, of course. I know I'm pretty much at time. Any other last questions? Yeah, I had a question. You mentioned, you know, some of the issues with
UBA.
So, I think in some cases, yes, and in other cases, no. There's UBA, and then there's RBA, risk-based alerting, where each thing adds to a score and then it creates an alert. I find it funny, and maybe I'll trigger someone, but Splunk put out an RBA one-pager on "here's how we do risk-based alerting." Their example was: this user runs a sketchy PowerShell script, then they create a local admin account, then they do this, then they do this, then they log into AWS, and then we alerted. I'm like, on each one of those I'm isolating that host. I'm not waiting until it pops in my RBA. Otherwise I'm way
behind the game. So yes, I think some things just by themselves aren't enough, but there are other things, so don't put everything into that bucket. So, nice presentation. Good to meet you. What's your name? Ray Crest. Ray. Okay. The airport. So thanks for TPA. Yeah. Okay. Awesome. Love the airport. Tampa. I know you guys got the awards, but easily the best airport I've ever flown through. So, I got a lot of information out of this to help the team. I just put something together for a cleanup, and it was, I forget, the open-source tool for certificate services. Certify? It's maybe Certify. Okay, Certify. I mean, there's a bunch
of them. Run your own BloodHound, run your own Certify; pull those results and basically say, "hey, these guys shouldn't be domain admins, they haven't existed for a long time; this service account doesn't exist anymore." I've been doing account cleanup, service accounts, all of them. Nuke all of them. Yes. That is an ongoing, painful problem. It's a battle. It is. It's a battle. Scripts are the next one we'll run. Do you see a lot of pushback from your organization, like, "hey, don't put this control in place, we've got planes that need to fly"? I would assume so, especially in that industry.