
From Soup To Nuts: Building A Detection-as-Code Pipeline - David French

BSides Dublin 2024 · 44:08 · 401 views · Published 2024-06 · Watch on YouTube ↗
About this talk
David French presents detection-as-code methodology, a systematic approach to managing detection rules through version control, automation, and CI/CD pipelines. The talk covers practical implementation across SIEMs and EDR tools, including rule versioning, testing workflows, peer review processes, and audit trails. French demonstrates how security teams can transition from manual rule management to scalable, code-driven detection engineering.
Transcript [en]

Thanks for coming, everyone. My name is David French, and I'm going to speak about detection-as-code in this session, which is a methodology that uses code and automation to manage detection content in security tools. The reason I used the phrase "from soup to nuts" (from start to finish) in the title is that I want this to serve as a lasting reference for people who need an introduction to detection-as-code and a practical example of how to build and implement it. So it's not just theory; we're going to show you how to build something here.

A little bit about me before we get started. I've been in IT and cyber security for about 18 years now. During the last eight years or so I've gone back and forth between defensive practitioner roles (SOC analyst, threat hunter, that kind of thing) and working on the vendor side: developing detection content, doing detection engineering, and building SIEM and EDR products. I currently work at Google Cloud on the adoption engineering team, where I work on Google SecOps, recently renamed from Chronicle. I've presented research at a few conferences and created a tool called Dorothy that blue teams can use to simulate attacker behavior in their Okta environment and test their monitoring and detection. And yeah, I've

been living in America for the last eight years, and a couple of Irishmen last night told me I had a funny accent, so apologies for that.

All right, so let's think about who can benefit from this presentation. I think anyone who's curious about what detection-as-code is and how to get started. Maybe you're a defensive practitioner working as a SOC analyst, or an aspiring detection engineer. Maybe you're familiar with managing detection rules in your security tools manually and you're interested in automating some of that. If you're already an expert in this you might not learn a ton, but if you've got experience with it, I'd love to chat with you later. Just out of curiosity, who here has written a detection rule in a security tool? Cool, that's a good number.

All right, so here's an overview of what I'm going to be covering. I'll start by providing a definition of what detection-as-code is and how it relates to traditional methods of managing your detection content. I'll walk through an example workflow for a security team that's developing and managing their rules as code. We'll talk about the benefits of this approach, and then we're going to design and build our pipeline. Then I'll leave you with some lessons learned from implementing this at a few companies, plus

some links to useful resources that you can use to build this with whatever tools you have, even if it's in your lab or what you've got at work today.

All right, so here's my definition of what detection-as-code is. I like to think of it as a set of principles that use code and automation to implement and manage threat detection content in your security tools. The traditional approach to managing rules is to log into your SIEM, EDR, and firewalls and manually configure those rules and signatures, right? With detection-as-code, we're leveraging DevOps-style software development practices that have been around for a while, using CI/CD tools and automation. This has been gaining popularity over the last four to five years, and if you're new to it, I've included some links here (I can share the slides later) to existing work, stuff that's inspired me as I've worked on this at a few companies.

Just to acknowledge that everyone's got varying levels of experience with detection engineering, writing code, and using the software development tools I'm going to talk about, here's some basic nomenclature, an introduction to the core technologies we're going to use

to build this pipeline. Starting on the left, we're going to need version control software. You've probably heard of Git; this is going to track changes to our code over time, allow us to make incremental changes and rollbacks, and keep that version history. Then we're going to need a software development platform. You've probably heard of GitHub or GitLab; this provides the centralized workspace for us to manage a repo that contains our code, our detection content. When an engineer wants to make a change to detection rules, they can create pull requests, stage their changes, and ask their peers to review and approve them. And then we're going to need a CI/CD tool; examples you've probably heard of are GitHub Actions or GitLab CI/CD pipelines. When the codebase for our detection rules changes, this tool recognizes that a change has happened and automates the testing and deployment of our rules to our security

tools.

So here we've got an example workflow that a security team can use to manage detection rules in their SIEM, using GitLab or GitHub as an example. I'm going to be using GitLab for this project; that's what I had access to at the time I built it. As with most things, this is a way of doing it, not the way, so you can customize it to fit your needs. Starting on the left, we've got a detection engineer who wants to make a change to detection rules; maybe they want to create a new one or update an existing one. What they'll do is create a new branch in the GitLab project and stage their changes in a pull request. So they might create a new rule and stage those changes, and after that pull request is created, we've got a set of tests that are executed by our GitLab CI/CD pipeline job. These tests check for things like rule configuration issues, problems with the syntax of the rule's logic, and whether the rule actually still matches on the intended behavior. Once those tests pass, a teammate is asked to review those proposed changes, and this is where the security team discusses the changes and provides feedback and

suggestions on the rule, that kind of thing. After the engineer's pull request is approved, they can merge their changes into the main branch of the GitLab project. Our CI/CD tool detects that a change has been made to our codebase and pushes those changes out to our SIEM's API.

All right, so that hopefully gives you a basic understanding of detection-as-code if you weren't familiar with it, and we can move on to talk about some of the benefits before we go ahead and build this thing. The first benefit of working this way, I think, is collaboration. One of the challenges of managing detection rules manually in security tools is that people can log in (I'm sure we've all seen this) and make a change to a rule without any input from anyone else on the team. The team might not have approved that change, or people make mistakes, right? We've probably all seen a rule that's been changed and introduces an explosion of false positives, or false negatives, missing attacker behavior. When you're working in this collaborative approach, in my opinion, having people with different skill sets and expertise on the team is going to lead to more effective

rules.

Continuing on the subject of collaboration, managing your content as code makes it easier to share that content with the community. I used to work at Elastic back in 2020, and I think they were one of the first big security vendors to open up all of their detection logic. We started doing detection engineering out in the open and accepting community contributions, and during the last four years you've seen all these other players follow suit. People might argue that makes your defenses weaker, but when you're talking about hundreds of detection rules, the attacker is walking across a minefield; they're not going to be able to evade every single one of them.

Next, working in this way gives you more control over changes that are made to your detection content. When you store your rules in a software development platform, you can make sure that any changes are tested, reviewed, and approved before they're pushed out. In this screenshot you can see that the engineer's proposed changes can't be merged into the main branch and pushed out to the security tool, because their tests failed in the CI pipeline. Some organizations require this level of control over their detection rules as

well as their preventive controls. We're not just writing detection rules for fun; they're security controls that we need to look after. And then the final benefit here may be obvious: since we're borrowing these DevOps-style software development practices and CI/CD tools, we can build this pipeline to automate the building, testing, and deployment of our rules. The obvious benefit is the time it saves us, and you can also have a set of tests that trigger your rules and validate that the alerts were generated, which tests your logging, monitoring, and detection pipeline as well. We'll talk more about that soon.

All right, so that's the introductory theory covered, and we can move on to designing and building this pipeline. Here's a simple design for the pipeline we're going to build for managing detection rules in our SIEM. On the left we've got our software development platform, GitLab. This project is going to store our rules, and it's where we'll be working on them as detection engineers. The rule logic lives in individual files in a rules directory, and in the other directories we've got some Python modules we're going to use to manage rules via the SIEM's API. In the middle, we're going to configure a few CI/CD pipeline

jobs in GitLab. One of them is responsible for running the tests for our rules; the other two retrieve rules from the SIEM and push rule changes out to the SIEM. Some organizations might have separate dev and production instances of their SIEM, and they might test and deploy changes to their dev instance before pushing them out to production; in this example we're just going to have a single SIEM. On the right-hand side we've got our SIEM, which has an API that lets us programmatically create, update, and verify rules using code. I'm using Google SecOps in this project; that's what I had access to at the time I built it. If you're interested, I've got a blog series and some example code: one shows you how to do it in SecOps, and one shows you how to do it using free, open-source, and community editions of software (that one works with Sumo Logic), and I can share the links to that later on.
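
As a sketch of what that rule-management client code might look like: the endpoint paths, payload fields, and bearer-token auth below are assumptions for illustration, not Google SecOps' (or any vendor's) actual API, so check your product's docs for the real shapes.

```python
"""Minimal sketch of a SIEM rule-management client (hypothetical API)."""
from urllib.parse import urljoin

class SiemRuleClient:
    def __init__(self, base_url: str, api_token: str, session=None):
        self.base_url = base_url.rstrip("/") + "/"
        self.headers = {"Authorization": f"Bearer {api_token}"}
        self.session = session  # inject e.g. a requests.Session() (or a stub in tests)

    def _url(self, path: str) -> str:
        return urljoin(self.base_url, path.lstrip("/"))

    def list_rules(self):
        resp = self.session.get(self._url("rules"), headers=self.headers)
        resp.raise_for_status()
        return resp.json()["rules"]

    def create_rule(self, rule_text: str, enabled: bool = False):
        payload = {"rule_text": rule_text, "enabled": enabled}
        resp = self.session.post(self._url("rules"), headers=self.headers, json=payload)
        resp.raise_for_status()
        return resp.json()  # the SIEM assigns the rule ID and revision timestamp
```

In the real pipeline you'd construct this with a live HTTP session and credentials stored as CI/CD variables, never hard-coded in the repo.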

So for the purposes of this project, we're going to presume a few things are true at this point. We've got a collection of rules in our SIEM already; maybe our security team configured those, or maybe the vendor put that initial collection there for us. The SIEM's got an API that lets us manage rules programmatically. Some vendors will give you example code for managing rules via their API; sometimes you have to read the docs and write that code yourself. And just a note on what you can do in the UI of a security tool versus the API: users have come to expect parity between the two, so if you work for a security vendor, people are going to be evaluating and scoring you on that.

All right, so for the next step in this project, I like to wrap those Python modules you saw on the previous slide into a simple command-line interface tool. What this does is make it super easy for me to execute those commands in my CI/CD pipeline jobs for managing my rules. I've got example commands to retrieve rules via the SIEM's API, do things like dump them to local files, update rules, and verify the syntax of rules.
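
A minimal sketch of such a CLI wrapper; the command names here (pull-latest-rules, update-remote-rules, verify-rules) are illustrative, not the speaker's actual tool:

```python
"""Sketch of a CLI that wraps the rule-management modules for use in CI jobs."""
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="detection_cli", description="Manage detection rules via the SIEM API"
    )
    sub = parser.add_subparsers(dest="command", required=True)
    # Each subcommand would dispatch to a function in the rule-management modules
    sub.add_parser("pull-latest-rules", help="Dump the SIEM's rules to local files")
    sub.add_parser("update-remote-rules", help="Push local rule changes to the SIEM")
    verify = sub.add_parser("verify-rules", help="Check rule syntax via the SIEM API")
    verify.add_argument("--rules-dir", default="rules", help="Directory of rule files")
    return parser
```

A CI job can then run something like `python detection_cli.py verify-rules` as a single pipeline step.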

So for this project I've written code to manage rules using the SIEM's REST API. As an alternative, you might see security teams use infrastructure-as-code tools that have been around for a while, like Terraform (Pulumi, I guess, is a bit of a newer one). With that approach you're still storing your code in a central repo in GitHub or GitLab; I just want to point it out as a popular method you might see, as mature security operations teams want to manage everything as code, no matter what it is, whether it's rules or documentation or whatever. I've got a link here to a blog if you're interested; that example uses Terraform to apply the changes to Sumo Logic.

So if you're considering implementing this, here are a few things to think about. If you are managing the detection rules and configuration for your security tools as code, do you want to prevent those changes from being made in the UI of the security tools? Do you want to disable that? Some teams are small enough that they can just agree that changes should go through the codebase and get pushed out to the security tools. Some people

will disable that outright. Some people will have their CI/CD pipeline apply changes to their security tools on a schedule, maybe daily, so that if any changes were made in the UI, it will just overwrite them. If you do end up disabling UI changes, think about whether you want a detection to alert on that behavior: if people shouldn't be messing around in the UI, you can alert on it and figure out why it happened.

All right, so here's the layout of our GitLab project. Starting at the top, we've got those modules for managing rules via the SIEM's API, and we've got the command-line interface you saw earlier for executing commands to manage rules in our tools. Each detection rule is stored in its own file in this rules directory. The .gitlab-ci.yml file stores the configuration for those GitLab CI/CD pipeline jobs, and the file at the bottom contains the configuration for our rules; this is where we're going to specify whether each rule should be enabled or disabled, and it also stores metadata about each rule, like unique IDs and creation timestamps.

All right, so let's take a look at the importance and benefits of defining a schema for your detection rules if you're managing them as code. A simple schema like the one you've got on the right here provides a way for you to structure and standardize your

rules. This is important because if you've got a bunch of people working on rules using different formats and schemas, you're going to find it incredibly difficult to test and manage them, especially using automation. A schema makes rules more organized and easier to read, and once you've defined it, you can have your tests detect rule issues early on and prevent rules from being pushed out to your security tools if there are problems. It also makes it easier to share detection rules with other people in the community, if you're using a similar or the same format that everyone understands. The example schema on the right-hand side here uses Pydantic, which is a Python library; in this example you've got a set of field names defined for your rule, the expected data type for each field, and whether each field is required or optional.

Here are a couple of file formats that people use for their rules: Splunk and Sigma both use YAML, and Elastic uses TOML for their open-source detection rules content. Both of these file formats provide human readability, and reading and writing them is widely supported by programming languages. TOML is nice in that it lets you break your rule up into sections, so you might have a section for the rule's metadata, one for the rule's logic, and maybe another for MITRE ATT&CK technique annotations.
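
A minimal sketch of such a Pydantic schema; the field names here are illustrative, not the speaker's actual schema, so define whatever your team standardizes on:

```python
# Sketch of a detection rule schema using Pydantic (illustrative field names)
from typing import List, Optional
from pydantic import BaseModel, ValidationError

class DetectionRule(BaseModel):
    name: str                         # required: no default value
    description: str
    severity: str
    logic: str                        # the rule's query / detection logic
    tags: List[str] = []              # optional: has a default
    references: Optional[List[str]] = None
    enabled: bool = True

# A rule that's missing a required field fails validation before it ever
# gets near a security tool:
try:
    DetectionRule(name="okta_admin_role_assigned", severity="high", logic="...")
except ValidationError as err:
    print(f"schema validation failed with {len(err.errors())} error(s)")
```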

For this project, I've decided to keep my rules in a separate directory and the rule configuration in that config file we're going to see in a second. This gives me more granular control over where my rules get deployed, for example if I'm managing rules in multiple SIEM instances because I work for a vendor or an MSSP.

All right, so one benefit of a schema is that you can validate your rules against it and catch issues early on, minimizing the risk of deploying broken changes to your security tools. In this example we're using Pydantic, and it's raising a validation error because an invalid value or type was provided for a rule field. There's some low-hanging fruit you can take care of at this stage: you can raise an error for missing values, or for rule configuration issues, like a rule that's marked as enabled and archived at the same time, that kind of thing. Once you've taken care of the basics, you can use either Pydantic or Marshmallow (another data validation library) to create your own more advanced validation use cases, like checking that a MITRE ATT&CK technique ID actually matches the technique name or the URL.
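
That kind of cross-check can be sketched as a plain function. The mapping below is a tiny hard-coded sample; in practice you'd load the full technique list from MITRE's published ATT&CK dataset:

```python
# Cross-check: does a rule's ATT&CK technique ID match the technique name it claims?
ATTACK_TECHNIQUES = {
    "T1098": "Account Manipulation",
    "T1078": "Valid Accounts",
    "T1136": "Create Account",
}

def validate_attack_annotation(technique_id: str, technique_name: str) -> list:
    """Return a list of human-readable problems (an empty list means valid)."""
    problems = []
    expected = ATTACK_TECHNIQUES.get(technique_id)
    if expected is None:
        problems.append(f"unknown ATT&CK technique ID: {technique_id}")
    elif expected.lower() != technique_name.strip().lower():
        problems.append(f"{technique_id} is '{expected}', not '{technique_name}'")
    return problems
```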

All right, we're almost done with the schema-type stuff. The next thing you're going to want to do is verify the syntax of your rules whenever changes are being worked on. Some SIEMs have an API method you can call to verify the syntax of a rule and make sure it is in fact a valid rule object. If your security tool doesn't provide that, you can create your own linter to handle rule parsing and validation; a linter is kind of like a grammar checker for your code, if you haven't come across the term before. That will obviously take more effort, because as the vendor changes their detection engine and rule language, you'll need to keep up with that.

All right, so the next logical step for this project: we've got our initial GitLab project set up, and we can pull the existing rules from our SIEM and commit them to our codebase. To help with this we're going to use that handy CLI; it pulls the latest rules from my SIEM, writes them out to local files, and also dumps the rule configuration to that rule config file we're going to see next.
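
The dump step can be sketched like this. I'm using JSON via the standard library to keep the example dependency-free; the talk's project writes YAML, which works the same way with PyYAML's safe_dump. File and field names here are illustrative:

```python
# Sketch: one file per rule, plus a single shared rule config/metadata file
import json
from pathlib import Path

def dump_rules(rules: list, rules_dir: str, config_path: str) -> None:
    """Write each rule's logic to its own file, and all config/metadata
    (IDs, enabled state, timestamps) to a single shared config file."""
    Path(rules_dir).mkdir(parents=True, exist_ok=True)
    config = {}
    for rule in rules:
        rule_file = Path(rules_dir) / f"{rule['name']}.json"
        rule_file.write_text(
            json.dumps({"name": rule["name"], "logic": rule["logic"]}, indent=2)
        )
        config[rule["name"]] = {
            "rule_id": rule.get("rule_id"),      # assigned by the SIEM
            "enabled": rule.get("enabled", False),
            "create_time": rule.get("create_time"),
        }
    Path(config_path).write_text(json.dumps(config, indent=2))
```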

All right, so here's what it looks like when the latest rules are pulled from our SIEM, validated against our rule schema, and written out to their respective files. On the left we've got our YAML rules being written to those individual files in the rules directory, and on the right you can see the configuration and metadata for each rule being written out to the rule config YAML file. In this example you've got separate schemas for the rules and for the entries in that rule config file, and this lets you catch any issues both when rules are loaded from disk and before they're written to disk. As you can see on the right-hand side, each entry in the rule config file specifies whether a rule should be enabled or disabled, and it's got the metadata for each rule, like the unique ID and creation timestamps.

So we looked at that pull-latest-rules command. What we've got here is our first GitLab CI/CD pipeline job, which automates the process of pulling those detection rules from the SIEM and committing any changes to our codebase. You

could run this job nightly if people were still updating things in your SIEM, but in this example we're saying people have to go through our codebase to update rules, so I'm just going to run it once. It pulls the latest rules from the SIEM and dumps them out to those local files, and in the output of the job on the right you can see it runs the git status command to check if there are any changes that need to be committed to GitLab. If there are changes pending, the job takes care of that and commits them to the main branch, which is what you see here.

So this is what that initial commit looks like in our GitLab project. Our CI/CD job pulled all of the rules that are currently in our SIEM; on the left you can see the configuration and metadata that was written out to that rule config file, and on the right you've got an example of a YAML rule written to its own file in the rules directory. All of the changes were committed to the GitLab project by the CI/CD job, and at the bottom you can see the Git commit message associated with those changes.
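
As a sketch, the job configuration in .gitlab-ci.yml might look something like this. The job name, the CLI command, and the GITLAB_ACCESS_TOKEN variable are illustrative assumptions, not the speaker's actual pipeline; the CI_* variables are GitLab's own predefined ones:

```yaml
# Hypothetical .gitlab-ci.yml fragment for the pull-latest-rules job
pull-latest-rules:
  image: python:3.11
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"   # e.g. run nightly, or trigger once manually
  script:
    - pip install -r requirements.txt
    - python detection_cli.py pull-latest-rules
    # Commit back to the repo only if anything actually changed
    - |
      if [ -n "$(git status --porcelain)" ]; then
        git config user.email "ci@example.com"
        git config user.name "CI pipeline"
        git add rules/ rule_config.yaml
        git commit -m "Sync latest rules from the SIEM"
        git push "https://oauth2:${GITLAB_ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:main
      fi
```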

All right, so now we've got our SIEM and GitLab project in sync with our detection content. Going back to our detection engineering workflow from earlier, this is how an engineer can create a new rule by going through this detection-as-code flow. On the left you can see a new Okta rule that detects a certain behavior in an Okta organization; the rule contains some metadata about the behavior being detected and the actual rule logic. On the right, we're creating a new entry in the rule config file, specifying that the rule should be enabled to run over our logs and that it should fire alerts when it matches on the intended behavior.

Just a side note on this rule: it detects when Okta admin privileges are assigned to a non-admin user account, in this example an account whose name doesn't start with the word "admin". So if your organization uses Okta and has a set naming convention for admin user accounts, this is a good rule for you to have in place. I used to work at an organization where we had an Okta admin who would assign admin permissions to another account, log in as that account, carry out certain actions, then log out and remove those permissions. That was an insider risk case, and it didn't end well for them.
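
As a toy re-implementation in Python, the rule's logic boils down to something like this. The real rule lives in the SIEM's own query language, and the event field names here are simplified assumptions rather than the exact Okta log schema:

```python
def matches_rule(event: dict) -> bool:
    """True when Okta admin privileges are granted to an account whose
    name doesn't follow the 'admin' naming convention."""
    return (
        event.get("eventType") == "user.account.privilege.grant"
        and not event.get("target_user", "").lower().startswith("admin")
    )
```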

But yeah, just to let you know, this is a good rule to have: it can detect an attacker who has compromised an admin account and is assigning admin permissions out to try to maintain additional methods of persistence.

All right, so the next step when you're implementing this capability: it's going to be crucial to protect the main branch of your project in whatever software development platform you're using. In this example I've configured protection rules so people can't merge any changes into the main branch unless the tests pass in the CI pipeline. We've also got a rule where I need to get peer approval from a teammate, and you can set that number of approvals to whatever you want. The bottom image here shows that rule verification succeeded for nine rules but failed for one of them; it contains an invalid field name, and the engineer is going to need to fix that so the tests pass before they can request that peer review.

All right, so some lessons learned about when a detection engineer stages their proposed changes and asks their teammates to review their code. After implementing this at a few companies, I think you have to acknowledge that detection engineering is hard; it takes a lot of time and effort. You've got to

do threat research to understand attacker techniques; maybe you'll configure your lab environment to simulate attacker behavior and generate events to analyze; then you write your detection logic and develop your tests. All of that takes a lot of time and effort, and then you stage your changes and they're on display for your peers to criticize, pick apart, and challenge. So I think as rule authors it's important to assume positive intent (we're all trying to make the best rules possible) and try to avoid getting defensive. And as a reviewer, if you think a rule sucks, don't just
say that; explain your thought process. Maybe the person who wrote the rule is an endpoint security expert and you come from a network engineering background, that kind of thing. Provide constructive feedback and try to get to reviews in a timely manner. This quote from the book on the right-hand side has stuck with me over the years when it comes to code reviews: "don't be the reason improvements wither on the vine." It means that as a reviewer you should insist on quality but not become an impossible barrier; help people get their code merged, and don't expand the scope of the change

that's being proposed, and don't let disagreements fester. The book's got some really good points, and I highly recommend it if you're new to this stuff. And then finally, develop a rule style guide so people can understand the minimum expectations for the layout and format of a rule; that will result in less conflict and fewer arguments during this stage.

All right, so testing rules. In the field of detection engineering this is a broad and deep topic, and I'm going to try to cover some of the most important parts before we move on to show how we can test a rule in our CI/CD pipeline and validate the alerts we generated. When you're building this out, don't skip this step. If you're not testing your rules on a regular basis, then you can't have confidence that your logging, detection, and alerting pipeline is working properly, and it's only a matter of time before you get blindsided by missing red team activity or attacker behavior. If you've got 10 detection rules, you can probably keep up with testing those manually once a week or once a day; when you're talking about hundreds, it's not scalable. So here are a couple of challenges, or considerations, with

regard to testing, or reasons why people tend to skip this step. I don't know if you've ever written a detection rule and then tried to write tests for it, but often writing the tests takes longer than writing the detection rule itself, so people tend to skip that. You might not have the expertise on your team to develop and automate tests, so do you look to hire someone with that expertise, or do you look at an off-the-shelf tool that performs testing and validation for your detections on a regular basis? And a final consideration: your team might start building out a library of detections without developing tests, thinking "I'll get to it one day," but when that collection grows to hundreds of rules, it takes a lot of time to go back and write those tests. I don't know if anyone would get excited about that; I certainly wouldn't.

So here are some reasons why I think people need to care about this stage. Our environments tend to drift over time: technologies and entire networks come and go, and new preventive security controls get rolled out. Are you trying to detect a threat to a technology that doesn't even exist in the organization anymore? Logging interruptions occur, and if a rule that's running is never fed any

logs, it's never going to fire. Attack techniques lose their relevancy as systems are patched; every detection rule has a shelf life. And pesky vendors change their logging schemas: if you've got a rule relying on a specific field name and value, and the vendor rolls out an update that changes the field name, your rule is never going to fire. All of these things, or most of them, can result in false negatives, missed opportunities to detect threats early on before they become an incident.

All right, so here are a couple of options for testing your rules at this stage. One option is to ship test or synthetic events to your SIEM for ingestion and then validate that your rules generate alerts. This is better than having no tests at all, but it's not comprehensive validation that your logging, detection, and alerting pipeline is working end to end. In this case, what I've seen people do is ship events that they collected at an earlier time, maybe when they were developing the rule, so that won't validate that your rule still fires after a vendor updates its schema, just as an example. The more comprehensive approach is to have a set of tests that trigger your rules on a

regular basis, validate that all components of your monitoring and detection are working, and give yourself the opportunity to jump in and fix issues early on before you miss behavior. If you're new to this, a couple of projects to check out are Atomic Red Team by Red Canary and Red Team Automation by Elastic. I think you'll kill yourself trying to aim for 100% coverage in testing your rules: if you've got rules that use machine learning techniques to detect anomalies, it's going to be hard to write tests for those, because they rely on a baseline of normal behavior. So you can't test everything, and that's okay.

Moving on to a practical example of how to test the rule we created earlier. In this example I've written code that uses Okta's API to carry out certain actions that trigger that rule. First it creates a new Okta user account whose name doesn't start with "admin", then it assigns the administrator role to the new account, and then it deactivates and deletes the user account. We can configure our CI/CD pipeline to run tests like this on a schedule, or when changes are being proposed to our rules.
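
A sketch of that test harness is below. The endpoint paths follow the shape of Okta's public API, but treat the details as illustrative, and run anything like this against a dev Okta org only, never production. The HTTP session is injected so the logic can be exercised without a live org:

```python
def run_detection_test(session, base_url: str, headers: dict) -> str:
    """Exercise the behavior the Okta rule detects, then clean up.
    `session` is e.g. a requests.Session(); injected so tests can stub it."""
    # 1. Create (and activate) a user whose login does NOT start with "admin"
    user = session.post(
        f"{base_url}/api/v1/users?activate=true", headers=headers,
        json={"profile": {"firstName": "Test", "lastName": "User",
                          "email": "jsmith@example.com", "login": "jsmith@example.com"}},
    ).json()
    user_id = user["id"]
    try:
        # 2. Assign an administrator role: the behavior the rule should alert on
        session.post(f"{base_url}/api/v1/users/{user_id}/roles",
                     headers=headers, json={"type": "ORG_ADMIN"})
    finally:
        # 3. Always clean up, even if a step failed: deactivate, then delete,
        #    so no test permissions are left dangling
        session.post(f"{base_url}/api/v1/users/{user_id}/lifecycle/deactivate",
                     headers=headers)
        session.delete(f"{base_url}/api/v1/users/{user_id}", headers=headers)
    return user_id
```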

In the real world, you're not going to be able to run this test against your production instance of Okta; you might have a dev instance that's shipping logs to your SIEM, where you can create a user account, assign permissions, and then throw it all away. There are probably approval processes in your production environment, you can't just assign permissions out, and if the test fails you don't want to leave those permissions dangling. All right, so we've executed our tests, and now we need to validate the alerts generated by our detection rules. Here's an example where I'm validating that an alert was generated by that Okta rule: you can check that the alert contains specific indicators from your tests.
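
The validation step can be sketched like this; the alert shape and field names are hypothetical, since every SIEM returns alerts differently:

```python
def validate_alert(alerts: list, rule_name: str, indicator: str) -> bool:
    """`alerts` is a list of alert dicts fetched from the SIEM's API.
    Returns True if the expected rule fired on our test indicator."""
    for alert in alerts:
        if (alert.get("rule_name") == rule_name
                and indicator in alert.get("summary", "")):
            return True
    return False

# In a CI job you'd fail loudly instead of moving on, e.g.:
# if not validate_alert(alerts, "okta_admin_role_assigned", "jsmith@example.com"):
#     raise RuntimeError("expected alert was not generated; investigate before merging")
```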

For example, the username we used to create and then delete the test user. And if alerts are being opened in your ticketing queue or whatever system you use, you can close those out and label them as tests. In this example the tests passed, but if they had failed, you can raise an error for the team to investigate and not let the change move on to the next stage of being pushed out. All right, so now our tests have passed and our teammates have approved our pull request; it's time to merge those changes into the protected main branch of the GitLab project.

so when those changes are merged into main, our third CI/CD job is kicked off, and it updates the rules in our SIEM based on the code that's in our GitLab project, right? So this job is going to compare the rules in GitLab to the ones in our SIEM. It checks if any updates are required: creating new rules, creating a new version of a rule if the logic has changed, and then enabling or disabling rules. So in this screenshot you see that a new rule was created in the SIEM, it was enabled to run against our logs, and it's enabled for alerting as well, and then we get this summary that's printed out at the end of

the job's run, and that's preserved. Yeah, so as I mentioned earlier, many teams will kind of configure a job like this to run on a schedule, maybe once a day, to push out changes to their security tools, if they want to overwrite anything that was done in the UI. Just to mention that as an option, right, if you're designing your own pipeline. All right, and then after that job completes and has pushed changes out to the SIEM, our pull-latest-rules job runs again, and this pulls a fresh copy of all the rules from our SIEM and commits them back to the GitLab project. It

might seem weird at first, but the reason this is done is that it ensures our codebase is kept up to date with what's in our SIEM. So for instance, when we created that new rule, we didn't know the unique ID or revision timestamps that the SIEM assigns when a new rule object is created, so we're syncing that back. And then, yeah, the process for kind of modifying a rule is not all that different from creating a new rule: your detection engineer stages their changes in a pull request on a new branch, they wait for those tests to pass, they get that peer review and approval, and then the changes are merged

into the main branch and pushed out to the SIEM, right? So it's kind of rinse and repeat after that. And then, yeah, a couple more benefits, really. As we're working on our detection rules, kind of iteratively changing them, we're left with this audit trail, right? Our version control system provides that commit history for each rule, and the context for each change is preserved in those commit messages and the pull requests we've been working on, so people can easily kind of go back to review the previous state of a rule, or figure out the context and the reasons behind changes. And then, yeah, a few other benefits of

kind of storing them in this centralized repo. You might have auditors that come in and ask for proof that you have a particular detection in place. So if you work in the financial services industry, an auditor might ask if you've got detections in place around things like SWIFT or data loss prevention, and you can easily just show them those rules, that you've got them in place and deployed, in, say, GitHub in this example, without giving them access to your security tools, right, and potentially just leaving those permissions there. If you've got a purple teaming capability, you can give the offensive team access to review your rules, depending on, you know, the relationship between the red and blue

team, and they can look for ways to kind of evade your rules in preparation for the next purple team exercise. And then finally, you've got this searchable codebase, right? So all your rules, imagine, from your SIEM, EDR, you know, next-gen firewall, all that kind of thing: you can quickly do a search if a new technique comes out and you're wondering what kind of coverage you have today with your rules. Yeah, a couple of key takeaways after implementing this at a couple of companies. We've got a few minutes left here. This is obviously subjective, but my opinion on what types of organizations can benefit from doing this: if you're a large

organization with a complex, dynamic environment and lots of security data available, I think this kind of detection-as-code approach can help the team keep their coverage up to date and stay one step ahead of attackers. If you need that additional auditing and change management, it could be a good fit there. You know, you might have a budget for dedicated detection engineers, where they kind of assess what coverage they have with the rules that vendors provide, or what the team develops today, and then use threat intel to kind of improve their detection coverage against threats over time. And then on the right, I don't need to read all these off because it's

kind of the opposite of the left, but you know, if you're a small company, maybe with no dedicated security team, it's probably not a good fit. You know, you're probably going to be partnering with a managed security service provider, who's doing detection as code anyway, right, because they're probably managing rules across multiple customers and environments. And yeah, some advantages we spoke about earlier, right, and my favorite reason for doing this: that collaboration results in better rules for your organization. We've seen vendors sharing more detection logic over the last few years, and I think this has helped organizations build a better defense against attacks. You've got more control over changing

your detections, we spoke about that in detail, and then, yeah, with this automated testing, you know, you can reduce the risk of deploying changes that break your detections. You could be alerted to issues with your monitoring and detection capabilities before anything goes undetected. And yeah, I can share these slides after. Here are some links to some useful resources if you're interested in learning more about this, or detection engineering in general. If you're looking for some tutorials on how to kind of build this in a lab or at work based on what you have: you've got one that works with Google SecOps, one that works with Sumo Logic. And again, yeah, at the bottom, if

you're not familiar with these projects, you've got, yeah, projects on GitHub with free detection rules to experiment with, or see if you can use them to increase your coverage. So that's it, yeah, thanks for coming. I think we've got a few minutes for

questions so every new process is basically number or what's the like if you were toop this in your organization something you obviously the the rules

Yeah, I think monitoring what people are doing in the UI, right... perhaps, yeah, exactly, yeah, monitor for that. I think obviously when you're using DevSecOps and CI/CD tools and all that, secrets management is going to be another big one. It might be obvious today, but you might have a security team who don't really come from a software engineering background, so maybe they could partner with someone internally. Yeah, just trying to think what else...
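As a tiny illustration of the secrets-management point: pipeline code can pull tokens from CI-injected environment variables (for example, masked GitLab CI/CD variables) instead of committing them to the rules repo. The variable name below is made up:

```python
import os

def get_secret(name):
    """Read a secret injected by the CI system; fail fast if it's absent,
    so the job errors out instead of calling an API with an empty token."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# e.g. token = get_secret("SIEM_API_TOKEN")  # hypothetical variable name
```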

Yeah, I think maybe controlling who has access to open pull requests, and then, I don't know, can someone inject something into your GitHub Actions workflows? Yeah, I think nothing else comes to mind at the

moment. Yeah, you could have CODEOWNERS as well, right, and make sure that, you know, you might have certain parts of the security team that are fine to review rule changes and approve and push those, yeah, and then just have certain people on the team who can kind of review and approve code changes, right, for messing around with the API and running tests and that kind of thing. Yep.
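A CODEOWNERS file is one way to wire that in: Git hosts like GitHub and GitLab will then require approval from the listed owners before changes under those paths can merge. The paths and team names below are hypothetical:

```
# Require detection-engineering review for rule changes,
# and platform-owner review for the pipeline itself
/rules/          @example-org/detection-engineering
/.gitlab-ci.yml  @example-org/security-platform
```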

[Audience question, partially inaudible: are you testing that the rules match on a log file, or testing that the functionality matches what you actually intended, as part of the process?]

Yeah, Elastic do that in their detection-rules repo. They've got... so if you're an engineer staging a rule, you provide that sample test data, maybe you've collected some events from your lab, and it will load up, I think, like a pseudo detection engine, make sure the rule compiles and can run over that log data. So yeah, definitely possible. I haven't read that code myself, but I've seen it done, yeah.
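A toy version of that replay idea, with the rule reduced to a plain Python predicate over event dicts. A real implementation would compile the actual query language; the event field names here are illustrative (`user.account.privilege.grant` is an Okta event type, the rest is made up):

```python
def admin_role_grant_rule(event):
    """Toy stand-in for the Okta rule from earlier: an administrator role
    is granted to an account whose name doesn't start with 'admin'."""
    return (event.get("eventType") == "user.account.privilege.grant"
            and not event.get("target_user", "").startswith("admin"))

def replay(rule, events):
    """Run a rule over captured sample events; return the events it matches."""
    return [e for e in events if rule(e)]
```

A unit test would then feed `replay` the sample events collected from the lab and assert the rule matches exactly the expected ones.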

Yeah, you could, yeah. Best practice, or stuff that I've seen at least: in that rules directory, you can break your rules out by platform, and you could have CI/CD jobs that detect changes in those different directories and then pick up the rules from there and deploy them out to your respective tools, right? And I like to keep the rule config kind of separate, because especially if you're a vendor and you're managing it in multiple environments, it gives you that granular control of, like, what gets pushed where and what gets enabled versus disabled. Yeah, you can put, like, the customer ID or the instance ID for all of those, yeah.
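That per-directory layout pairs naturally with GitLab's `rules: changes:` job triggers, for example something like this (the job names and the deploy script are hypothetical):

```yaml
# Each platform gets its own deploy job, run only when its rules change
deploy-siem-rules:
  script: python deploy.py --platform siem   # hypothetical deploy script
  rules:
    - changes:
        - rules/siem/**/*

deploy-edr-rules:
  script: python deploy.py --platform edr
  rules:
    - changes:
        - rules/edr/**/*
```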

Is that it? Last one, okay.

Oh, the requirements of a detection, yeah. So in the pull request template, you would use something like Palantir's Alerting and Detection Strategy (ADS), which is a good one, where as the rule author you specify, like, the name of the rule, a description of what it should detect, known limitations, false positives, that kind of thing. You fill that template out, and then when you request a review, you know, the expectation is that the reviewer can read that and understand the rule's objectives, right? So you would do that.
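A pull request template along those lines, borrowing section headings from Palantir's ADS framework (which sections you keep is up to you):

```markdown
## Goal
What attacker behavior should this rule detect?

## Categorization
Mapped MITRE ATT&CK technique(s).

## Strategy Abstract
Log sources used and how the detection logic works.

## Blind Spots and Assumptions
Known limitations of the rule.

## False Positives
Benign activity that may trigger the alert, and how to triage it.
```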

Yeah, I've got to get off now, right, yeah, let's talk on the side. That's right, sorry, I don't want to run into the next session. Thank you.