
Can someone shut that door for me, please? It's a bit noisy. Thanks. All right, I think we can get started. Hi everyone, my name is David French, and today I'm going to be talking about detection as code, which is a methodology that uses code and automation to manage detection rules in security tools. The reason I used "from soup to nuts" in the title, meaning from start to finish, is that I want this talk to serve as a reference for people who want a practical example of how to implement and build something like this. There's a bit of theory and an introduction to what detection as code is as a methodology, but you're going to walk away with example code and a practical example you can build yourself if you want to.

It's good to be back in San Antonio. I lived here for several years, met my wife here over 10 years ago, and it's the first place I worked and lived in the US, so I love this city. A bit about me before we get started: I've been working in IT and cybersecurity for over 18 years. During the last eight years or so I've gone back and forth between the practitioner side of security, defending an organization from attack, and working on the vendor side. I got my start as a SOC analyst and then moved on to building out detection engineering and threat hunting programs. On the vendor side I've mainly been doing threat research and detection engineering, helping build SIEM and EDR products. I currently work at Google Cloud, where I work on Google SecOps. I've spoken at a few conferences; I like to do stuff like this and share knowledge, and I created a tool called Dorothy, which simulates attacker behavior in your Okta environment to test your monitoring and detection. I used to live in San Antonio and I'm now in Colorado,
so you'll find me outside when I'm not working. Let's talk about who I think can benefit from this presentation: anyone who's curious about what detection as code is and how to get started. Maybe you're a defensive security practitioner working as a SOC analyst or a detection engineer; maybe you've written some detection rules manually in your security tools and you're interested in automating the management of those. Fair warning: if you're already an expert in this, you might not learn a ton here, but I'd love to chat with you later. Just out of curiosity, show of hands: who's written a detection rule before? OK, that's great; nice, it's a good crowd.

So here's what we'll be covering. I'll start by explaining what detection as code is and how it relates to traditional methods of managing rules in your security tools. We'll walk through an example workflow for a security team that's developing and managing their rules as code. Then we'll talk about some of the benefits: why security teams work this way and what the motivation is for doing it. Then we'll walk through a process for designing and building a CI/CD pipeline to manage detection rules, and finally wrap up and share key
takeaways and lessons learned from doing this at a few companies, plus links to useful resources and example code that I've written so you can get started building your own pipeline, whether that's at work or in your lab.

So let's get started with a definition of what detection as code is. I like to think of it as a set of principles that uses code and automation to implement and manage your detection content in your security tools. Compare that to a traditional approach, which a lot of us are familiar with: you're logging into your security tools manually, whether that's a SIEM, an EDR, or a firewall, and you're writing and managing your rules in there by hand. This method still works for a lot of organizations; it's by no means obsolete, and I'll talk later about what types of organizations I think can benefit from detection as code. With the as-code approach, we're really borrowing DevOps-style software development practices that have been around for a while: you're using tools to test and deploy things with automation. This aligns with mature security operations teams wanting to manage everything as code, whether that's the configuration of their tools, documentation, or response playbooks; you name it. So it's going with that trend.
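To make "detection content as code" concrete, here is a hedged sketch of what a rule stored in a repository might look like. The file layout and field names are illustrative assumptions, not any specific vendor's format:

```yaml
# rules/okta_admin_role_assigned.yaml -- illustrative layout, not a vendor schema
name: okta_admin_role_assigned
description: Detects when an Okta admin role is assigned to a user account
severity: High
enabled: true
logic: |
  # the rule query itself goes here, in your SIEM's query language
tags:
  - attack.persistence
```

Because the rule is just a file, every change to it can be tracked, reviewed, and tested like any other code change.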
I've included some links here to existing work that has influenced and inspired me as I've worked on this at the last few companies. I'll share these slides, and I recommend you explore these links if you're interested in this kind of thing.

Just acknowledging that everyone has varying levels of experience with detection engineering, writing code, and some of the software development tools I'm going to be talking about, let's go through some core terminology before we walk through that example detection engineering workflow. First, we're going to need a version control system; you've probably heard of Git. This lets us track changes to our codebase over time, do things like rollbacks, and iterate on things to improve them. In the middle here, we're going to need a software development platform, like GitHub or GitLab, which I'm sure a lot of people have heard of. This provides a centralized workspace for us to manage our code, stage changes, ask our peers to review them, and all that kind of thing. And then we're also going to need a CI/CD tool: when the codebase for our detection content changes, our CI/CD tool
recognizes that and kicks off our automated jobs to test and deploy our rules.

Here we've got an example workflow that a security team can use to manage detection rules in a SIEM using a software development platform like GitLab or GitHub. I'm going to be using GitLab for this project because that's what I had access to at the time I built this. As with most things, there are many different ways of doing this; this is a way, not the way, so you can take any parts of this and customize it to fit your needs. Starting on the left, we've got a detection engineer who wants to make a change to the detection rules in the SIEM; maybe they want to create a new rule or modify an existing one. The engineer creates a branch and a pull request in GitLab that contains their proposed changes, and at this point no actual changes are pushed out to the SIEM. After that pull request is created, a set of tests is executed by our GitLab CI/CD pipeline job. These tests check for things like rule configuration issues or problems with the syntax of the rule's logic, and that the rules still match on the intended behaviors and generate alerts.
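As a sketch of the kind of configuration check such a pipeline test might run (plain Python standing in for the schema validation described later in the talk; the field names and severity values are illustrative assumptions):

```python
# Minimal, dependency-free sketch of checks a CI test job might run against
# each rule definition before changes can be merged. Fields are illustrative.
REQUIRED_FIELDS = {"name", "description", "severity", "logic"}
ALLOWED_SEVERITIES = {"Low", "Medium", "High", "Critical"}

def validate_rule(rule: dict) -> list[str]:
    """Return a list of human-readable problems found in a rule definition."""
    problems = []
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        problems.append(f"missing required fields: {sorted(missing)}")
    if rule.get("severity") not in ALLOWED_SEVERITIES:
        problems.append(f"invalid severity: {rule.get('severity')!r}")
    if rule.get("enabled") and rule.get("archived"):
        problems.append("rule cannot be enabled and archived at the same time")
    return problems
```

A CI job would run a check like this over every file in the rules directory and fail the pipeline if any problems come back.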
The goal here is to catch issues early on and minimize the risk of any bad changes being pushed out to our security tools; we don't want to introduce breaking changes and end up with a rule that misses attacker behavior. The third step: once those tests pass, a peer review of the proposed changes happens. This is where the security team discusses the changes and provides feedback and suggestions on the rule. Finally, on the right, after the pull request is approved, the engineer gets to merge their changes into the main branch of the GitLab project, and again our CI/CD tool detects that changes have been made to the main branch and pushes those rule changes out to the SIEM's API, which we'll see soon.

So that's a basic understanding of what detection as code is; let's talk about some of the benefits, and why security teams do this, before we move on to building this thing. The first benefit I want to talk about is collaboration. I'm sure a lot of us have experienced this: one of the challenges with managing rules manually in your security tools is that an individual on the team can log in and make a change without any
input or review from anyone else on the team. People are nodding their heads; we know what happens next. People make mistakes: someone might make a change to a rule that causes false positives or, worse, misses attacker behavior or red team activity, and the team feels bad. Working on rules in a software development platform like this makes it easy for the security team to collaborate on changes, which in my opinion leads to more effective rules and reduces the risk of pushing out broken changes.

Continuing on that same subject of collaboration: managing your content as code makes it easier to share rules with the security community. That could be you sharing with your peers in the industry, or what we've seen more of in the last four years, which is great: vendors opening up their detection logic and sharing it with people. So even if you use, say, Splunk, you can dive into one of these other repos and make use of their rules; you probably just need to translate the syntax for your tool. I used to work at Elastic, and I was one of the people who helped open up their detection rules repo about four years ago. In my opinion, this is helping people build
stronger defenses against attacks. If you're not familiar with these projects, I encourage you to check them out and see if you can use these rules to improve your organization's detection coverage.

The next benefit: this gives us more control over changes that are made to our detection content. When our rules are stored in that software development platform, we can make sure that changes are tested, reviewed, and approved before they're pushed out. In this screenshot, as an example, the engineer's changes can't be merged because the tests in their pipeline have failed; you prevent them from even moving on to the next stage until they fix that. Some organizations, if you're in the financial services sector or the medical field, a highly regulated field, are going to require this level of change control for detection rules as well as preventive controls; we're not just writing rules for fun.

The final benefit before we move on to building this thing: by borrowing these DevOps-style software development practices and using these CI/CD tools, we can take advantage of automation for the building, testing, and deployment of our content. The obvious benefit here is the time it saves us when testing; we'll talk more about testing later.
Having a set of tests that trigger your rules and validate that alerts were generated gives you confidence that things are working and that you're detecting the intended behaviors.

All right, that's the introductory theory covered; we can move on to designing and building our detection-as-code pipeline. Here's a simple design for the pipeline we're going to build for managing detection rules in a SIEM. On the left we've got our software development platform, GitLab, and this project is going to be used to store rules; it's where we'll work on our rules as detection engineers. Our rule logic is stored in files in a rules directory, and in the other directories we've got some Python modules we're going to use to manage rules via our SIEM's API. We're going to configure a few CI/CD pipeline jobs in GitLab, in the middle here: one runs a set of tests for our rules, and the other two are responsible for retrieving rules from the SIEM and pushing updates to the SIEM. Some organizations will have separate development and production instances of their SIEM so they can test and deploy changes in their dev instance before deploying them to prod; a lot of teams don't have that luxury. I'll show you how we can take care of some testing later, but we're just going to be managing rules in a single SIEM instance for this project. Then on the right we've got our SIEM, which has an API that lets us read, create, update, and verify rules using code. I'm using Google SecOps, formerly known as Chronicle, in this project because that's what I had access to at the time, but the point here is that this is a methodology for you to take away and customize based on what tools you have available. I've got another example that uses free, open source, and community editions of software, so if that's something you're interested in,
you can build this without spending any money.

For the purposes of this project, at this point we're assuming a few things are true. We've got a collection of rules in our SIEM already; maybe our security team configured those, or maybe the vendor supplied them for us. The SIEM has an API that lets us manage rules programmatically. Some vendors might provide example code for you to manage rules via their API; in other cases you have to read the docs and write that code yourself. On the right-hand side here, you can see a set of Python modules that we're going to use to manage rules via that SIEM's API. Typically, with a modern security tool, you're going to be able to carry out most if not all actions via the API that you can in the UI, and users have come to expect that parity; when they're evaluating security tools, they're going to be scoring vendors on that.

For this project, I wrapped the Python modules you saw on the previous slide in a simple command line interface tool, and this makes it easy for me to execute certain commands in my CI/CD pipeline jobs without writing more custom code. You can see here we've got a command to retrieve the latest
rules via the SIEM's API and write them to local files, and we've got commands to update rules and verify the syntax of rules as well.

For this project I've written code to manage rules via the SIEM's REST API. As an alternative, and something you might see, some security teams use infrastructure-as-code tools, Terraform is a popular one, or Pulumi, to manage the configuration and rules for their security tools. With this approach, the configuration and detection content is still stored as code, in GitHub or GitLab, and then those tools apply the changes to the infrastructure, which in this case is your security tools. I just wanted to point this out as something you'll probably see; if you're interested in implementing this, your company might already have these tools, and you can make use of those.
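A sketch of what the command line wrapper described a moment ago might look like; the command names mirror the ones on the slide, but the underlying client module is a hypothetical stand-in:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI skeleton mirroring the commands described in the talk; the real tool
    would call into the Python modules wrapping the SIEM's REST API."""
    parser = argparse.ArgumentParser(prog="detections")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("get-latest-rules",
                   help="pull the latest rules from the SIEM API into local files")
    update = sub.add_parser("update-rules", help="push rule changes to the SIEM")
    update.add_argument("--rules-dir", default="rules")
    verify = sub.add_parser("verify-rules", help="ask the SIEM to verify rule syntax")
    verify.add_argument("--rules-dir", default="rules")
    return parser

def main(argv=None) -> str:
    # Each command would dispatch to a function such as rules_client.pull(),
    # rules_client.update(), or rules_client.verify() (hypothetical names).
    args = build_parser().parse_args(argv)
    return args.command
```

With a wrapper like this in place, the CI/CD jobs can stay simple one-liners such as `python detections.py get-latest-rules`.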
All right, a few things for you to consider if you're thinking about implementing this. I spoke about this a bit earlier: if you're managing the rules and configuration for your tools as code, do you want to prevent changes from being made in the UI? Because if people can go into the UI, make changes, and not go through your change control process, it kind of defeats the purpose of what we're trying to accomplish here. Some small teams can just agree on that; some teams will disable access to modify rules in the SIEM's UI so people are forced to go through the process. And if that behavior does happen, someone going into the UI and making changes to your rules, do you want an alert to tell you about it? You might consider that suspicious at that point.

So here's the layout of our GitLab project. Starting at the top, we've got that collection of Python modules for managing rules via the SIEM's API. We've got our command line interface tool for executing commands to manage rules. Each one of our rules is stored in the rules directory in its own file. The .gitlab-ci.yml file stores the configuration for our GitLab CI/CD pipeline jobs. And then the file at the bottom contains the configuration for our rules: this is where we're going
to specify whether a rule should be enabled, disabled, or archived, and it stores metadata about each rule, like its unique ID; we'll see this in a second.

Let's look at the importance and benefits of defining a schema for our rules if we're managing them as code. A schema, like the simple one you see on the right here, provides a way for us to structure and standardize our rules. This is important: if you've got a team of people working on your rules, they've all got their own style and layout, and you'll find it difficult or impossible to automate any management of them without a standard. Once you've defined a schema, you can have a test detect rule issues early on and prevent them from being deployed. It also makes it easier to share rules with people in the community, if you're into that. The example schema on this slide uses the Pydantic Python library: in this example we've got a set of field names, the expected data type for each field, and whether each field is required or optional. A couple of popular file formats people use for their rules, if you're not familiar: Splunk and Sigma both use YAML; Elastic uses
TOML. Both of these formats provide human readability; they're easy to work with, and reading and writing them is widely supported by programming languages. An example rule in TOML is on the right here, with the various field names and values. TOML is kind of cool because you can split your rule up into sections, so you could have a section for the rule's metadata, one for the logic itself, and one for MITRE ATT&CK technique mappings. For this project, I decided to store my rule logic in individual files and manage the configuration in a separate rule config file, and this gives me more granular control over where my rules are deployed and how they're configured.

One of the benefits of having that schema is that we can validate our rules against it and catch issues early on, minimizing the risk of pushing out broken changes. In this example, we've got Pydantic raising a validation error because an invalid value was provided for a field name in the rule. You can start to take care of low-hanging fruit when you develop this validation logic: you can raise an error for missing values or configuration issues, like a rule that's
enabled and archived at the same time. Once you've taken care of the basics, you can develop more custom validators, like validating MITRE ATT&CK technique mappings: asserting that the technique ID matches the technique name and URL, that kind of thing; stuff that's easy to miss as humans.

One last slide on syntax, really about verifying rules during our testing phase. You're also going to need a test that verifies the syntax of your rules whenever changes are being worked on. Some SIEMs have an API method where you can push the content of your rule and the tool replies to tell you whether it's a valid rule or not. If your security tool doesn't have that, it's going to be more work to develop your own linter and validator; a linter is kind of like a grammar checker for your code, if you're not familiar with that term. That takes more effort, and as the vendor updates their detection engine and rule logic, you'll need to play cat and mouse, updating your code so the linter still works, so there's a bit more effort there.

The next logical step here is to build something that keeps the detection rules in our GitLab
project in sync with what's in our SIEM. To help with this, we're going to use that command line tool I showed you earlier: it pulls the latest rules from the SIEM and writes them out to local rule files, and the rule configuration and metadata is written to a rule config YAML file, which we'll look at next. Here's what it looks like when we pull the latest version of all rules from the SIEM, parse them into our rule schema, and write them out to their respective files. The logic for the YARA-L rules is written to individual files in the rules directory, and on the right, the configuration and metadata for each rule is written to the rule config YAML file. We've got separate schemas for the rules and for the entries in the rule config file, and this lets us catch issues when we load these files from disk and before we write them to disk as well. On the right you can see we've got an entry for each rule in this config file: it specifies whether a rule should be enabled or not, and it contains metadata for the rule, like a unique ID and creation time.

I've created a GitLab CI/CD pipeline job here to automate the process of pulling detection rules from my SIEM and committing any changes back
to that GitLab project. In this instance, I'm just running the job once manually to pull the rules that are in my SIEM and commit them back to GitLab. You could run this job on a schedule if people are still making changes in the UI of your SIEM, but for this project we're making changes to our rules in our codebase and pushing those out. Here's the output from this job on the right; the projector is not the best, so I'll explain what's going on. The job pulls the latest rules from the SIEM and runs the git status command to check if there are any changes that need to be committed back to GitLab. If there are, the job takes care of that for us and commits them to the main branch of our GitLab project, which is a protected branch. Here's an example of what that initial commit looks like in our GitLab project: the CI/CD job pulled all of the rules that are currently in our SIEM; on the left you can see the configuration and metadata written out to the rule config file, and on the right you can see an example YARA-L rule written to its own file in the rules directory. All of those changes were committed to our GitLab project by
that CI/CD job, and at the bottom you can see the commit message associated with those changes. This is associated with my username, but you could have a bot do this if you're running it on a schedule.

Now that we've taken care of syncing the rules between our SIEM and GitLab, let's look at how a detection engineer can create a new rule. In this example, an engineer creates a new branch and pull request in our GitLab project, because remember, the main branch is protected, and they stage their proposed changes. On the left you see a new YARA-L rule that detects a certain behavior in an Okta organization: the rule contains metadata about what behavior is being detected, and then the actual rule logic. On the right you can see the engineer has created a new entry in the rule config file: they've specified that the rule should be enabled to run over their logs and that it should generate alerts when it matches on the intended behavior. Just a note on this rule: it detects when Okta admin privileges are assigned to a non-admin user account, that is, a user that doesn't start with "admin". So if your organization uses Okta and you have a set naming convention for admin user accounts, this is a great rule to
have, or even if you don't use Okta but you've got that naming convention. I used to work at an organization where we had an admin who would assign admin permissions to another account, log in as that account, carry out certain actions, and then remove the permissions. That was an insider risk case; it didn't end well for them. But this rule can also detect an attacker who has compromised an admin account and is assigning persistent access permissions to other accounts.

We spoke about protecting the main branch in our software development platform. In this example, we've configured protection rules in our GitLab project so people can't merge changes into the main branch unless the tests pass in the CI/CD pipeline. We've also got a rule requiring peer approval from a teammate. This overcomes the problem of people just logging into tools and making changes on a whim. The bottom image here shows rule verification succeeded for nine rules but failed for one: the rule contains an invalid field name in this example. The engineer is going to need to fix that, get the tests to pass, and then they can move on to the peer review phase.

Let's talk about what happens when a detection engineer stages
those changes and asks their teammate to review their code. Here are a few lessons learned after doing this at a few companies. Just to acknowledge, I know we've got some detection engineers and threat hunters in here: this work takes a lot of time and effort. You have to do your threat research to understand attacker tactics, you're configuring a lab environment, you're simulating attacker behavior, you're generating events to analyze, and then you develop your logic and your tests to trigger your rule. So when you stage your proposed rule in a pull request for your team to review, your work is on display for people to pick apart, criticize, and give feedback on. It can be common for conflict to occur at this stage: you have to try not to become defensive, and you have to be able to receive criticism. I feel the way criticism and feedback is delivered in this review phase is really important. As the author of a rule, try to assume positive intent from your reviewers; the goal is to make the best detections possible. With different experiences and skill sets on the team, you might have an expert endpoint security engineer or an expert network engineer. If you know more about something than someone, don't say their rule sucks; avoid that. Spend time providing constructive feedback, explain your thought process, share knowledge, and remain humble. The quote on the right here, "don't be the reason improvements wither on the vine", has stuck with me over the years; it's from the book The Missing README. It means that as a reviewer, insist on quality but not perfection: help people get their changes approved and pushed, and don't become an impossible barrier; I'm sure we've all worked with people like that before. And then finally, at the bottom:
develop a rule style guide so people can agree on the layout of a rule and how a rule should be written, so they don't argue with each other; you can just refer back to this document that you've developed together.

All right, testing rules. In the field of detection engineering, this is a broad and deep topic; I'm going to try to cover some of the most important parts before we move on to how we can test the rule in our pipeline and validate that alerts were generated. Don't skip this step when you're building your threat detection capability: if you're not testing your rules on a regular basis, you can't confirm that your logging, monitoring, detection, and alerting pipelines are working properly, and at some point your detections are bound to fail and result in false negatives. If you have 10 rules, you can probably test and verify things manually, but when you're talking about hundreds of rules, that's not scalable. Some challenges and considerations on this subject: time constraints, for one. I don't know if you've ever written a test for a rule, but it can often take longer than writing the rule itself. The team might not have the expertise to develop those tests; in that case, do you look at hiring someone with that
expertise, or look to buy a tool that performs that testing and validation for your detections? And then at the bottom here I've got tech debt: teams might start building out their library of detections without developing tests, but that collection of rules can quickly grow into the hundreds, and it will take a lot of time to revisit those detections and develop tests for all of them.

Some reasons why I think people should care about having tests for their rules. Our environments tend to drift over time: technologies and entire networks come and go, and new preventive security controls are rolled out, so you might be trying to detect a threat to a technology that doesn't even exist in your organization anymore. Logging interruptions occur: if you have a rule running that's never being fed any logs, it's never going to fire an alert. Pesky vendors change their logging schemas; I'm sure a lot of us have fallen victim to this: you've got a rule relying on a particular field name and value, the vendor updates their schema, and your rule breaks. And attack techniques sometimes no longer work as systems are patched. Every rule has a shelf life, and if you have rules running in your detection engines that never fire, they're wasting resources that could be put to
good use.

A couple of options for testing rules. One option is to ship test or synthetic events to your SIEM to validate your ingestion and then validate that a rule generates an alert. This is better than having no test at all, but it doesn't validate that your logging, detection, and alerting pipelines are working end to end; you're essentially shipping old events to your SIEM that you collected previously. Like I said a second ago, if your vendor changes their logging schema, you're not going to detect that if you're shipping those older events to your SIEM. The more comprehensive option takes a bit more work, but you've got a set of tests that trigger your rules on a regular basis, and this validates all of the components of your monitoring and data pipeline and that the detection is working. If one of your tests fails, you can jump in and fix those issues before you're blindsided by missing attacker behavior. If you need ideas for testing rules, a couple of projects offer free collections of tests: Atomic Red Team by Red Canary and Red Team Automation by Elastic, formerly Endgame. It's important to note that you can't test everything: if you've got systems using machine learning for anomaly detection, it's
going to be hard to test those things right cuz they're relying on a Baseline and anomaly so you can't test everything and I think that's okay all right so yeah practical example of a tester rule we created earlier so in this example we've got some code that uses octa's API to carry out actions to trigger our rule so first it creates an OCTA user account that doesn't start with admin if you remember that logic earlier it assigns an admin Ro to that new account and then it deactivates and deletes the user account so we could configure our cicd pipeline to run tests like this on a schedule or when changes are being proposed to rules um yeah it's
It's unlikely you're going to be able to run this test in your production Okta environment; we've usually got approval processes for creating accounts and assigning admin permissions. But what you could do is have a dev Okta organization that's also shipping events to your SIEM. You carry out those actions in your dev instance, validate that your alerts are generated and that logs are coming through, and then close those alerts or tickets out automatically. All right, so once we've executed our tests, we need to validate that the alerts were generated by our detection rules. Here's an example where we're validating that an alert was generated by our Okta rule.
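A validation step like that might look like the following sketch. The SIEM base URL, the `/alerts/search` and `/alerts/{id}/close` endpoints, and the field names are all assumptions for illustration; substitute your SIEM's real alert-query API.

```python
import os
import time
import requests

SIEM_URL = "https://siem.example.com"  # placeholder SIEM API endpoint
HEADERS = {"Authorization": f"Bearer {os.environ.get('SIEM_API_TOKEN', '')}"}

def find_alert(rule_name: str, indicator: str, timeout_s: int = 300) -> dict:
    """Poll the SIEM until an alert from `rule_name` mentioning `indicator`
    (e.g. the Okta username our test created) appears, or raise an error."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{SIEM_URL}/alerts/search", headers=HEADERS,
                            params={"rule": rule_name, "query": indicator})
        for alert in resp.json().get("alerts", []):
            return alert
        time.sleep(15)  # give the logging/detection pipeline time to catch up
    raise RuntimeError(f"No alert from {rule_name!r} for {indicator!r}; "
                       "the logging or detection pipeline may be broken")

def close_alert(alert_id: str) -> None:
    """Close the alert so our own tests don't pollute the analysts' queue."""
    requests.post(f"{SIEM_URL}/alerts/{alert_id}/close", headers=HEADERS,
                  json={"comment": "Closed automatically: detection test"})
```

Raising an error on timeout is what lets the CI/CD pipeline job fail loudly when an expected alert never shows up.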
We can check that the alert contains the specific indicators from our test, like the Okta username we created and then deleted, and then close those alerts out. If any of those tests fail to generate an expected alert, you can raise that error and then jump in and investigate. In this example we can see the tests run by our pipeline completed successfully, so we can move on to the next step. All right, so once our tests have passed and our teammates have approved our pull request, we can merge our changes into the main branch of our GitLab project. When changes are merged into the main branch, that third CI/CD pipeline job kicks in.
This job updates the rules in our SIEM based on the code in GitLab: it compares the rules in GitLab to the ones in our SIEM and checks whether any updates are required, like creating new rules, creating a new version of a rule if the logic has changed, and enabling or disabling rules. In this screenshot you can see a new rule was created in our SIEM, it was enabled to run against our logs and enabled for alerting as well, and then we've got a summary at the bottom. So the CI/CD job takes care of deploying updates to our SIEM.
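The comparison at the heart of that deploy job can be sketched like this, assuming a hypothetical layout where each rule lives in the repo as a JSON file with `name`, `logic`, and `enabled` fields.

```python
import json
import pathlib

def load_local_rules(rules_dir: str) -> dict:
    """Read every *.json rule file in the repo into {rule_name: rule_dict}."""
    rules = {}
    for path in pathlib.Path(rules_dir).glob("*.json"):
        rule = json.loads(path.read_text())
        rules[rule["name"]] = rule
    return rules

def plan_changes(local: dict, remote: dict) -> list:
    """Diff the repo (source of truth) against the SIEM's rules and return
    (action, rule_name) tuples: create, update (new rule version when the
    logic changed), and enable/disable. Rules that exist only in the SIEM
    are left alone here; the sync-back job picks those up."""
    changes = []
    for name, rule in local.items():
        if name not in remote:
            changes.append(("create", name))
            continue
        if rule["logic"] != remote[name]["logic"]:
            changes.append(("update", name))  # deploy as a new rule version
        if rule["enabled"] != remote[name]["enabled"]:
            action = "enable" if rule["enabled"] else "disable"
            changes.append((action, name))
    return changes
```

The CI/CD job would then walk the change list and call the SIEM's create/update/enable APIs for each entry.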
Many teams will configure a job like this to run on a schedule, maybe once a day, to push those changes out to their SIEM and override any changes that were made in the UI, if people are still allowed to do that; just calling that out as an option. All right, so after that CI/CD job completes, our pull-latest-rules job runs again to pull a fresh copy of all the rules from our SIEM and commit them to the GitLab project. This is done to ensure our codebase is kept up to date with what's in our SIEM. As an example, when we created that new rule, we didn't know the unique ID for the rule or the creation timestamp; that's decided by the SIEM when the rule is created. So we're pulling that data back just to keep our codebase in sync with what's in our tool. The process for modifying a rule isn't all that different from creating a new rule: the detection engineer stages their changes in a pull request, and once the tests pass and they get that peer approval, they can merge the changes into the main branch and those changes are pushed out to the SIEM. Another benefit of doing this is the audit trail that's left behind as we create rules and modify them over time.
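That sync-back job might be sketched like this; `fetch_rules` stands in for whatever list-rules call your SIEM's API exposes, and the rules-as-JSON layout is the same hypothetical one as before.

```python
import json
import pathlib
import subprocess

def sync_rules_to_repo(fetch_rules, rules_dir: str) -> list:
    """Write each rule from the SIEM to <rules_dir>/<name>.json so that
    server-assigned fields (rule ID, creation timestamp) land in version
    control. Returns the list of paths that actually changed."""
    out_dir = pathlib.Path(rules_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    changed = []
    for rule in fetch_rules():
        path = out_dir / f"{rule['name']}.json"
        text = json.dumps(rule, indent=2, sort_keys=True) + "\n"
        if not path.exists() or path.read_text() != text:
            path.write_text(text)
            changed.append(str(path))
    return changed

def commit_changes(paths: list) -> None:
    """Commit the refreshed rule files back to the GitLab project."""
    if not paths:
        return  # nothing changed; keep the pipeline green
    subprocess.run(["git", "add", *paths], check=True)
    subprocess.run(["git", "commit", "-m", "Sync latest rules from SIEM"],
                   check=True)
```

Writing the JSON with sorted keys keeps the diffs stable, so the job only commits when a rule genuinely changed.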
This version control system is going to provide us with a commit history for each rule. The context for changes is preserved in those commit messages, and also in the pull requests where we've been working, where we talk about the basis for the rule and the reason for the change; there's conversation back and forth. So you're left with all this context from your engineers collaborating in those pull requests, and this makes it easy for us to review the previous state of a rule, or revert back to it if needed. Not every security tool has a version history that lets you do that easily.
There are further benefits to storing rules in a centralized repository. You might have auditors come in and ask for proof that you've got a particular detection in place. If you work at a financial institution, you might have an auditor ask for proof you've got detections for data loss prevention, or detections to satisfy SWIFT compliance, that kind of thing, and it's easy to show them you've got these rules without giving them access to your security tools; you can just give them access to the rules in a repo. If you've got a purple teaming capability, depending on the relationship between the blue team and the red team, you can give them access to review your detection logic, and they can look for ways to evade your rules, or run exercises to help you improve your coverage. And finally, this is searchable. If you have an emerging threat and you want to quickly understand your coverage for that platform or technology, you can search the repo and do a quick smoke test to see where you're at.
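That coverage smoke test can be as simple as scanning the rule files for a platform keyword or a MITRE ATT&CK technique ID. Here's a rough sketch, again assuming the hypothetical rules-as-JSON layout with a `tags` field.

```python
import json
import pathlib

def rules_covering(rules_dir: str, keyword: str) -> list:
    """Return the names of rules whose logic or tags mention `keyword`
    (case-insensitive), e.g. a platform name or an ATT&CK technique ID."""
    hits = []
    for path in pathlib.Path(rules_dir).glob("*.json"):
        rule = json.loads(path.read_text())
        haystack = rule.get("logic", "") + " " + " ".join(rule.get("tags", []))
        if keyword.lower() in haystack.lower():
            hits.append(rule["name"])
    return sorted(hits)
```

An empty result for a technique you care about is a quick signal that you may have a coverage gap worth researching.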
So, some key takeaways after implementing this at a couple of companies. When I think about what types of organizations can benefit from detection as code (and this is obviously subjective): if you're a large organization with a complex, dynamic environment and lots of security data available for analysis, I think this could be a good fit to help the security team keep their coverage up to date and stay one step ahead of attackers. If you need that additional auditing and change management we spoke about, it could be a good fit. You'll need budget for dedicated detection engineers, people who are doing research and writing detections, and threat hunters as well. And you're going to need modern security tools with an API that lets you do these things programmatically using code and automation. Conversely, on the right (I don't need to read all of these off), if you're a small organization, maybe without much security data or a dedicated security team, it's probably not going to be a good fit. But that type of organization is usually going to be partnering with someone like a managed security services provider, and they're going to be doing detection as code in a way, to manage rules across multiple customers. The top three advantages: I spoke about collaboration, and I've seen vendors sharing more detection logic with the security community, which I think is helping people build a stronger defense. It provides that change control we spoke about, ensuring that our rules are tested, reviewed, and approved before any changes are pushed out.
And then there's automated testing. I've got some more research on this that I'm going to be publishing soon, but I think a lot of people don't know how to get started with testing to ensure their detections, logging, and data pipelines are working. If you get this implemented, and it is a lot of legwork or up-front work, you can be alerted to issues early on, before you miss something. And finally (I'll share these slides, so they're already out there), some links to useful resources if you're interested in learning more about detection as code or detection engineering in general. If you're looking for tutorials on how to get started, I've got one that works with Google SecOps and one that works with Sumo Logic. Some of my favorite resources for learning the fundamentals are listed here too, and if you're looking for collections of free rules to experiment with, you can find links to those projects here as well. And that's it.
Thanks. Any questions? [Audience member:] Very insightful presentation, thanks. A question on change processes: for large enterprises, which you say this is great for, that have a formal change process, maybe using something like ServiceNow, how would detection-as-code tooling handle detection content changes?
Yeah, that's interesting. I haven't heard of anyone using ServiceNow to store their detection content. I don't know what kind of version control is available for those objects in ServiceNow, but what you could do is have a job that periodically lists all of those detection objects in ServiceNow, parses them into objects that your SIEM, or whatever security tool you're working with, understands, and pushes those out. There might be some kind of last-modified or last-updated field that you could key off to see if something's changed.
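To make that idea concrete, a polling job along those lines might look like the sketch below. The instance URL and the `u_detection_rules` table (and its `u_*` fields) are hypothetical, while `sys_updated_on` and the Table API path are standard ServiceNow.

```python
import os
import requests

SNOW_URL = "https://example.service-now.com"  # placeholder instance
TABLE = "u_detection_rules"                   # hypothetical custom table
AUTH = (os.environ.get("SNOW_USER", ""), os.environ.get("SNOW_PASS", ""))

def changed_since(last_run: str) -> list:
    """Return detection records updated after `last_run` (ServiceNow datetime
    format, e.g. '2024-01-01 00:00:00'), using the standard Table API."""
    resp = requests.get(
        f"{SNOW_URL}/api/now/table/{TABLE}",
        auth=AUTH,
        params={"sysparm_query": f"sys_updated_on>{last_run}"},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json().get("result", [])

def to_siem_rule(record: dict) -> dict:
    """Parse a ServiceNow record into the shape our SIEM tooling expects.
    The u_* field names are assumptions about the custom table's schema."""
    return {
        "name": record.get("u_name", ""),
        "logic": record.get("u_logic", ""),
        "enabled": record.get("u_active") == "true",
    }
```

Each run, the job would store its own timestamp, fetch `changed_since(previous_run)`, convert the records with `to_siem_rule`, and push them out through the same deploy logic as the Git-based flow.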
Yeah, no worries. All right. [Audience member:] Yeah, how do you usually work with the analysts, like, for example, the runbooks or whatever?