
Rastrea2r: Multi-Platform Threat Hunting and Incident Response

BSides PDX · 2018 · 49:09 · 165 views · Published 2019-02
About this talk
Rastrea2r is an open-source command-line tool for incident responders and SOC analysts to rapidly triage suspect systems and hunt for indicators of compromise across thousands of endpoints. It executes forensic artifact collection, memory dumps, and YARA scans via a client/server RESTful API without requiring additional agents, integrating seamlessly with existing security tools and consoles.
Original YouTube description:
Sudheendra S Bhat (@eaglesparadise) Rastrea2r (pronounced "rastreador", hunter in Spanish) is a multi-platform open source tool that allows incident responders and SOC analysts to triage suspect systems and hunt for Indicators of Compromise (IOCs) across thousands of endpoints in minutes. To collect forensic artifacts of interest from remote systems (including memory dumps), rastrea2r can execute Sysinternals tools, system commands and other 3rd party tools (including custom batch scripts) across multiple endpoints, saving the output to a centralized share for automated or manual analysis. By using a client/server RESTful API, rastrea2r can also hunt for IOCs on disk and memory across multiple systems using YARA rules. As a command line tool, rastrea2r can easily integrate with AV consoles and SOAR tools, allowing incident responders and SOC analysts to collect forensic evidence and hunt for IOCs without the need for an additional agent, with 'gusto' and style!
Transcript (en)

My name is Sudheendra Bhat, and I'm here to talk about Rastrea2r, one of the projects I've been contributing to for the past, I would say, eight to ten months. We recently presented this project at Black Hat Arsenal, where it was pretty well received, and I thought, hey, why not present it to our local community, so I can share some of the experiences I had with this project and some of the things I learned while implementing it. In my full-time job I'm a security architect at McAfee, and I'm also a maintainer of the Rastrea2r project. We have an organization with two projects right now: one of them is the Rastrea2r client, and the second one is the Rastrea2r server.

Let me give you a little bit of context on the history. This is not a new project; it was started by Ismael Valenzuela, a principal engineer at McAfee who has been in the security industry for almost 20 years now. It was his brainchild: he decided, OK, I'm having certain problems in the field, I'm trying to find more information, but the tools I have are not really giving me what I want. The story, as he tells it: on a stormy, windy day in New York there was a

snowfall, he had to work from home, and he basically spent two days building a quick and dirty prototype of this project, which he wanted to take to work on Monday and start using, to see if it really solved the problem. I'll get to the problem in a second. Since inception this project has been presented at several conferences: in 2016 it was presented at Arsenal, and it has been presented at multiple SANS conferences. Ismael is also a SANS instructor, so he has used this tool as part of the curriculum in some of the courses taught at SANS as well.

In early 2018 I decided to start contributing to this project, mostly because, one, I was trying to learn more about security incident response in general; in my day-to-day job I do mostly the defensive stuff at work, but I also wanted to build some tools that could assist security operations centers in general, especially with threat hunting and quickly identifying indicators of compromise. I thought this was one of those projects with good learning opportunities, so I just started contributing, and we added a bunch of new features that we presented at Black Hat Arsenal 2018. I have modified the slides; if you

actually go look it up online you will find the slides presented at Black Hat as well, but I have modified these slides a little bit to meet the needs of this audience; Black Hat was a very specific, targeted audience, so I'm also going to talk a little bit about security operations centers in general. I will also briefly discuss some of the newer features we added in this last iteration and the latest release, and show how you can use them. I also wanted to mention the credits: I'm not the only contributor. This project has been available as an open-source project for the last couple of years, but I recently added a bunch of new features and rewrote some of the modules to make it more adaptable. One of the main goals I'm trying to focus on is to make it more extensible, so that the open-source community can come and contribute and use it as they find necessary, basically putting a decent process around it. With that, I'm not really going to go into much detail about what security operations centers are and what they do, but before I proceed further: how many of you have heard of a SOC, or worked in a SOC? OK,

that makes my life easier. For those who are not really familiar with a SOC, I like to think of a SOC as a methodology, rather than a big organization with a dedicated set of folks sitting in a closed room, watching big monitors with different charts, taking care of all the security and cyber-related issues in and around the office. Along with that, I like to think of a SOC in terms of what it intends to do, because security operations centers can range from dealing with about 100,000 nodes down to 10 to 15 nodes. I come from a startup background: in the initial part of my career I worked for a startup that had, I would say, about two to three developers, about five program managers, and two sales folks. There, the SOC was myself; I was the entire SOC, doing the hunting, and also the troubleshooting and incident response. But the methodologies used in a bigger organization versus a smaller organization remain pretty much the same. From a SOC's perspective, they do monitoring, investigation, and preventative actions, which is where the majority of the advanced investigations occur, and finally reporting: what did we learn, what are the possible new threats we should be prepared for,

and so on. Along with that, there are multiple processes defined in a SOC; depending on the size of the company and the organization, there can be Tier 1, Tier 2, and Tier 3 SOC analysts, a dedicated incident response team, and so on. That's the general notion of a SOC, and for the Intel folks, this SOC is definitely not a system-on-chip; I was a part of Intel for almost six years. So what is a day like for a SOC analyst? In a decent-sized organization there will be some type of alerting mechanism, and there will be a report saying, hey, we are seeing some type of malicious activity on one of the nodes or endpoints in your organization. Now there are multiple questions a SOC analyst needs to address: what should I do next? Is it ransomware, is it an APT-type attack, how do I proceed? How do I start looking for IOCs? In bigger organizations there are typically vendor-supplied tools that give you an initial tracking mechanism, but the problem, like Ted mentioned in the earlier talk, is that since you are basically sitting behind the vendor's software, you

really don't have much control over what data you can leverage to go in and look for IOCs yourself. You have to believe whatever the vendor software tells you, and maybe it is not covering all the aspects. So one of the main reasons for this project was to give more control to the SOC analysts, or the hunters as we call them, to do threat hunting quickly. Depending on the company you may have vendor software you have already purchased, but if you don't, how do you go about looking for IOCs, and what tools do you use? Typically, if the malicious host is a Windows machine, you would start running some Sysinternals tools, or you would start looking at specific aspects like prefetch files or startup processes to identify whether anything malicious is going on. But currently there is no comprehensive tool available for the hunter to go in and look into the different aspects of a host to identify specific IOCs. So people, especially in smaller organizations, do the majority of these tasks manually. Why? First of all, these vendor software packages are not cheap. So

what is a typical day like? You will get an alert notification saying, hey, there is suspicious activity on one of your instances. How do you respond? You have multiple choices. First, you could just re-image the box; it might work for some time, but in the long run it's probably not a good idea, because it's like playing whack-a-mole. So what are the alternatives? That's where you start hunting, or you start a smart incident response process. If you think of it in the bigger sense, you get an alert, you start looking for indicators of compromise, or

IOCs, then you start triaging that specific host to see what else you can find out, to see if you can root-cause it at all. If not, time is of the essence: you have to contain that particular issue so it doesn't spread, so the blast zone for that issue is reduced. You have to act very, very quickly. Then you run advanced forensic analysis on that particular host machine to identify the root cause, work out how to eradicate the whole issue from your organization, and so on. This is the general methodology that I wanted

to list here so that we understand it. One thing I forgot to mention: if you notice, at each stage of the whole incident response process you may need to do some type of hunting. Maybe you start looking for IOCs and you have to use certain tools to identify whether there is any malicious activity, or any logs you can find that can lead you to a specific IOC. Or say you have identified the IOCs; at the next stage you want to identify your blast zone, the overall scope, and for that you again might need more tools to

identify how many host systems are affected in your organization, and so on. The point I want to make is that at each stage of hunting or incident response you will end up using multiple different tools. So how do you do all this? You could perform all these operations manually, running different tools from different vendors, or you could consolidate all the related tools required for threat hunting and use them as one centralized tool to perform the whole threat-hunting process, including looking for IOCs. That's where the genesis of this project came from.
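The core of that consolidation idea is simple: run each configured command and write its output to a shared results directory for later analysis. A minimal Python sketch of my own follows; it is not rastrea2r's actual code, and the command names are placeholders:

```python
import subprocess
import sys
from pathlib import Path

# Illustrative sketch of a "triage runner": execute a dict of named commands
# and save each tool's stdout to a results directory (locally here, but the
# same idea works against a mounted SMB share). Not rastrea2r's real code.
def run_triage(commands, results_dir):
    out = Path(results_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name, cmd in commands.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        # one output file per check, named after the tool that produced it
        (out / f"{name}.txt").write_text(proc.stdout)
    return sorted(p.name for p in out.iterdir())

if __name__ == "__main__":
    # use the Python interpreter itself as a stand-in for a real triage tool
    cmds = {"hello": [sys.executable, "-c", "print('endpoint data')"]}
    print(run_triage(cmds, "triage_demo_out"))
```

In a real deployment the commands dict would point at Sysinternals binaries or custom scripts, and `results_dir` at the central share.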

To give some background on the name: rastreador means hunter in Spanish, and like I mentioned, the primary author and pioneer of this project, Ismael, is a Spaniard, hence the name Rastrea2r. It is a command-line tool. Why? Because it's sexy, sure, why not; I took this slide from my Black Hat deck. Anyway, it is cross-platform, which really means you can run it on each of the individual platforms: Windows, Unix,

and macOS. Currently we have three separate versions of the executables, but we are trying to unify them in the future. It also has a RESTful back-end server where you can store all the results from the different scans that we'll talk about in the upcoming slides, and it can potentially run any command specified by the hunter. Why did we choose this design pattern? We wanted to enable the hunter to pick and choose their tools, or write their own; there's a saying that the best security tool is the one you write to solve your specific problem. For example, with forensic artifact collection, if you are specifically interested in getting only the data generated by netstat, you should be able to do that rather than depending on

other tools. I'll show the specific details in the upcoming slides, but we wanted to provide a generic framework that is extensible, so that people can come in and extend it according to their needs. Along with that, we have exposed some standard functionalities that are pretty useful on a day-to-day basis for any hunter, both to identify IOCs and to perform slightly more advanced investigations. It is built entirely in Python, and the expectation is that an organization uses it such that it can be executed from a centralized AV console or something like SCCM, a

central management repository through which you can deploy this particular process, execute it on the endpoint, and collect all the related data you need for IOC investigations. Why is this important? Imagine deploying a brand-new agent on an endpoint. I'm not sure whether you're familiar with this, but it takes a lot of approval process and security evaluation before you can start deploying something onto endpoints. We wanted to get out of that model and use the existing infrastructure, to see if we could deploy this executable to the endpoint, execute it, and collect the responses from that process to get whatever specific information you're

looking for. OK, so it is an open-source project; you can find it under the rastrea2r organization on GitHub. In terms of functionalities: we did the v1.0 release for Black Hat Arsenal this year and fine-tuned some of the operations, but in a nutshell the current functionalities we support start with fast triage. What does that mean? Fast triage is a mechanism where you can go and run any type of command on the endpoint and collect the necessary information from that command execution. Why is this important? Typically, whenever you're trying to

identify more IOCs, or more specific information about a given host, you would be interested in, say, the autorun processes, the netstat information, the web history, and so on. Fast triage is a mechanism where we expose multiple commands that are already defined for you; you can also write your own customized script and run it, so you can execute that fast triage on the endpoint and collect that specific information for further analysis. Prefetch is the other functionality; these all basically boil down to commands used with the Rastrea2r client.

Prefetch goes and fetches the prefetch files on Windows; that functionality is currently supported only on Windows. Why is prefetch important? As we know, it was introduced in Windows XP: when a process starts executing, Windows records the files it loads and writes a trace file into the Prefetch folder, so it can use it to improve the performance of that process later. That is good from a security point of view, because we can grab the Prefetch folder as-is and see which processes ran before, plus some more information about each of those processes.
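As a small illustration of why that folder is useful (a sketch of my own, not code from rastrea2r): even the prefetch file names alone tell a hunter which executables have run, since they follow the pattern `<EXENAME>-<PATHHASH>.pf`:

```python
import re

# Sketch: Windows Prefetch files live under C:\Windows\Prefetch and are named
# <EXENAME>-<PATHHASH>.pf, where the hash is derived from the executable's
# path. Without parsing the binary .pf format at all, the names already give
# a hunter a list of executables that have run on the host.
PF_NAME = re.compile(r"^(?P<exe>.+)-(?P<hash>[0-9A-F]{8})\.pf$", re.IGNORECASE)

def parse_prefetch_name(filename):
    """Return (executable, path_hash) from a prefetch file name, or None."""
    m = PF_NAME.match(filename)
    if not m:
        return None
    return m.group("exe"), m.group("hash")

print(parse_prefetch_name("CALC.EXE-AC08706A.pf"))  # ('CALC.EXE', 'AC08706A')
```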

Forensic artifact collection was one of the recent features we introduced; it uses CyLR. How many of you have heard of it? I'll go into detail about that as well. It's an open-source project too, and it goes and gets the entire system configuration as well as the user details from a given host. The interesting part about that particular tool is that it does not use any of the Windows APIs to perform the data gathering, which is good because we are not contaminating the endpoint with unnecessarily generated logs, and the forensic artifact collection is done entirely in memory, so there are not really any artifacts

left behind after the process executes. Web history, as the name suggests, gives you the history of all the operations performed using the browsers, and it collects information not from just one type of browser but across multiple browsers, and for multiple users as well. Yara-disk and yara-mem are scans performed using YARA. How many of you have heard of YARA? Great. Think of YARA as a general pattern-matching tool; it's also called the Swiss Army knife of pattern matching, especially for malware, used by malware

researchers to identify specific malware on a given system; we'll go into more detail about that. Deploying Rastrea2r on endpoints: like I mentioned, the idea is that we wanted to design Rastrea2r so that the client can be easily deployed to the endpoint and executed to collect the necessary forensic artifacts, so that we can quickly identify and look for IOCs. In this case, as I show in the diagram, if you see the arrow, Rastrea2r can currently be deployed from any AV console, for example ePO, which is a McAfee product; it doesn't necessarily have to be ePO, but any centralized console that can talk to each of

your endpoints should be able to execute this. From there it will be deployed to the endpoints, all the necessary commands will be executed on the endpoint, and the data will be collected and sent back as needed. I show two folders here: one is tools and the other one is data. One of the prerequisites for running Rastrea2r is that we expect you to have the infrastructure set up such that you have a tools and a data directory on an SMB share, if you are running this in a bigger organization. But if you are a small organization, or a one-person shop, then you can run

everything locally as well; there's no really hard-and-fast requirement on SMB. This is important because the tools folder is where you place all your Sysinternals binaries; we have a list of tools we currently support, which are used internally for performing certain artifact collection. Why don't we distribute them? Because the Sysinternals tools are licensed by Microsoft, and we don't want to maintain them in our GitHub repository. However, on that note, we are trying to implement a new command, call it rastrea2r init, which will allow you to set up these tools

and data directories at runtime, where all the tools will automatically be downloaded from the vendor sites to provide you with the necessary capabilities. OK, so like I was mentioning, Rastrea2r is a command-line tool. We have already compiled the exe and it's available in our Git repository, and if you don't trust our executable, which you shouldn't, you can take the spec files we have provided and use PyInstaller to generate the exe yourself. With respect to execution, we currently support yara-disk, yara-mem, memory dump, triage, and web history; I think one of them is missing,

which is collect; nothing fancy there. In terms of forensic artifact acquisition, like I mentioned earlier, this is a wrapper for CyLR, which is an open-source tool as well that helps you perform quick response collection. When I say it's fast, it is really, really fast compared to how you would typically gather the information from the endpoint, because the entire operation is performed in memory; there are no file operations involved, plus it is specifically optimized to execute certain commands internally to get the artifacts from the host machine. The other thing to note here is that

the CyLR tool generates artifacts that can potentially be used with third parties, and when I say third party I mean other open-source projects like Timesketch; I think that's a project from Google you can use to collaborate on threat hunting and on identifying IOCs as well. It provides a nice UI where you can collect all these artifacts and put them on a timeline, which is pretty useful, especially if multiple analysts are working on a specific problem. OK, at the bottom you can see the command used to execute this collect functionality; you will be expected to provide the parameters for the tools server as well as the

data server. I will show you an example towards the end of how to execute these commands. The next functionality we expose is triage, which I briefly mentioned earlier. Triage is the mechanism where you execute a specific set of commands; currently, for example, we support the Sysinternals tools, and you can also write your own custom command. You are expected to have that specific tool in the tools folder, so that it can be executed on the endpoint; while executing, Rastrea2r internally knows how to use that specific tool located in your tools directory, execute it on the endpoint, collect

the responses, and give them back to you in a readable manner. Like I mentioned earlier, we currently support only SMB shares, but there are plans to extend that to support local file systems as well as S3 and cloud-native storage in the future. So what does triaging look like? Like I mentioned, we currently support the basic Sysinternals tools, and for each command executed on the endpoint it generates artifacts, not in CSV, sorry, in a text format, with all the specific information about the infected host, which

can then be used by the hunter for further analysis and for looking for IOCs. One more example: we also run the startup-list tool internally, which gives you the processes that run as part of your operating system's startup, the autorun processes. In this slide we see netstat information; here I'm just trying to point out that you can see all the specific information about which ports are open, and whether anything malicious is going on, using the netstat logs.
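As a toy example of the kind of follow-up analysis this enables (my own sketch; the "suspicious ports" list and the sample output are invented for illustration), a hunter can mechanically scan collected netstat-style output for unexpected listeners:

```python
# Illustrative sketch only: scan collected netstat-style text output for
# listening ports a hunter considers suspicious. The sample text and the
# port list below are invented for this example.
SUSPICIOUS_PORTS = {4444, 31337}  # e.g. common backdoor/Metasploit defaults

def suspicious_listeners(netstat_text):
    hits = []
    for line in netstat_text.splitlines():
        parts = line.split()
        # expect lines like: TCP 0.0.0.0:4444 0.0.0.0:0 LISTENING
        if len(parts) >= 4 and parts[0] == "TCP" and parts[-1] == "LISTENING":
            port = int(parts[1].rsplit(":", 1)[1])
            if port in SUSPICIOUS_PORTS:
                hits.append((parts[1], port))
    return hits

sample = """TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 0.0.0.0:4444 0.0.0.0:0 LISTENING"""
print(suspicious_listeners(sample))  # [('0.0.0.0:4444', 4444)]
```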

The next command is the prefetch command, which, like I mentioned, is a very Windows-specific feature; it goes and collects all the information related to the prefetch files under the Prefetch folder. One thing to note here: the Rastrea2r command needs to be executed in administrator mode, because a lot of these folders require those privileges before you can access them. If you want it to work correctly and get all the related information from the host, it had better have admin privileges. The next one is web history, which we spoke about for a bit; it gives you information from the different

browsers for any given user on that particular host, and currently we support the IE, Chrome, and Firefox browsers. Full memory dump: this is one of the other nifty features, I would say. It internally uses winpmem, written by Michael Cohen, which captures not only the entire memory dump but also allows you to capture crash dumps, which can be pretty handy when you're looking for IOCs, since some threats can crash your existing processes and you can use that as an indicator of attack. The memory dump generates an image, and that image can be used further with third-party

tools like Redline from Mandiant, or Volatility, which is a pretty famous one for memory forensic analysis. The next feature: before we talk about yara-mem and yara-disk, I think it is important to understand two things, how the Rastrea2r server works and the intention behind it. The Rastrea2r server was initially written as a simple Python server. In an organization, once we collected the artifacts, great, we have enough data now, but what do we do with it? I mean, we could manually open each of these log files and see what is happening, but we were kind of looking at

one of the ideas, which was to see if we could automate the hunting process. If you want to automate, how would we go about it? We wanted to start leveraging some of the work that has been done on YARA, to see if we could scan using YARA and post those results to a server for further analysis by the hunter. Why scan with YARA? For one, we would be reducing the false positives, plus also reducing the scope to a very specific set of IOCs, so that we are not wasting time triaging the overall issue. Typically in a SOC, from what I have

heard, at least in the bigger SOCs, they get maybe 10 to 15 minutes at most to triage the initial issue. So we wanted to see if we could use YARA to scan the collected artifacts and identify any initial indicators of compromise. For that reason we have the Python-based server, which is exposed and can be hosted by any organization. It also has some basic support for authentication right now; currently we support only basic authentication, nothing fancy there. Again, in the future we will be adding support for LDAP and AD integration and so on. And it can be hosted only on a Linux machine right now, just because we didn't have time to

support Windows at the moment; there is documentation for it. The Rastrea2r server is specifically used for two use cases right now: one, the yara-disk scan, where we look for specific IOCs or malware on a given disk by running YARA scans against the disk; and two, running YARA scans on memory, where we look for specific processes or any indicators in memory. The results are currently stored as JSON. Why? Because that makes it easy for us to integrate in the future with other open-source tools, maybe an ELK stack, which can give you a graphical representation of the findings, and so on.
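To illustrate why JSON is convenient here (the field names below are my own illustration, not rastrea2r's actual schema), a scan hit can be captured as a flat record that is trivial to post to a REST endpoint or index later in an ELK stack:

```python
import datetime
import json
import socket

# Sketch of the kind of JSON record a YARA hit might be reported as. The
# field names are illustrative, not rastrea2r's real schema; the point is
# that a flat JSON document is easy to ship to a REST API and to index.
def make_hit(rule, path):
    return {
        "hostname": socket.gethostname(),
        "rule": rule,
        "path": path,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = make_hit("Evil_Dropper", "C:/Users/bob/dropper.exe")
payload = json.dumps(record)           # ready to POST or index
print(sorted(json.loads(payload)))     # ['hostname', 'path', 'rule', 'timestamp']
```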

With that, YARA: we briefly touched on it. It's an open-source tool developed by Victor Alvarez of VirusTotal, basically a pattern-matching tool, an advanced grep if you think about it. These days YARA is getting much more attention, especially considering that more and more security researchers are using it to identify malware, and they're sharing their YARA rules publicly so that everyone can use them to quickly identify those specific malware signatures in their own organizations too. Most of the AV vendors are adopting this idea of YARA as well, so if you haven't heard about it, I think it would be worth

taking a look. YARA is also powerful in that it's not only for searching for specific patterns; it gives you a ton of customization options, and there are already a lot of add-on modules built around YARA that are readily available free of cost. Along with that, yarGen is a very handy tool that takes a malware sample you have as input and generates a YARA rule for you. Why is this important? Because you can then run that rule against all the other endpoints in your organization, which saves a lot of time, especially from a threat-hunting

perspective. So what does a YARA rule look like? It has a pretty easy-to-understand format: some context about the rule, the specific patterns to search for, and a match condition. There can be a bit of a learning curve initially when you start writing YARA rules, but it is certainly doable if you ask me, and it gives you the tools to fine-tune your pattern matching just like any other pattern-matching tool, which makes it pretty unique in terms of malware hunting, let's just say that.
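To make that concrete, here is a small illustrative rule of my own (not one shown in the talk), with the three parts just described: a meta section for context, a strings section with the patterns, and a condition that must match:

```yara
rule Example_Suspicious_Dropper
{
    meta:
        description = "Illustrative example only, not a rule from the talk"
        author      = "hypothetical"
    strings:
        $cmd  = "cmd.exe /c" ascii          // a plaintext pattern
        $code = { 6A 40 68 00 30 00 00 }    // a hypothetical byte sequence
    condition:
        $cmd and $code
}
```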

Yara-disk is one of the commands you can execute using Rastrea2r: it performs a YARA scan on the given directory you specify in the command, looking for the specific patterns you have provided in your YARA file. The rule files are currently expected to be on the server, because every organization may have its own specific set of rules; if you are planning on hosting your own Rastrea2r server, you can also write your own YARA rules and place them on your server for further consumption by your Rastrea2r clients. Upon successful hits, the YARA scan will basically tell you which file was affected, where you can find it, the location of

that particular file, and so on. The next command is yara-mem, which is similar to yara-disk, but here the scan is done entirely in memory. There is a brilliant article by Michael Cohen where he analyzed which is better: should we do a YARA scan in memory, or a YARA scan on the disk? Based on that article, one of the findings was that, done right, yara-mem can be very, very efficient in terms of time consumption. Imagine running these YARA scans on 50,000 endpoints: you want it to be much more optimized, and for that

purpose erm can be pretty handy as well in terms of that hunting but essentially they both do the same thing like you provide a Yarra file with the specific patterns it goes and runs the Yarra scan in memory or versus the disk yeah it also some other information it returns some basic information like on a successful hit which host name it was affected what file what is the location what process and so on so forth so now we spoke about different commands that are associated that are exposed by lasted or and if you think of it before we go on the general notion right like the idea here was to expose certain functionalities that can be used on a

So now that we have spoken about the different commands exposed by rastrea2r, let's step back to the general notion. The idea was to expose functionality that can be used on a day-to-day basis for hunting: what does a typical threat hunter do on a given day when a new alert is issued? A generic flow would be: your IDS/IPS triggers a notification, so you have a basic sense of which endpoint is affected, and you start from there. Typically you would use the collect module, as I showed earlier, for the initial level of data collection. That is important because the collect module does not leave any data behind; it will not contaminate the existing operating system and its file structure. If the hunter is still not very clear on the IOCs based on that information, you could run the triage command. With triage you can use any of the Sysinternals commands or your own custom scripts to go and fetch specific information that is not captured by collect. At this point we would assume there are some signs of IOCs, and a trained hunter is pretty much able to tell what is causing the issue and how to eradicate it by looking at the different data collected by the triage and collect commands.

After that, you would perform YARA scans on the endpoint to see the full scope of that particular issue, and also run the scan across the overall environment to see which other hosts and endpoints are affected. Once you do that, you keep repeating the process, because initially you will hit a lot of false positives; as you fine-tune your YARA rules you get better results and can define the overall scope of the issue. Once you have figured out the entire scope, you can think about the remediation plan: how would you go ahead and remediate that specific issue? Then you might use the memory dump to perform further analysis and identify the root cause, or maybe not, if it is a straightforward issue. The point I wanted to make is that there are basic tools now integrated into rastrea2r which are pretty much good enough to start off the threat-hunting process. Of course it is not the most polished code out there; it is open-source code. But the advantage of this tool is that it can be customized for your needs, based on your requirements.
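
The scan-and-refine loop described here (scan the fleet, set aside known-good hits, tighten the rule, rescan) could be sketched like this; the allowlist approach is my illustration of one way to cut false positives, not something prescribed by rastrea2r:

```python
def triage_hits(hits, known_good_paths):
    """Split raw YARA hits into likely-real findings and probable
    false positives, using an allowlist of trusted file locations."""
    findings, false_positives = [], []
    for hit in hits:
        if hit["file"] in known_good_paths:
            false_positives.append(hit)
        else:
            findings.append(hit)
    return findings, false_positives
```

After each pass, anything landing in the false-positive bucket suggests a string or condition to tighten in the rule before the next fleet-wide scan.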

Some people might focus only on memory scan analysis; others might only be interested in which processes are running, or which ports are open across the organization, and so on. The point is that you can use rastrea2r for a variety of activities that greatly assist in identifying IOCs or any type of threat. So what are your choices for customization? As I mentioned, with the 1.0 release our idea was to make the tool very easy to adopt. You may already have certain tools you have purchased, so how would you run those? You can customize that by writing a shell script that rastrea2r executes as well. I am not going to go into much detail about server customization; there are some basic options we support there. The client side is where you specify the different Windows or Unix commands you want to run with the triage module. As I pointed out, you can also tell it to run your own customized command to perform a specific task on an endpoint and collect the response, again using the triage command.
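
A sketch of what this client-side customization might look like: a per-platform command table driven by a small runner. The table format and the command names are my illustration, not rastrea2r's actual configuration syntax:

```python
import platform
import subprocess

# Illustrative triage table: command name -> per-OS argv.
TRIAGE_COMMANDS = {
    "list-processes": {
        "Windows": ["tasklist"],
        "Linux": ["ps", "aux"],
        "Darwin": ["ps", "aux"],
    },
}

def run_triage(name, commands=TRIAGE_COMMANDS):
    """Run the configured command for this OS and return its output,
    which the client would then save to the centralized share."""
    argv = commands[name][platform.system()]
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout
```

The same runner covers Sysinternals tools, system commands, or custom scripts: whatever argv the table maps the name to for the endpoint's platform.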

Apart from that, as I mentioned, we added a bunch of new features to make rastrea2r a bit more enterprise-ready in terms of how it can be deployed on a server, the logging options, and so on. This part can be pretty interesting: we are actively working on this project and adding newer features, such as the init method I mentioned. One current requirement is that you are expected to host your own tools and data folders. Why? Because we do not want to distribute the tools themselves; these are Windows tools or other open-source tools. The expectation is that, according to your needs, you go ahead and create a tools folder and put all your tools inside it so they can be used by rastrea2r. What rastrea2r does is enable you to execute those tools on an endpoint, and it also provides the output in a consistent manner that is easy for you to read and process further. We are adding a bunch of features, including support for dockerized deployments of the server; especially if you want to quickly deploy something onto your existing infrastructure, I think Docker is the way to go, so that is coming. And as I mentioned, we are also planning to support LDAP for rastrea2r.
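
A dockerized server deployment, as mentioned, might look roughly like this; the entry-point file name and port are assumptions for the sketch, not the project's actual Dockerfile:

```dockerfile
# Illustrative Dockerfile for hosting the rastrea2r server component.
FROM python:3.9-slim

WORKDIR /opt/rastrea2r-server

# Server code plus the organization's own rules/ and tools/ folders,
# which rastrea2r expects you to supply rather than distributing them.
COPY . .

RUN pip install --no-cache-dir -r requirements.txt

# REST API port -- adjust to match the server's configuration.
EXPOSE 5000
CMD ["python", "rastrea2r_server.py"]
```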

That way it can be easily integrated at an enterprise level, rather than relying on basic authentication. One of the big things we are trying to focus on: currently we have three separate executables for Windows, Mac, and Linux, and one of the main things we are trying to add is to unify them and provide one executable for all platforms. This can be especially handy if your organization has endpoints running on multiple different OSes. Along with that, there are some other notable projects I wanted to list here. One of them is the OpenCNA project.

One of the problems with rastrea2r is that you run all of these different tools and generate logs, and every tool has its own formatting; the results are not unified and there is no clean standard, so you end up manually reviewing the generated artifacts. The OpenCNA project focuses on exactly that. It is built on top of rastrea2r, using it for data collection, but it does a little more: it provides parsers for each of these results, for example netstat output or autorun processes. It runs those parsers on top to unify the output so that analytics can be run on it. It is a very interesting project: the data collected from different endpoints is parsed to generate analytics, and based on that you can take further action, which is especially useful for advanced, proactive IOC hunting. And as I mentioned, rastrea2r also provides wrappers for tools such as winpmem, which is used for memory dumps; you can learn about that at the links shown. Awesome-YARA is a brilliant reference for anything YARA-related: there are at least thousands of publicly available YARA rules there.
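
As a sketch of the kind of per-tool parser OpenCNA layers on top of the raw logs, here is a minimal netstat-line parser producing a unified record; the field names and the expected line shape are illustrative:

```python
def parse_netstat_line(line):
    """Parse one 'netstat -an'-style line into a unified record,
    so output from many endpoints can feed the same analytics."""
    parts = line.split()
    if len(parts) < 4 or parts[0] not in ("tcp", "udp", "tcp4", "tcp6"):
        return None  # header or unrecognized line
    return {
        "proto": parts[0],
        "local": parts[3],
        "remote": parts[4] if len(parts) > 4 else "",
        "state": parts[5] if len(parts) > 5 else "",
    }
```

One such parser per tool (netstat, autoruns, process lists) is what turns a pile of differently formatted logs into records that analytics can consume.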

So you do not have to reinvent the wheel, and I would definitely recommend checking that link as well. In conclusion, one thing I specifically notice about SOC work, threat hunting, and looking for initial IOCs is that there are way too many tools out there, and if you are not using the right set of tools for the right set of problems, you are basically wasting your time. Identifying and using the right set of tools can be very effective, and the whole idea behind rastrea2r was to allow hunters to customize their tools according to their needs. We added new features to rastrea2r not only to perform collection across multiple endpoints quickly, but also to enable the hunter to run scans on top of that data, which is pretty handy given the amount of data you are collecting if you have, say, 10,000-plus endpoints in your organization. As a general statement: rastrea2r, as I mentioned, is being used actively in some production systems with 50,000-plus nodes, and in the majority of cases we were able to use this tool to identify the IOCs, at least as a starting point.

The whole idea is to leverage your existing forensic tools along with rastrea2r: execute those tools on the endpoint, and use rastrea2r not as a tool but rather as a framework that allows you to integrate these different tools quickly and seamlessly. That's about it. If you have any questions, let me know. [Applause]