
Thank you. It's nice to be here. This has been on my bucket list for several years, so it's really nice to be at BSides Tallinn. The last time I was in Tallinn was exactly 10 years ago, for one of my first SANS courses, so it was time to come back. Raise your hand if you have used any of the tools you see on the screen. Nice. I hope I can inspire some more of you to try them out, because these are definitely among my favorite tools for incident response and forensic engagements.

I'm Marcus. I'm a security architect and incident response lead at the Swedish company Sectra. I work in a security team where we manage the cloud-hosted services that we provide. I'm a blue teamer at heart, so I like all things on the defensive side, and I try to focus as much as I can on incident response preparedness, which is really cool work. I've been fortunate to take a lot of SANS courses and certifications. That might seem like a really expensive journey, but it doesn't have to be: I can highly recommend the SANS work study program, where you take the course as a student but arrive one day early to help prepare the classroom, and for that you get a really good discount, normally 75 or 80 percent. That's what I did for most of my courses. They are great, and I'm not paid to say that.

As for the agenda: we'll start by introducing the use cases and some challenges with forensic work today, look at my favorite tools, add some automation on top, then walk through a demo of a case example showing how this can be tied together in real life, and finally give you some hints on how to get started doing this on your own.
At Sectra we provide solutions for medical diagnostic imaging, normally X-ray images like the ones you can see here, and our customers are healthcare providers all over the world. We have several thousand installations globally, usually at hospitals, and these systems are of course really important.

So, setting the stage with a use case for this discussion: imagine a doctor at your local hospital. He's working in the radiology department analyzing medical images, and the goal is of course to provide a diagnostic report and recommend further treatment. This function is crucial for every hospital: when it doesn't work, they may have to close the emergency room and redirect ambulances. You want these systems to work, always. One day when he gets back from his coffee break, it's a Friday of course, late at night, his screen looks like this. That is not a good thing for anyone because, as I said, they may have to close the emergency room. Now there is a need to quickly investigate what has been going on on this and nearby endpoints, both for the customer, who operates the system, and for my company, who tries to help them get these crucial systems back up and running again. It might seem like a drastic example, but unfortunately this is really common in the healthcare industry. Healthcare is under constant attack, it happens daily, and it also happens to our customers, unfortunately.

There are other use cases as well. The typical one is investigating security alerts from your SIEM, your EDR, or your SOC, when you need to go a bit deeper and validate an alert. I also use this a lot for more operational troubleshooting. Let's say there is a server behaving oddly every night at 2:00, but nobody can really understand what's happening; then the process I will show you is really helpful, because you can investigate exactly what happens at 2:00, which processes start, which files are touched. So it's just as useful for operational work, whether you are a full-time incident responder or a sysadmin. You can even use it for CTFs. I did that a lot when I started out in security, and luckily there are quite a few CTFs now with forensic challenges where you get a disk image or individual forensic artifacts to analyze. If you have this semi-prepared, you skip a lot of setup time, get an edge on the other players, and can start finding the answers quickly.
So I think this is a really good use case for anyone. Now, some of the forensic challenges: attacks today tend to have a really short timeline. The time from initial compromise to the endgame, whether that's exfiltration or ransomware, is shrinking. It's really short, so you don't have much time to analyze and figure out what is going on if you want to stop the intrusion. And it's normally not just one endpoint that has been affected; you usually have many endpoints that you at least suspect could be involved in the incident. So you need to analyze many endpoints, at the same time, and immediately. Disks are typically really large, so the traditional way of doing full disk forensics, where you grab a disk image, analyze it, and come back with a report after a week, doesn't really help us. It's not quick enough. We need quick data access from all these suspected endpoints.
The forensic triage process looks like this. Whether we have one endpoint or several, we want to pull the most interesting forensic artifacts from those endpoints, process them so they can be analyzed easily and at scale, and reach the analysis stage as quickly as possible, because what we pull from the endpoints is raw files that are a bit hard to interpret directly and don't really scale. That is the overall process, and a crucial piece of it is triage images. Instead of using full disk images, which can be really large, you pull only the forensically relevant artifacts: event log files, registry hives, file system metadata, browser history, and anything else that could be valuable. You also want to pull live artifacts, such as the list of running processes, network connections, and the DNS cache. If you do that, you normally end up with a triage image of around 2 GB instead of a full disk of hundreds of gigabytes, and when you compress that triage image, you typically have 200-300 megabytes in a zip file. That is quite easy to handle: easy to copy, move, and process compared to a big disk image. I've seen a case with a 100 GB Azure VM disk where somebody had to figure out how to create a disk copy and transfer it to Sweden; I think it took three or four days before people could even start analyzing it. That doesn't work.

So triage images are great, but we have an EDR. Isn't that good enough? We have prepared ourselves; we have an EDR for detection and response. The problem with EDRs is that, normally, when there is an incident, there is no EDR. And even if you think there is one, maybe the EDR deployment didn't work on the affected endpoints because the server was too old or whatever. But let's say the EDR is working. EDRs are really focused on collecting telemetry for detection, so when you want to go back and see everything that happened on a system, you don't have full coverage: the telemetry streamed to the cloud is heavily filtered and sampled, good enough for detection. But when I want to see which processes ran and which network connections took place, I want the full picture, nothing sampled, without having to guess what might be missing. That's why the EDR alone isn't good enough. You can pull some investigative packages using the EDR, but those are semi-processed files that are really hard to handle, especially if you have many endpoints and want to work at scale.
So the EDR doesn't help us here, though we will come back to how the EDR can still be really valuable in an incident. Anyway, what do we do instead? Enter Velociraptor, my favorite tool. It's an open-source tool traditionally used in a client-server setup where you pre-deploy Velociraptor agents on all your endpoints; they communicate with your Velociraptor server, and you can run live hunts, pull files, and more. It's great, and it's really what you want pushed to all your endpoints. But just like with EDRs, when there is an incident, there is normally no Velociraptor client deployed. There is, however, a bonus feature in Velociraptor: you can create an offline collector. That is basically a single EXE file, normally 50-60 megabytes, containing the config and functionality to pull the forensic artifacts you want, create a triage image, and produce a zip file with everything you need for your investigation. That zip file can be manually copied off the system, automatically uploaded to the cloud, or sent to a file share. Whatever you choose, this is a self-contained binary that almost anyone can run, because while you may have the expertise on the receiving end, analyzing the data, you want anyone who happens to be in front of the computer to be able to run the collector, since that is who can get you the data now. The endgame here is really ease of use.
So how do we know which files to pull? We don't really know in advance which forensic artifacts will be relevant for a given host. Some of you may have used KAPE. KAPE is another great tool for creating these triage images and processing the collected data. It has some limitations: it works mainly on static files, and it has licensing restrictions, so you cannot use it commercially without a license. But something really good about KAPE is its file targets. Those are definitions of which files to pull from an endpoint when there is an incident, and the example shown here contains the file paths and globs for finding the Windows event logs, which are of course the most important thing. There is a big library of such definitions, and while they are used by KAPE, they can also be used with Velociraptor. This is how you tell the Velociraptor offline collector which files you want, and the good thing is that there are compound targets that include lots of other targets: you just tell Velociraptor to use the KapeTriage or SANS_Triage compound target, and it will make sure you get almost everything of forensic value. These KapeFiles targets are maintained as an open-source project in a GitHub repo. KAPE itself is not open source, but the file targets are. You can push your own contributions or modify the existing ones, and there is a really active community around it, so when there is a new artifact on, say, a Windows 11 system, it will soon be added there and flow into your future collections. It's a great resource maintained by the community.
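To make that concrete, here is a rough illustration of what a file target boils down to: a named path plus a glob, and collecting it is essentially a recursive file search. This is a sketch; the field names mirror the KapeFiles YAML format, but the entry below is illustrative, not copied from the official repository.

```python
# Illustrative sketch: a KapeFiles-style target is essentially a named
# path-plus-glob, and collecting it is a recursive file search. The field
# names mirror the KapeFiles format; this entry is an example, not an
# official target from the repository.
import glob
import os

event_log_target = {
    "Name": "Windows Event Logs",
    "Path": r"C:\Windows\System32\winevt\Logs",
    "FileMask": "*.evtx",
}

def expand_target(target):
    """Return every file on this host matching the target definition."""
    pattern = os.path.join(target["Path"], "**", target["FileMask"])
    return glob.glob(pattern, recursive=True)

if __name__ == "__main__":
    for path in expand_target(event_log_target):
        print(path)
```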
But we want to collect more than files. Even if we have the Windows event logs and registry hives, there are, as I mentioned before, live artifacts you want in the investigation as well, such as the running processes, active network connections, and DNS cache, and Velociraptor can include those in the triage collection. So you get the files, and you also get this live data, which is really crucial. You can also include third-party tools here, for example Sysinternals Autoruns, which gives you information about everything with persistence on the system, everything that starts automatically, and that is normally where you find malware persistence. Autoruns is a separate EXE file, but Velociraptor bundles it inside the offline collector, so it's still just one file for the user to run, and that binary can then launch the other tools, like Autoruns. You can even include WinPmem to acquire memory, but then of course your triage collection becomes huge, because we all have a lot of memory now. Still usable, but I would put that in a separate offline collector just for memory. A really new feature, released in the last few weeks, is adaptive collection. Let's say you have malware persisting via a scheduled task: you will see traces that the malware ran and existed on the file system, but you normally don't get the EXE file, the binary itself. With adaptive collection, Velociraptor iterates over the running processes and scheduled tasks and pulls the interesting files into the triage image as well. And to avoid collecting all the bundled Microsoft binaries, you would normally exclude the signed ones; the unsigned ones are normally the potentially bad ones. That is a great addition, because it gives you the malware itself, not just the metadata.
How do we build this offline collector? There are various ways. You can download the Velociraptor binary and run it with the gui command, which brings up a web interface where you can quite easily build the collector. But there are quite a few config options: which KAPE targets to select, which other artifacts such as the process list, network connections, or Autoruns. So when you come back after a year and want to update your collector, it's kind of hard to remember exactly which config options you used last time and how to keep building on them. A better way is to use a spec YAML file, where all the config lives in one file and you create the offline collector on the command line. That makes it really easy to build new trial versions with new functionality, and to handle the whole thing as config as code.
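As a sketch of that config-as-code idea: the build can be a tiny wrapper around the Velociraptor binary. This assumes the `collector` subcommand described in the Velociraptor offline collector documentation, plus a spec file you keep in version control.

```python
# Minimal sketch of a scripted collector build. Assumes the velociraptor
# binary is on PATH and that its "collector" subcommand accepts a spec
# YAML, as described in the offline collector documentation.
import subprocess

SPEC_FILE = "offline_collector_spec.yaml"  # your version-controlled config

def build_offline_collector():
    # Produces the self-contained collector EXE described above.
    subprocess.run(["velociraptor", "collector", SPEC_FILE], check=True)

if __name__ == "__main__":
    build_offline_collector()
```

The point is that the spec file, not someone's memory, is the source of truth, so next year's rebuild starts from last year's config.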
If you're lazy, or just want to get started, you can also use a pre-packaged offline collector: triage.zip, an open-source project by Eric Capuano and Whitney Champion at Digital Defense Institute. It's a normal Velociraptor offline collector created with a default config of KAPE targets, which is a great start. Of course you will want to customize it for your environment, but it's a way to just download the file, run it, and get a zip file with lots of good stuff. The project also has a good build script that you can base your own build scripts on when you want a proper pipeline for producing offline collectors, because maybe you need different collectors for different areas of your network, each pushing to its own file server.

Now we have the offline collector ready. How do we run it? That depends on the situation. It could be someone walking up to the server with a USB stick, inserting it, running the collector, and pulling the triage image off manually. You could also RDP to the system and run it, but then you would expose your admin credentials, so if there is an ongoing intrusion you really don't want to access the server that way. PowerShell remoting is a better option in that case: run the collector from a nearby host and pull the file back through the PowerShell session instead.
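To sketch what that remoting approach could look like from an analyst box, here is a rough Python version using the third-party pypsrp library; the hostname, share path, and credentials are placeholders, and in practice you might do the same thing directly in PowerShell.

```python
# Rough sketch: run the offline collector over PowerShell remoting
# instead of RDP, so no interactive admin logon lands on the suspect
# host. Uses the third-party pypsrp library; host, credentials, and
# paths below are placeholders.
from pypsrp.client import Client

HOST = "suspect-host.example.internal"                     # placeholder
COLLECTOR = r"\\fileserver\ir\Collector_velociraptor.exe"  # placeholder

script = f"""
Copy-Item '{COLLECTOR}' C:\\Windows\\Temp\\collector.exe
& C:\\Windows\\Temp\\collector.exe
"""

client = Client(HOST, username="ir-admin", password="...", ssl=True)
output, streams, had_errors = client.execute_ps(script)
print(output)
```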
A special case is running the offline collector against a static disk image. Let's say someone hands you a 50 GB disk image and you don't want to push all of that through your processing pipeline, because it's still 50 gigs. You can run the offline collector against the image and create a 200 MB triage image from it, and the rest of the process works just as if it had been a live system. It's also a way to save data and processing time.

Back to the EDR. Let's say we have one: it's not enough to give us the telemetry we want, but we can leverage the EDR to run the offline collector. This example is from Defender for Endpoint, but there are similar ways of launching your own custom scripts with CrowdStrike Falcon and other EDRs. You prepare a launcher script that downloads the offline collector, runs it, and pushes the triage image to cloud storage. That is a really quick way of getting onto an endpoint, because you don't want to hunt down the user and get the local admin password just to run the collector; this way you already have a hook on the system through the EDR, and you can leverage that. You do have to mind data sizes and timeout limits, because those custom scripts cannot run forever. I think the limit is sometimes around 10 minutes, which can mess with your collection, so you have to run the collection as a background task. But it's a really good way to run the collector if you have an EDR.
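A minimal launcher sketch, shown in Python for readability; a real one would likely be PowerShell pushed through the EDR's live-response feature, and the download URL and paths are placeholders.

```python
# Sketch of an EDR-launched collection on Windows: fetch the offline
# collector and start it detached, so the live-response script returns
# before its timeout while the collection keeps running. The collector
# itself is configured to upload the triage zip to cloud storage.
# URL and paths are placeholders; requires the requests package.
import subprocess
import requests

COLLECTOR_URL = "https://irtools.example.com/collector.exe"  # placeholder
LOCAL_EXE = r"C:\Windows\Temp\collector.exe"

resp = requests.get(COLLECTOR_URL, timeout=60)
resp.raise_for_status()
with open(LOCAL_EXE, "wb") as f:
    f.write(resp.content)

# DETACHED_PROCESS (a Windows-only flag) lets the collection outlive the
# live-response session and its roughly 10-minute script limit.
subprocess.Popen([LOCAL_EXE],
                 creationflags=subprocess.DETACHED_PROCESS,
                 cwd=r"C:\Windows\Temp")
```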
After running the offline collector, you have to get the image somewhere, and my favorite way is to push it to cloud storage. This example uses Azure Blob Storage: wherever you run the offline collector, as long as it has an internet connection, it pushes the triage image to this blob storage. Of course you have to harden that storage so the collector can only write files, never read them, because the collector embeds the SAS tokens it needs to do its job. It's a good setup because wherever the collector runs, the output lands in the same place, which is where you will later process the data. You can then add nice touches like a Teams or Slack notification saying there is a new file uploaded to your IR file drop and work can start.

If we stop here for a moment: even if you were to leave now, remember this much. Prepare an offline collector, or use the pre-built triage.zip one, but even better, customize it. Make it available and easy to find for anyone, so that whenever there is a suspicion that a system is behaving in a weird way, you can easily download it, run it, and start analyzing. Even if you just do that, I think you have gained a lot, an edge.
We now have a lot of data: triage images full of raw forensic artifacts. So what's the next step? We want to process them with our favorite tools, and OpenRelik is a really good platform for that. It's open source; not an official Google product, but maintained primarily by people in or near the Google security team. It was released about one year ago and has an active community around it. It's a tool for collaborative forensic workflows: this is where you upload your forensic artifacts, a triage image or single artifacts, and define what should happen when they are uploaded, which tools should run, and where the output should go. It's also really good for collaboration, because it's a web-based tool, so you can share artifacts and the results of the processed data and work on the same case together. OpenRelik uses workers, and each worker is a single Docker container. You can run everything on one VM, but you can also scale out if you need more processing power, up to large Kubernetes clusters for better performance. Some workers are really simple, like extracting files with a specific file name from an archive; others run larger tools like Hayabusa and Plaso, which we will look at soon. There is also a really good worker template, so if your favorite tool doesn't have a worker yet, you can create your own, publish it on GitHub, and have it shown on the OpenRelik marketplace if you want; that way we all make this even better. And there are really good starter workers available already.

An OpenRelik workflow looks like this. We have an input file, in this case the disk image on the left, and we define which tasks should process the data. The output of one task is the input of the next, so you can run things both in sequence and in parallel. Here the log files are extracted and parsed by Hayabusa, which creates a CSV file, and in parallel Plaso runs on the same source data. This way you can build really efficient, dynamic workflows and start using the data that has finished even while other tasks are still running; the workflow keeps working. It's also a really intuitive way of seeing what happens when you upload a triage image, for example.
A quick example. On our OpenRelik server, things are organized into files and folders, and we create a new subfolder for this data set. We have a zip file with various forensic artifacts; it includes Windows event log files and potentially other things as well. We upload it, and now we create a workflow, which is where we define what should happen to these forensic artifacts. We want to extract the Windows event logs, so even though the zip may contain other things, we use the extract-artifacts task and tell it to only care about the EVTX files, the Windows event logs. Those files are sent to Hayabusa, which parses them and creates a CSV file as output. In parallel, we send the full zip contents, whatever they are, to Plaso for more complete parsing, which will also create a CSV file as output. Then we kick off the workflow, and it starts running in parallel. While it runs, we can save the workflow as a template, so we can reuse it next time we want to do something similar, and our teammates can of course reuse the templates as well. Now we see that the extract step finishes almost immediately: it has extracted one file, the Security.evtx, which is also visible down below in the repository. When Hayabusa is done with its CSV file, that will show up down there as well.
Plaso, meanwhile, is still working, and it takes more time, as it usually does. But this means we can already start using the CSV output from Hayabusa while Plaso runs.

Hayabusa is my second favorite tool. It's an open-source project by Yamato Security, and it's wicked fast. It applies Sigma rules and its own Hayabusa rules to Windows event logs, and you can run it against large volumes of events. The output is all the detections from the Sigma and Hayabusa rules; as you can see, there are around 4,000 Sigma rules now, and it will apply all of them, or a subset if you want, and produce a list of potential issues: detections from those rules. As I said, speed is king here, because this is normally the step that finishes first, and it's where you get hints about whether there is any malicious activity on the system; if there is, it will definitely show up here. So this is where I normally start.
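For reference, running Hayabusa outside OpenRelik is a one-liner; roughly like this, wrapped in Python, with the log directory path as a placeholder.

```python
# Sketch: run Hayabusa over event logs extracted from a triage image.
# Assumes the hayabusa binary is on PATH; csv-timeline is its subcommand
# for producing a CSV of Sigma/Hayabusa rule detections.
import subprocess

subprocess.run(
    [
        "hayabusa", "csv-timeline",
        "-d", "triage/C/Windows/System32/winevt/Logs",  # placeholder path
        "-o", "hayabusa_results.csv",
    ],
    check=True,
)
```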
The other tool we have seen is Plaso, also an open-source project. It includes a lot of parsers, so you can throw almost anything at it: disk images, triage images, single artifacts. It automatically figures out which parser to use, and the output is a super timeline: it pulls almost everything with a timestamp from the system and puts it in one big, long timeline of activity. This example is from a normal user laptop that we parsed recently, and it has 14 million events, in this case at least. You can see events from the Windows event logs, from the NTFS file system, web browser history, registry hives; everything that is possible given your data set will show up here. But a CSV file with 14 million events may not be possible to open in Excel, because it's going to be huge, so we need something else to process it.
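For reference, the classic two-step Plaso run that builds such a super timeline looks roughly like this; paths are placeholders.

```python
# Sketch of the standard Plaso flow: log2timeline extracts events from
# the source (disk image, triage zip, or single artifact) into a storage
# file, then psort sorts and exports them, here as l2tcsv. Paths are
# placeholders; assumes Plaso's tools are on PATH.
import subprocess

subprocess.run(
    ["log2timeline.py", "--storage-file", "case.plaso", "triage_image.zip"],
    check=True,
)
subprocess.run(
    ["psort.py", "-o", "l2tcsv", "-w", "supertimeline.csv", "case.plaso"],
    check=True,
)
```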
OpenRelik also has some AI functionality, as all self-respecting tools should these days. Large language model services can be connected to it, and it will then generate summaries of forensic reports; the output from Hayabusa can be summarized, for example, and you can also chat with your LLM to have it explain forensic artifacts that you maybe know nothing about. So we have an example here with capa, a tool from Mandiant that analyzes Windows executables and shows the properties of the binary. We have a malware sample, a Windows executable. We create a new workflow, add the capa worker, and run it; capa produces various output reports, a summary and a detailed report, and that detailed report can be pretty hard to understand if you're not used to malware analysis. When we open the detailed report, the AI summary shows up at the top within a few seconds, explaining what is really going on in this file. You can also chat with the LLM and ask it, say, for the top three issues in this file. As always with AI, distrust and verify, but it gives you hints about malicious activity that you will want to pivot on.
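If you want to try capa outside OpenRelik, the underlying run is simple; a sketch, with the sample name as a placeholder.

```python
# Sketch: run capa against a suspicious executable and capture the JSON
# report, which is the kind of detailed output the worker and AI summary
# are built on. Assumes the capa binary is on PATH.
import subprocess

result = subprocess.run(
    ["capa", "--json", "suspicious.exe"],  # placeholder sample name
    capture_output=True, text=True, check=True,
)
report_json = result.stdout  # detailed capability report as JSON
print(report_json[:500])
```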
Excel was not good enough for opening large files. There is a tool called Timeline Explorer by Eric Zimmerman, which is good, but even that one struggles when you have this many events in a timeline. For that, Timesketch is a great platform. Timesketch is not an official Google product either, but it is mainly maintained by the Google security team. It lets you upload timelines no matter the size, with an OpenSearch backend, and then work collaboratively on the forensic data: you can flag events, add comments, and work together to find all the events that could be relevant to the case. The timelines you upload normally come from Plaso or Hayabusa, the two most common sources. Timesketch also has various analyzers: it can apply its own Sigma rules and surface things like brute-force attempts and geolocation, adding various data points that can be valuable.
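Uploads can also be scripted. With the official timesketch-api-client and timesketch-import-client packages, pushing a Plaso timeline into a new sketch looks roughly like this; server URL, credentials, and file names are placeholders.

```python
# Sketch: push a Plaso super timeline into a new Timesketch sketch using
# the official client libraries. Server, credentials, and file names are
# placeholders.
from timesketch_api_client import client
from timesketch_import_client import importer

api = client.TimesketchApi(
    "https://timesketch.example.internal",  # placeholder
    username="analyst",
    password="...",
)
sketch = api.create_sketch("Case 1234 - DC triage")

with importer.ImportStreamer() as streamer:
    streamer.set_sketch(sketch)
    streamer.set_timeline_name("dc01-plaso")
    streamer.add_file("supertimeline.plaso")  # placeholder timeline file
```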
But let's go back to the IR file drop. We now have this good processing platform with OpenRelik, Hayabusa, Plaso, and Timesketch, and we have the file drop with all the triage images just waiting for someone to look at them. How do we connect those two things? That is the trickiest part. This example is from Microsoft Azure, but there are similar constructs in other cloud platforms, and you can do this on-prem as well, just differently. What happens here: a triage image gets uploaded to the IR file drop, which is a storage account in Azure. An event is sent to a Logic App that triggers a Teams notification about the new file, and another event goes to an Event Hub, where a consumer listens for new uploads. That consumer can be a Linux VM with a Python script subscribed to these events: whenever a new triage image is uploaded, the consumer downloads the file and pushes it to OpenRelik for automated processing, which is exactly what we want here. For that last step, getting the downloaded file into OpenRelik and starting the workflow, there is a helper tool: another great open-source project from Eric Capuano and Whitney Champion, the OpenRelik pipeline. It gives you an API with different endpoints, which makes it really easy to upload a triage image or any forensic artifact to a specific endpoint; the pipeline then uploads the file into OpenRelik and starts the relevant workflow automatically.
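The consumer itself can be a short Python script. A skeleton with the azure-eventhub and azure-storage-blob SDKs could look roughly like this; the connection string, the event schema (assumed here to be Event Grid's blob-created shape), and the pipeline endpoint are placeholders or assumptions.

```python
# Sketch of the consumer VM script: listen on an Event Hub for new
# triage-image uploads, download the blob, and hand it to the OpenRelik
# pipeline API for automated processing. Connection string, event schema,
# and the pipeline URL are placeholders/assumptions; production code
# would add a checkpoint store and error handling.
import json
import requests
from azure.eventhub import EventHubConsumerClient
from azure.storage.blob import BlobClient

EVENTHUB_CONN = "Endpoint=sb://...;EntityPath=irdrop"    # placeholder
PIPELINE_URL = "http://openrelik-pipeline:8000/triage"   # assumed endpoint

def on_event(partition_context, event):
    body = json.loads(event.body_as_str())
    # Assumes Event Grid's blob-created schema, which carries the blob URL.
    blob_url = body["data"]["url"]
    # Assumes the consumer has read access, e.g. via a SAS in the URL.
    data = BlobClient.from_blob_url(blob_url).download_blob().readall()
    # Hand off to the OpenRelik pipeline, which uploads into OpenRelik
    # and starts the matching workflow.
    requests.post(PIPELINE_URL, files={"file": ("triage.zip", data)})

client = EventHubConsumerClient.from_connection_string(
    EVENTHUB_CONN, consumer_group="$Default"
)
with client:
    client.receive(on_event=on_event)
```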
A bonus feature here: the OpenRelik pipeline installer also deploys OpenRelik, Timesketch, and Velociraptor, so you basically get DFIR in a box. That's a really good way to get started and build a proof-of-concept installation of these tools.

Taking a step back now, skipping the nitty-gritty details: what's going on? We have a compromised endpoint where we suspect things are bad. We run the offline collector, which uploads the triage image to cloud storage. That triage image is downloaded by the consumer VM, which uses the OpenRelik pipeline to get it uploaded into OpenRelik and processed. And that's where our brains can start working. So let's look at the demo. It's based on a really nice forensic simulation released during the pandemic, I think: the Case of the Stolen Szechuan Sauce. It includes various disk images, memory dumps, PCAP files, and a set of questions you should try to answer, so it's a good way to practice forensics and test your tools. What I did was take two disk images, one from a domain controller and one from a desktop system, and put them into OpenRelik. From the full disk image of the domain controller we created a triage image of just 45 MB, which was then processed with a standard workflow of Hayabusa and Plaso.
I almost always start with the Hayabusa HTML report, which highlights the worst things going on. Checking the critical, high, and medium alerts here, we can see that services related to Meterpreter are being created, which is not a good thing on a domain controller. We also see one or several RDP sessions from a public IP on the internet, which is not good on a domain controller either. So let's take the full Hayabusa data and open it in Timesketch. We search for "public" to find that public-IP notification and see that there are various RDP sessions on this server. We can also see which IP the connections originate from, and that it's the admin user, so not a good thing. We do a context search to see what else is going on around the same time. We see a lot of brute-force attempts, login failures, and eventually they succeed and have a working session, the RDP session we already saw. Later we can see that a service was added, the Meterpreter-related one, and directly after that another service for a "core updater", which is probably related. Then there's a service crashing, which could be odd. So let's search for that core updater. We also see an RDP connection from the domain controller to the desktop system: apparently lateral movement. We take the core updater information, open the desktop data in Timesketch, what Plaso found on the desktop system, and search for "core updater" to see whether there are traces of the same malware there. And yes, we have 33 events for that keyword. We can see exactly when that file was created on the system, later than on the domain controller, of course, because the attacker moved laterally to this box. We can also see that the core updater was launched: the prefetch data shows it was executed at least once. And in the end, the SRUM data, the System Resource Usage Monitor, shows that this binary was launched and has been communicating. You can actually see the exact amount of traffic received and sent by this binary; not the contents, but at least you know it has been communicating, so this could potentially be a C2, a command-and-control channel. That is how you can pivot from a keyword and search easily: there could be 14 million events here, but it was really quick to find the hits for "core updater".
How can you get started with this? Luckily, it's quite easy, because these tools are easy to begin with. For Velociraptor, you can just download it, run it with the gui command I showed before, and build your own offline collector; of course, follow the guide for exactly which config options to consider, but it's quite straightforward. OpenRelik also has a good installer script that will have you up and running within a few minutes: you need a box with Docker installed, you run the install script, and you're ready. The same goes for Timesketch: you need Docker, you run the deploy script, and you're ready. And even better, as I mentioned before, the OpenRelik pipeline script includes deployment of OpenRelik, Timesketch, and Velociraptor, so that's a really quick way to get all of this up and running.

Some additional resources: everything I mentioned here is on GitHub, so search for these tools there, and all of them have a really active community. I can especially highlight Velociraptor. It's managed by Mike Cohen, and it was acquired by Rapid7 a few years ago; they have a good track record of treating open-source projects well, and it seems to hold here too. Mike and the community around Velociraptor are really responsive: I filed a bug report, I think one week ago, and it took one hour before I had the bug fix and could keep working.
New feature requests are handled in a really good way as well, and that's a nice thing about these open-source projects: sometimes nobody is taking care of one, but here someone really is, and in my experience all of these are actively maintained.

Some closing remarks. The full automated pipeline might feel a bit overwhelming, but even a few of these steps go a long way. If you just have the offline collector prepared to give you the good data, and OpenRelik set up with Timesketch and a few good workers, that's good enough: you can pull the triage image manually, upload it to OpenRelik manually, build your workflow manually, and start it. Even that will save you a lot of time and give you a repeatable process that is easy to extend and easy to collaborate on. So feel free to skip the bridging automation at first; if you find it valuable, spend time on it later, but that's normally where fixing things takes the most time. Focus on the individual tools, because each of them is really good. Thank you for attending. I'm open to questions now, or find me during the conference; I love to talk about this. These are my passion projects, so I'm happy to spread some love around them. [Applause]
There's already a question, right?
>> Thanks for the awesome talk. I had about five questions while listening to your presentation, but you answered them all. So basically: you described this process quite extensively, but what is the biggest gap in your tooling at the moment? What is missing, and what will you investigate in the future to make this all better?
>> I think the full process is good enough as it is. What I will do is create new workers; I already have ideas, but I won't spoil them because I want to build them first. There are various other open-source tools for processing data types that don't have a worker yet, so that is what I will do in the next few months, just to make it more complete, because I still sometimes have to run things manually on the side when I find another kind of artifact. I think that is the primary thing I will add to my process. You can always refine things, like how the events for uploaded files are sent, but as long as it works; it doesn't have to be perfect to be valuable.
>> Thank you.
>> Thank you. Any... oh.
>> Yeah, thank you very much. Can you say how well this approach would work with Linux endpoints?
>> That's a really good question, and the first thing a colleague asked me last week. I think there might be some Linux support in Velociraptor for creating offline collectors, but don't quote me on that. There is another tool called UAC, the Unix-like Artifacts Collector: a script that runs on a Linux endpoint and produces a triage image. The contents will of course be completely different on Linux than on Windows, but you can parse it in the same way. Hayabusa would not work, because Hayabusa is Windows-specific, but the Plaso step would simply take the Linux files, parse them, and create a super timeline, and the rest would be the same. So that is another gap in my process that I want to explore and fill.
And this is amazing for me, because I can see everybody listening intently, and I just presume the answers are being given. Any more questions? Oh, here we go.
>> A question: if the EVTX files are cleared, can you get the log data from other sources, like a central log server or something like that?
>> Yeah, of course. If someone cleared the logs, you're out of luck there, but maybe you have an EDR, and the EDR may have telemetry from before; it will usually keep it for at least a month, so you can go back to that source. A similar example is having the Windows logs but nobody enabled the process tracking events, which are really crucial when you want to follow process chains; that's quite common, because they are not enabled by default. So you get a triage image with no process tracking events. The good thing is that Plaso also parses the prefetch files, which are one additional source of process execution information, and the SRUM database, the System Resource Usage Monitor, which shows you when and which applications were run. So you have these additional artifacts to use when your primary source is not available.
We have time for one more question. So, yes? No? Maybe? Okie dokie. Well, I presume you can ask later at the coffee break, if people want more privacy for the more intense questions.
>> Absolutely.
>> So, thank you very much, Marcus. Thank you. [Applause]