
Good afternoon, guys. How are you? Good afternoon, good afternoon. Where's the good afternoon? Wow, everyone's half asleep here. Come on, it's BSides, guys! It's above everything, even Defcon. Let's go. Our next speaker is... Alvaro Farias Jr. with the "Data to Decisions" lecture, on the role of CTI in mitigating vulnerability risks. Is it the one below? Sorry, guys, I'm still waking up. So now I call to the stage Ed Motta and Edu Vivi, with the "Hunting" lecture. Welcome to the BSides stage, Ed and Edu. Two figures! - That was my presentation, I want my commission! - With this introduction from Isaac, we're all set!

Thanks, Isaac! Good afternoon, everyone! As Isaac mentioned, today we'll talk a bit about hunting: some concepts from CTI, how to set up a framework, how to start doing threat hunting in your company, the query language part, the queries, and so on. First, a disclaimer: Ed and I are presenting our own opinions here, leaving the badge aside. A little about us: Ed and I are both from the X-Force team at IBM, in incident response. How long have you been at IBM? About three years. I've also been there about four years. We have broad experience in forensics, incident response, and hunting; I have a similar background: threat hunting, incident response, and a bit of cloud.

The agenda is this: we'll go through what threat hunting is, why we do it, the types of hunting, a few frameworks and tools, and then we'll open up for questions. So, first of all, what is threat hunting? Let me switch hands here, it's easier. To begin with, nothing we'll present is a silver bullet; that doesn't exist, as we know very well. Threat hunting is defined organization by organization, and we'll go through some points that help us narrow things down to do an effective hunt. It all starts with stating the problem well, and we have a few key points to help with that. We need to create a hypothesis to base ourselves on, and to understand what our goal is. It's a proactive methodology, so we're always searching proactively, and we always start from the principle that there is some kind of compromise in the environment. Nowadays we don't ask "what if" there was a compromise; we start from "when". That is what makes threat hunting so important.
Speaking a bit more from our experience, and how threat hunting enters our work: it's fundamental, and it often sits between the blue team and the red team, because you end up needing notions from both to understand how things happen; and it's work that helps the blue team. We had a case where we had to do a hunt at a client with more than 150,000 endpoints, so it's a complicated mission: a very segregated environment, several different EDR tenants, an environment where it's very difficult to extract everything and then run the search efficiently in a way that really brings value to the customer. In that situation, applying a bit of the methodology we'll show you here, we found, for example, a Cobalt Strike beacon that, speaking of timing, even dated from before the EDR had been deployed. So we caught something that was dormant; the team at the time didn't even know about it, and we managed to find it based on the hypothesis and on the TTPs, which we'll comment on later, using MITRE. So it's very important; it's something that can bring a lot of value and is very useful. As for why we do this hunting: the idea is to reduce the time to detect these threats, which, by assumption, are already in the environment.
By searching proactively, we can reduce that time, and also validate whether our security controls are adequate. Here I brought some graphs from a very recent SANS survey on threat hunting, which polls threat hunters, the people who work with this, and brings some very interesting points about what kinds of attack you can detect via threat hunting. Business Email Compromise (BEC) is one of the points people have been reporting as a top finding since 2024. Ransomware too; Álvaro was talking about it earlier. Even though we had some recent reports, from IBM itself, saying ransomware numbers are dropping a little, we still see it, and with threat hunting we can get ahead of it and find it before the point of effective impact on the system. Then there's the question of the areas where people usually consider threat hunting most important, which is also interesting: we see cloud as the hardest part of the hunt. In blue is what people consider most important, and the hardest part is cloud. So today, threat hunting in cloud is a real challenge. And there's the part about the techniques found by type of actor, especially Living off the Land, when the actor uses something native to the system, which makes things a bit harder because we can't rely on IOCs for that; there are other points we'll show here.

Now the types of hunting. I've touched on this a little already. There's the internal type, the simplest, which we usually do most easily: looking inside the company for signs of compromise or insider activity. There's the external type, where CTI comes in a bit more: looking outside our environment for intelligence that helps us hunt within our environment using the information we collect. And there's laying traps: honeypots, honey accounts, things left as easy bait for the attacker.
Based on behavior, on the contact attackers have with those traps, we also generate more intelligence to deal with them. So, that's it, folks; I've said a lot so far. We know we have a very large and growing number of threats operating worldwide. The conflicts we see, Russia and Ukraine, end up amplifying this; we have APTs, groups financed by governments, something that has grown a lot lately, but has always existed. Each group has its own interests: purely financial, economic gain, or impact for a cause, activism, and so on. We have to understand this well to know whose target we might be. Then there are the TTPs: each group has different tactics, techniques, and procedures, which makes our lives a bit harder, but we'll show some ways through it. And we have finite resources, so we have to know how to prioritize; it's not a simple task. Now, on attack indicators, we have the Pyramid of Pain; I imagine you've seen it, I don't know how familiar you are with it, but the idea is to go from the bottom up. At the bottom is what's most trivial, the hash values we see everywhere in lists; whenever there's a compromise we go there and block them, but they change all the time, so we can't take them as the only point. IPs, same thing, they also change. With domains and network artifacts we're already improving a little, until we reach the part where we can make a good correlation with the groups, which is tools and TTPs. However, that is also much harder to monitor and to reach; we'll show some approaches and tools for it. Along with this, it's also useful to keep in mind how the Cyber Kill Chain works.
It helps us later when we build the hypothesis. The Cyber Kill Chain was developed by Lockheed Martin, so it comes from the defense area, the military area, adapted for us in cyber. We have the reconnaissance stage, when the attacker studies the environment and the ways he has to get in. After that comes weaponization, when he arms himself and works out, based on that reconnaissance, the best ways to exploit the environment's vulnerabilities. Then delivery, when he puts that plan into action and sends something to hit a vulnerable point; it can be an email, a file, something like that. Exploitation, when he effectively exploits based on what was delivered. Installation, when, with the exploitation working, he installs himself and creates persistence in the environment. Command and control, when he establishes a channel back to that access for remote control and everything else, becoming well established in the system. And finally "actions on objectives", when he generates the actual impact. Across all these Kill Chain stages, we usually talk about an average of 196 days that an attacker can stay in your environment. It's an average; it can be more or less, but it's a lot. The idea with all this is to reduce that period, to detect as early as possible and avoid the impact phase. Then there's MITRE ATT&CK; there was supposed to be an ATT&CK matrix on this slide. It helps us with a categorization of the techniques and tactics used by attackers, basically dividing them across those Kill Chain phases in a more granular way, so we get a categorization we can reuse in other things. We won't need to reinvent the wheel; it's all documented. So MITRE is a tool that helps a lot in this process, whether you're automating something or not; it's a great starting point. As an example we'll take an APT group; as I said, it could be any other, but APT29 is a well-known group. It was associated with the 2020 SolarWinds compromise campaign, which was huge, and with all the current conflicts it remains very relevant. You have to assess whether your own company could be one of its targets, but for the example in this talk, the idea is to build this scenario so you can better understand the methodology, the way we usually work day to day.
So we'll use APT29. Here we have some industry reports and the names of other campaigns it ran besides SolarWinds. Now, some of the tools we wanted to list; a few we've seen repeated in other talks here, but this first one is very interesting, and we use it later in the process Ed will show you. It's not the main one, but it's important for this framework: ETDA, a website that categorizes threat groups and brings a lot of information about them. It's a Thai site, well maintained and constantly updated, and it has a whole search engine for doing lookups and correlations: the countries a threat actor may be targeting, the tools it uses, some of its TTPs. So we can use it very well to improve this procedure. Malpedia, which the previous talk also touched on, likewise covers actors, and these feeds help us a lot to understand which groups, not only APTs, might have our industry or our company as a target. And then, of course, MITRE itself has information in its groups section, plus the specific vendors: IBM X-Force itself, which we recommend, Palo Alto, and all the others.

To start putting these things together, we'll build on the Open Threat Hunting Framework, an open framework, following these steps. First we define a clear goal, what we're going to do, because for a well-made threat hunt these things need to be very well defined; otherwise the scope gets too wide. We need the goal, then we create the hypothesis, then we understand how much data we have available to run the searches so the hunt can happen. We generate test data for those searches.
We define what our goal is with that test data, test it later in a more realistic environment, and then, as soon as possible, the idea is to automate and document. This matters because I'm basically describing how we do threat hunting; before, we did it somewhat manually, but it needs to be scalable, and today, with AI and everything else we have, there are ways to automate and ease this work. So, as an example hypothesis, taking APT29: this specific group uses legitimate Windows tools, like PowerShell and WMI, for lateral movement. Observing this, scheduled tasks, system service modifications, and execution of malicious payloads were also identified. These are among the TTPs we have for this actor. So the example goal and hypothesis are: detect those malicious modifications that we understand this specific group uses for lateral movement via PowerShell and WMI. The hypothesis becomes: given that the attackers use these tools, we determine which events this type of action generates, make sure we can monitor them, that we can see them in our SIEM tool, and that we can correlate them, so that when all these variables line up, we may be looking at an action by this group, this malicious actor we're monitoring. An example workflow is basically this: create the hypothesis, as we just discussed; define the missions for the hunt, such as scheduled tasks and services tied to PowerShell or WMI, binaries outside standard file paths within those event IDs, and remote execution of scripts indicating lateral movement; and then the hunt analysis part, analyzing the results around event IDs like 4624 (logon) and 5145 (network share access). That gives us a workflow for this situation.
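To make that analysis step concrete, here is a minimal sketch of our own (not part of any framework named in the talk) that flags exported event records matching the hypothesis. The event IDs follow the Windows meanings mentioned above; the record field names are assumptions for illustration, not a real log schema.

```python
# Flag events relevant to the PowerShell/WMI lateral-movement hypothesis:
# logon (4624) and share-access (5145) events tied to a remote host, and
# process-creation (4688) events for the living-off-the-land binaries.

SUSPICIOUS_IDS = {4624, 5145, 4688}
LOLBINS = {"powershell.exe", "wmic.exe", "wmiprvse.exe"}

def matches_hypothesis(event: dict) -> bool:
    """Return True when an event is relevant to the lateral-movement hunt."""
    if event.get("event_id") not in SUSPICIOUS_IDS:
        return False
    # Logon/share events are kept only when tied to a remote source host.
    if event["event_id"] in (4624, 5145):
        return bool(event.get("remote_host"))
    return (event.get("process") or "").lower() in LOLBINS

events = [
    {"event_id": 4688, "process": "powershell.exe", "remote_host": None},
    {"event_id": 4624, "process": "lsass.exe", "remote_host": "10.0.0.5"},
    {"event_id": 4688, "process": "notepad.exe", "remote_host": None},
]
hits = [e for e in events if matches_hypothesis(e)]
print(len(hits))  # 2
```

In practice this filtering happens in the SIEM query itself; the sketch only shows the correlation logic the hypothesis asks for.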
Beyond that, we have something else that helps us work with the TTPs we've mapped for attackers: Sigma, an open-source, collaborative project with more than 3,000 rules written in YAML, which we can take, convert to our SIEM's language, and run there for these searches. So the field is set: we understand which groups might target us, we figure out their TTPs and tools, and we pass that to Sigma to help us, because a lot has already been created there; we don't need to recreate so much, just convert it for the SIEM. And there are three types of rules today (with a few more in development): the generic ones, rules that identify based on behavior; the threat-hunting rules, which we use a bit more, somewhat broader but based on the parameters and tactics we listed for the groups; and the emerging-threats rules, which are also nice, updated from time to time based on the news and on what the people who maintain Sigma are noticing, covering APT campaigns, zero-days, and more specific malware families.
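For reference, a Sigma rule is a small YAML document. This is a hypothetical example in the public Sigma schema, roughly matching the service-modification hypothesis discussed earlier; the field values are illustrative, not a rule from the actual repository.

```yaml
title: Suspicious Service Installed via PowerShell (illustrative)
status: experimental
logsource:
  product: windows
  service: system
detection:
  selection:
    EventID: 7045            # Service Control Manager: new service installed
    ImagePath|contains: 'powershell'
  condition: selection
tags:
  - attack.persistence
  - attack.t1543.003
level: high
```

The `tags` field is what lets tooling match rules to ATT&CK technique IDs, which is what the automation described next relies on.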
It's a very interesting tool. Here is the matrix of coverage they have today, all based on MITRE ATT&CK: all the points covered so far, with the legend showing that coverage per tactic is still somewhat low, but it's very broad, even broken down by industry and so on; you'll have to filter for your own case. Very interesting, I recommend taking a look. And here, folks, I'll pass the talk to Ed, so he can tell you a bit more about a tool. - Edu talks a lot, man! Let me see here.
So, after all that story Edu told, a sad story of painstaking manual work: "go to the website, get some information, get the MITRE rules, get this and that." I said: this is hard work, I don't want to do this for the rest of my life. So I thought of some tools, and from their combination the APT Hunt chimera was born. The first one (I had something in hand and wanted to play with it) was Find Sigma Rules. What does it do? You simply pass a parameter, a technique ID, plus a keyword like "password", or a keyword for lateral movement. It searches for that in the Sigma directory, in the Sigma rules, brings back the rules matching your search, and copies them for you. Then it asks if you want to convert those rules to a SIEM query language. And that's where I stopped at first. I thought: interesting, at least I've knocked out one stage.
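That first tool's rule search can be sketched as below: walk a local clone of the Sigma rules repository and return the rule files whose text mentions a given keyword or ATT&CK technique tag. This is our own illustration of the idea with plain string matching; the actual tool also handles the copy and conversion steps.

```python
# Search a directory of Sigma .yml rules for a keyword or technique tag.
from pathlib import Path

def find_sigma_rules(rules_dir: str, keyword: str) -> list[Path]:
    """Return paths of .yml rules containing the keyword (case-insensitive)."""
    needle = keyword.lower()
    return [
        path
        for path in Path(rules_dir).rglob("*.yml")
        if needle in path.read_text(encoding="utf-8", errors="ignore").lower()
    ]

# Example (assuming a local checkout):
# find_sigma_rules("sigma/rules", "attack.t1021.006")
```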
But I didn't stop there; I kept at it. Next came Find APT Groups. What does it do? It goes to that ETDA site, downloads a JSON from there, generates a list of options, and shows a little menu for you to choose from: what do you want to look for? "I want a threat actor from country XYZ." "I want a threat actor that attacks country XYZ." "I want a threat actor that uses tool X, Y, or Z." The tool does that: it runs the lookup, generates a summary, a report on that threat actor, and saves it as HTML. That was the chimera's lion head. Then I moved to Find APT Attack, which uses the MITRE ATT&CK STIX data: it looks up in MITRE's database the group I found with Find APT Groups and generates a report of that threat actor's tactics, techniques, and procedures, the top 10.
So, summarizing the flow: first it accesses the site and downloads the JSON, creates the options list, creates an HTML report of what it found, and writes a TXT with the group ID that it will then look up in the ATT&CK STIX data. From that ID it brings back the group's top techniques, the top 10, saves the HTML with this information, and takes those techniques to Sigma: "find me all the rules you have that match my search," generated from this automation. It finds the YAML rules, saves them, copies them, and builds a list of backends. What are these backends? Our EDRs and SIEMs: CrowdStrike, Splunk, Elasticsearch, and so on. It asks: "Man, which one do you choose?" "I choose this one." "Okay, I'll convert it for you," and it saves everything in a folder named after the EDR, with all the rules it found in .txt files. From there you copy those rules from the .txt and run the search. So all that mess we had at the beginning, going to the site and searching by hand, the tool does automatically. Does it solve your life? It helps, but it doesn't solve everything. And here is the flow of how it works: you pick one of the three tools, it generates all of that, and at the end a text with the information comes out.
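The Find APT Attack lookup can be sketched like this: given an ATT&CK-style STIX bundle, collect the attack-pattern objects linked to a group via "uses" relationships. This mirrors how ATT&CK's STIX data is organized (intrusion-set, relationship, attack-pattern), but the bundle here is a tiny hand-made example, not the real dataset.

```python
# Collect the attack-patterns a named intrusion-set "uses" in a STIX bundle.

def techniques_for_group(bundle: dict, group_name: str) -> list[str]:
    objs = {o["id"]: o for o in bundle["objects"]}
    group_ids = {o["id"] for o in bundle["objects"]
                 if o["type"] == "intrusion-set" and o["name"] == group_name}
    used = [
        objs[r["target_ref"]]["name"]
        for r in bundle["objects"]
        if r["type"] == "relationship"
        and r.get("relationship_type") == "uses"
        and r["source_ref"] in group_ids
        and objs.get(r["target_ref"], {}).get("type") == "attack-pattern"
    ]
    return sorted(used)

bundle = {"objects": [
    {"id": "intrusion-set--1", "type": "intrusion-set", "name": "APT29"},
    {"id": "attack-pattern--1", "type": "attack-pattern", "name": "PowerShell"},
    {"id": "relationship--1", "type": "relationship", "relationship_type": "uses",
     "source_ref": "intrusion-set--1", "target_ref": "attack-pattern--1"},
]}
print(techniques_for_group(bundle, "APT29"))  # ['PowerShell']
```

The real tool additionally counts and ranks techniques to get the top 10; that is a straightforward extension of the same traversal.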
Now let's move to the demo, which is the most important part. Let me zoom in here. Okay, I'll narrate what's going on. First is the list of options with the tools I have. I choose option 1, Find APT Groups, then threat actor. Here is the list of options it downloaded from the JSON, and I type which APT I want to look for; I'll type our example, APT29. Okay. Then it asks if I want a summary or a complete report; I say a complete report. It prints to the screen the information it fetched from that Thai site, and it can also save an HTML: "Do you want to save this as HTML?" I click yes, and it creates a directory called finds, with the group ID and the group name, and inside it the information it could get from the site. At this point I could stop and search MITRE ATT&CK manually, but I want to continue an automated hunt; I don't want to stop here. It asks if I want to continue the hunt, I say yes, and it moves to the second tool, Find APT Attack. From that group ID, through the call to the ATT&CK STIX data, it generates the group's tactics and techniques and selects the top 10; there they are at the bottom. Those techniques will feed my Sigma step, which is my "continue hunt" here. Here is the report it created from MITRE ATT&CK, with the tools, the techniques used, the associated tactics. With "continue hunt" it goes back to the directory, to the TXT it created with the tactics and techniques, and here is Sigma already copying into the directory. The rules are already copied.
Now it asks if I want to convert to a query language. I say yes; here are the backends, my EDRs, SIEMs, and so on. I'll choose Splunk. It converts the rules it found to Splunk's language. Let me open an example here, a "proc creation" something, I don't remember the exact name now. I opened it in the wrong place, I opened it with Sublime.
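As a rough illustration of what this conversion step produces: the sketch below flattens a Sigma-style selection into a Splunk-like search string. Real conversions are done by sigma-cli / pySigma backends, which also apply field mappings and processing pipelines; this toy shows only the idea, not the tool.

```python
# Toy Sigma-selection -> Splunk-style search string conversion.

def to_splunk(selection: dict) -> str:
    parts = []
    for field, value in selection.items():
        if field.endswith("|contains"):
            # "contains" modifiers become wildcard matches.
            parts.append(f'{field.removesuffix("|contains")}="*{value}*"')
        else:
            parts.append(f'{field}="{value}"')
    return " ".join(parts)

rule_selection = {"EventID": 7045, "ImagePath|contains": "powershell"}
print(to_splunk(rule_selection))  # EventID="7045" ImagePath="*powershell*"
```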
And here is our rule, converted to Splunk's language. I copy this and paste it into Splunk to look for something. Will it find something? Maybe yes, maybe no. Sometimes you need to make adjustments, because it's not 100% certain the conversion will be perfect. And here's the thing: you don't become a threat hunter overnight. You need knowledge, you need to know your way around, you may need to change a path here and there; it's not just "copy and paste, and I became a threat hunter." And I think that ends it. Let me zoom in here. So, future work, improvements: I'm going to do an API integration with the various EDRs. How? I don't know yet, but I'll try. And also implement an LLM with MCP, which I learned yesterday in Francisco's training, to have more data sources, because right now we rely only on ETDA and MITRE ATT&CK. So, maybe using an LLM to look for more data sources, to enrich the data, to have more things to feed in and get more reliable results. And that's it. Our Pix is up, if you want to make a contribution. That's it, guys. Questions? - Is the tool public? - Not yet. Only after the Pix! No, it will be; I'll publish the GitHub link on LinkedIn. It's closed for now, I'm still working on some things. - Can you add other backends? - Yes, in the list of backends you can install more, depending on your environment. Is Sentinel not there? I'll install Sentinel, or, I don't know, QRadar; I can install it. There's sigma-cli, where you install the backends, and if yours isn't there it shows you the list. Okay? Questions? That's it. That's it, guys. Thanks.