
Taking ChatGPT Hunting by Nicholas Carroll

BSides Tampa 2026 · 38:02 · 44 views · Published 2026-02 · Watch on YouTube ↗
About this talk
Nicholas Carroll demonstrates how generative AI tools like ChatGPT and LLaMA can enhance threat hunting and detection engineering workflows. Drawing on real-world testing, the talk covers AI-assisted SIGMA and YARA rule creation, explores the capabilities and limitations of current models, and examines common pitfalls when deploying these tools in security operations.
Original YouTube description
While many vendors are pushing generative AI tool sets into their solutions, the use cases so far often end at simple generalizations and summary outputs. "Taking ChatGPT Hunting" focuses on how LLaMA- and ChatGPT-style solutions can enhance threat hunting efforts, based on real-world testing with multiple security technology stacks. Participants will learn how generative AI tools can be leveraged to assist in detection engineering workflows with SIGMA and YARA rule creation, as well as the current capabilities and limitations in common threat hunting use cases. This talk will briefly cover multiple parts, including:

- An overview of generative AI tools, with a focus on LLaMA-style solutions for local deployment.
- Enhancing threat hunting operations, including how generative AI can assist in different threat scenarios and detection rule creation.
- Creating YARA rules with generative AI.
- Creating SIGMA rules with generative AI.
- The current boundaries of generative AI capabilities and common pitfalls found when attempting to use these tools for SIGMA rules, YARA rules, and threat hunting.

The session will allow open questions throughout to ensure attendees are able to get the information they need to make informed decisions about the potential usages for their operations. Attendees will take away a general understanding of the use cases for generative AI in detection engineering and threat hunting, which will hopefully empower analysts to safely adopt these technologies into their threat hunting workflows.
Transcript [en]

So, we're going to talk about AI and generative AI pieces for threat detection, threat hunting, and cyber security operations, because this is the big thing now, right? Everybody's putting AI in everything, and it's just kind of the way things are going for us. So I'll start you out with a little bit of CTI. There's a newer piece of malware going around called CoffeeLoader. CoffeeLoader uses GPUs for some of its hiding, it's got a kind of novel domain generation algorithm, and so on; you've got those pieces up there. And then we've got ransomware groups like Play that have recently been abusing some zero days,

using those tool sets and that malware in their attacks. We briefed on Play's adoption of that zero day as a true zero day before Microsoft patched it. So we use this threat intelligence to guide some of our threat hunting. We feed it to whoever we're protecting, we use it to inform our hunts and give them something to jump off from, and then we turn it into some kind of detection logic. Now, I've got three pieces of detection logic on the screen, based on some recent malware and ransomware activity. By show of hands: who thinks A was generated by ChatGPT? How about B? Or C?

I've heard some Bs; a lot of uncertainty about which one's which. People are trying to find the flaws, you know, figuring out which of these detection rules has too many fingers, looking for the pieces that might give it away. The answer is all of the above. Every single one of these detection rules was written by ChatGPT. They all work. They're all okay. But I will tell you that this first one, A, has some pieces in it that may not convert well if you go to translate it into certain languages, like CrowdStrike's. And over here on C, this PowerShell one is going to get you so many false positives in most Windows environments that you will just drown yourself.

So it gave you functional rules and functional threat detection pieces, and this is actually one of its better runs, but they're not necessarily the best thing to use, and it doesn't know what good is or isn't. I gave the same thing to Claude, and Claude wrote me a novel; it just keeps going. We'll revisit this. But this is an example of a bad threat detection rule written by a generative AI tool set. If you don't know what a good threat detection rule looks like, or what good in your environment or good output should be, you may accept this, run with it, and then drown in your alerts.

Because when AI is good these days, it autonomously drives you around San Francisco or it frees you from having to mow your own lawn. And when AI is bad these days, it creates nightmare fuel like a duck with hands. Why? When I was younger, I kind of thought AI would look like this, because that's what the media told me AI would be. If you get this reference, I'm sorry, your back hurts. And if you don't get this reference, I'm sorry I brought that Ohio rizz vibe into the room on you.

But the reality is that AI as a concept has existed for a very long time. It originally came out as an idea in the '50s, and this is from WarGames in '83. We've always been kind of fascinated by what it could do, and we're finally at a point where we're seeing what it can do. And what it can do right now is fuel advertisements everywhere, because everyone is shoving AI into everything tech-related. It is the modern gold rush, and everybody wants to be the one on top. Everybody wants to be the first to AGI, artificial general intelligence. Everybody wants to be the best AI tool, and everybody's got their own that they're throwing out there, marketing it to capture all the investor dollars being plowed in. It's creating a lot of unnecessary noise in our markets, for our users, and for our leaders trying to understand what good is or isn't. We have to help make sense of what these things mean so we don't get trapped in AI garbage.

So, hello. As introduced, I'm Nicholas Carroll. I've done everything from help desk to CISO. Right now I lead a team at a company called Nightwing. We're based up by DC, and we've got offices here in Florida as well.

In fact, one of my colleagues, Rain Baker, is up there in the front; she's based down here in Florida. We do threat intelligence, forensics, and hunting for government agencies and the intelligence community. My team focuses more on Fortune 500s, large nonprofits, and FedCiv work. And we're trying to push forward our own internal initiatives for efficiency, freeing our SOC analysts and threat hunters from the junk work, like having to mow my lawn. So we're working on these things ourselves and trying to bring them to fruition. I'm sure many of you are working to pull these things into your environments as well, and at the very least you're getting flooded with vendors telling you they've got the magic bullet for AI and it's going to be great.

So, let's talk a little bit about artificial intelligence and make sure everybody's on the same page. There's a ton of things we could mean when we say AI: a ton of different tool sets, ML, everything like that, all different things, all different takes. Like that WarGames example earlier, it's been around forever. But the big thing you're hearing now, and the big focus now, is generative AI.

The thing you need to understand, or remember, about generative AI is that it knows about things. It's a collection of data points. It knows all of these bits of data, but it doesn't actually understand them. It can't fully reason yet. If you talk to the right analysts, they'll tell you reasoning is, like, a year away. But that may be a Tesla self-driving thing, where every year they say it's a year away and it just keeps sliding. Or it could come tomorrow, and Sam Altman is about to give us Skynet and we don't realize it yet. We don't know.

But the current tool sets have limitations, because they cannot fully reason and they don't fully understand. They just regurgitate data. As I mentioned, this concept has been around for a long time, and AI has gone through a bunch of phases. The groundwork for many of these things has been happening for decades. It's been going on in the background, but it's been the domain of either hyper-specific tools or major data centers. We're finally reaching a point where some of this stuff is power-friendly enough that we can democratize it and put it in the hands of users. And that can be a great thing, or it can be a terrible thing.

You've probably heard of the Turing test, right? This is Alan Turing's idea that if a computer could fool someone into thinking it was human, we might think of it as human and kind of accept it into our society as one of us. The reality is that the Turing test is just a collection of questions, with some human assessors asking those questions and trying to discern whether the thing they're talking to sounds human enough. The big thing to remember is that the Turing test is not a test of accuracy. It is a test of how humanlike something is. And I know plenty of humans who get things wrong, and plenty of AIs that get stuff wrong. It's not about accuracy.

But ChatGPT and a lot of the current tools have passed the Turing test. They are convincing enough that you will accept them as human when you're talking to them on the other end, as long as they put the right robot voice on it. And even this stuff right here, I didn't write that. I was too lazy to make this slide. That's all written by ChatGPT. That's ChatGPT telling you about the Turing test, the test that it itself has passed. So I don't even know if this answer is right, because I didn't fact-check it. I trusted the machine. And that's the thing that will get you in trouble.

So, from a really high level: a lot of our generative AI stuff right now is based on neural networks and a lot of training, and you're basically throwing a ton of data at it. If you wanted to train an AI on something, generally speaking, you'd collect different pieces of data, put those data points into different groups and databases, and let the pieces come together. So if I were going to train a vision system, I would feed it images, and I would help teach it by, I don't know, launching a massive CAPTCHA campaign that asks you to click on crosswalks and buses, and letting it learn what those things are in those pictures from people telling it what's right and what's there. Eventually you wind up with a database that tells it: hey, this is the kind of data that's in this picture, this is what these pixels represent. In this case, it's an orange SUV in a forest.

One of the big things you have to be mindful of is that when you think of an orange SUV in a forest, maybe you think of this, maybe you think of this, but maybe that's not what the AI comes up with based on its collection of data.

Maybe I wanted a picture of a Jeep, and I got... whatever that is. You have to make sure that when you are prompting, feeding, and working with these tools, you are being hyper-specific and setting appropriate guardrails for them to work inside of, or you're going to get back things you didn't expect. The same thing happens when you're talking to ChatGPT. It's literally just predictive text on steroids, and I am massively oversimplifying for everyone's collective sanity, but that's kind of the reality of it. Based on your prompt, it looks through its databases, through the collective knowledge it's been fed, which is basically the collective human knowledge, and goes, "Hey, I generally see these kinds of words and things grouped together. That must be what you're trying to talk about." So that's what it spits back out to you. And sometimes it lumps the wrong things together, too. You have to be very careful with these tool sets, or you will learn some very important business lessons.
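
To make that "predictive text on steroids" line concrete, here is a toy next-word predictor in Python. This is purely an editor's illustration, not how a real LLM works; it only counts which word tends to follow which, but the spirit of predicting likely continuations is the same.

    # Toy "predictive text": pick the next word by co-occurrence counts.
    from collections import Counter, defaultdict

    corpus = "the threat actor used a loader the threat actor used a dropper".split()

    # For each word, count which words follow it and how often.
    next_words = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        next_words[a][b] += 1

    def predict(word: str) -> str:
        """Return the most frequent continuation seen in the corpus."""
        return next_words[word].most_common(1)[0][0]

    print(predict("threat"))  # -> "actor"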

Air Canada famously put a chatbot on its website that was ChatGPT-powered. They didn't train it. No training, just ChatGPT: got to have it, put it on the website, let the users use it. A user went to it and asked about a refund. And because they hadn't trained it on what Air Canada's refund policy actually is, which would be getting something like a travel voucher, it said, "Oh, I know that in human knowledge, generally speaking, when you ask for a refund, you're supposed to get your money back. You should call our number to get your money back." She did that, and the human agent who answered the phone said no. So she sued them. And the court decided that because the chatbot was on their website, acting as an agent of the business, what it said goes. So the chatbot went; they removed it the day after the court case, because they learned an expensive lesson: you have to train these things. You have to be very specific with what you input to get good stuff back out.

Another thing I'd like to point out: you'll probably hear a lot about hallucinations. "Hallucination" is a really cute term for what the AI is doing when it's working with you. We have been training these systems to think and act human, and to be human is to do things like make things up and lie. Hallucinations are just the AI lying to you, about its sources or about what it's fabricating. But it's just lies. And the kicker is, that's okay, because when I'm doing something like a creative writing prompt, I want it to be creative. I want it to make things up.

The problem is that these tool sets don't know when it's appropriate to lie. It's like dealing with a toddler. I don't want lies in my cyber security data. I don't want lies in my threat hunt. I don't want lies in the things I'm taking to the boardroom. So I have to be very careful every time I use these things, because it's playing two truths and a lie.

We're somewhere in the AI adoption hype cycle right now, still kind of in the early adopter phase. Maybe we're over here, maybe we're over there; it depends which analyst you talk to. There's a pit right here, that trough of disillusionment, and we are rapidly moving toward it. What's going to happen is that a lot of organizations rushing to cram AI into everything without rhyme or reason are going to wind up stuck down there, because the AI is not going to deliver on the promise. The current tool sets cannot deliver on every promise. And sometimes you have to pick the right tool set, and you didn't, because there are 80 billion different AI models now. It's very easy to get stuck in that trough of disillusionment and actually miss out on the good stuff, because there is good potential in the current crop of tools.

And I will point this out as well: everything you buy, everything you work with, is going to get that AI. It's coming for you whether you want it or not. It's in CrowdStrike, it's in Microsoft. I use Recorded Future a bunch, so I've got Recorded Future AI; I'm not a big fan of everything it does all the time, but it's there. You can't buy a Windows laptop right now without getting a Copilot key on it. That's just the way the market is going. Everyone is going to cram it in there, and you're going to get it whether you want it or not.

I would also point out that your adversaries are very much early adopters. They have been using these tools for years now. We've been dealing with malware generated by AI for at least a year, and with phishing and fraud messages generated by AI for at least the past two years. So you've got to catch up. So let's talk a little bit about threat hunting, and how this ties into where we're going. If you're doing any kind of SOC operations in your environment, you're probably trying to follow some sort of standardized threat hunting model or framework. This is the high-level one I like, boiled down so that it works for most people.

So you've got a starting point, some sort of hypothesis. What are we going to hunt for today? What's in my environment that shouldn't be here? Maybe I base this on an OSINT report. For us, we've got a bunch of different customers, and sometimes we'll see something in one customer and go, well, let's see if it's somewhere else, right? You track things around. You have some jumping-off point; that's your cyber threat intelligence life cycle feeding into that point. And then you're going around the loop, doing your testing and your recording.

There are a lot of really good points in this loop where you can actually use generative AI and test it. The record step is a great one: if you hate writing incident reports, try feeding your material into a Llama model you stand up yourself and see if it can help summarize it for you. You don't want to write that thing for the boardroom, or you just don't like writing? See if a local Llama model can do it for you.

You want a faster jump start on that CTI part? I showed you earlier where I took that Zscaler report and fed it in; those were all ChatGPT-generated threat detection rules. Were they good? Eh. But they were a jumping-off point I could use for tuning, testing, and getting things done faster than starting from a blank page myself. There are a lot of really good places to use these tools, if you're using them responsibly.

In our environment, we support clients with a ton of different security stacks; that's just the way it goes in an MSSP kind of environment. So we write and operate with Sigma as our rule logic. Sigma lets us write detection logic once, in one place, for most of our stuff, and then we can translate that rule through pySigma or sigmac into CrowdStrike or Splunk or whatever other languages we've got. There's a huge community of Sigma writers as well that helps inform some of this, and there's a great GitHub repo you can go out and get. Sigma allows us to collaborate and operate internally and get things done faster. I only have to write this stuff once, and we mostly use detection-as-code pipelines to convert what we've written into whatever matches our clients' environments, and just go. That works really well for us. Maybe it works for your environment, maybe it doesn't; a lot of this stuff isn't one-size-fits-all.
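
To make that write-once, convert-everywhere idea concrete, here is a minimal sketch of the conversion step using the pySigma packages (pysigma and pysigma-backend-splunk). The rule itself is an illustration, not one of the rules from the talk:

    # Minimal detection-as-code sketch: one Sigma rule, emitted per stack.
    # Assumes: pip install pysigma pysigma-backend-splunk
    from sigma.collection import SigmaCollection
    from sigma.backends.splunk import SplunkBackend

    rule_yaml = r"""
    title: Scheduled Task Created From A Temp Directory
    status: experimental
    logsource:
        product: windows
        category: process_creation
    detection:
        selection:
            Image|endswith: '\schtasks.exe'
            CommandLine|contains: '\AppData\Local\Temp\'
        condition: selection
    level: medium
    """

    # Write the logic once, then emit the query your client's SIEM speaks.
    rules = SigmaCollection.from_yaml(rule_yaml)
    for query in SplunkBackend().convert(rules):
        print(query)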

So let's look at another example. I mentioned we'd come back to CoffeeLoader. CoffeeLoader is a newer malware with some fun stuff: a domain generation algorithm for its C2, and it uses your GPU for some of its encoding. There's been some OSINT reporting about it. We took that reporting and used it to do threat hunts in our customer environments, writing up a Sigma rule, using it to inform our hunts, and converting it into whatever languages match our client environments. This rule right here was written by Rain Baker, who's up here in the front. She writes Sigma rules every day, which means hers are actually pretty good. I write Sigma rules a couple of times a week, which means mine are kind of functional. I write Python and Sigma the same way: if it works, I'm done. Rain is the kind of person who drives home making sure it's hyper-efficient.

So this is actually a pretty solid detection rule that works out pretty well. You'll notice she's got a couple of things here: the Sigma format, and the really important parts, the log source and the detection selection, what we're actually looking for on these systems. In this case, we're looking for some DLLs and some scheduled task creations. Those are the actual points where you can go, oh, this is a good detection; that's the kind of stuff I can watch for in logs in my environment, and it might tip me off that I'm seeing this infection. I fed the same OSINT into ChatGPT, and it gave me this. It's missed the scheduled tasks entirely. It's got the DLLs, and it's got this post-entry-point return piece, which means it's looking for things your EDR most likely isn't going to catch; that's not going to show up in most places. This is technically a functional rule, but it's most likely not going to get you a really good detection, or any detection at all.

So it can serve as a jumping-off point. Maybe I go back to ChatGPT and say, hey, what about this other stuff? I can re-prompt, go through again, and tune things inside the model itself. Or I can use it as a jumping-off point and hand it to somebody like Rain, who's better than me, and she'll go, oh yeah, you kind of missed these parts, let's make this better. We can collaborate, get that jump start, and get things going faster than starting from a blank page. But this by itself is not a functional rule; it's not something I could just take and run with in an environment.

Same thing here, another example. This one is for SocGholish, for its C2 domain generation algorithm. A really simple rule this time, from our own team: we're just looking for some things in the URI that match what our reverse engineer, Brian, found, and we used this to catch things in a couple of different environments. I asked ChatGPT the same thing again, same starting point as our humans, and it kind of went extra. It pulled in a bunch of extra fields. It's looking for things that aren't going to work right; down here in the selection, this "equals POST" piece isn't even going to do anything. It just made stuff up.

But it looks like a functional rule. So if you don't know what you're looking for, you might accept this and run it. Maybe it'll convert, maybe it won't; maybe it'll give you a successful run, but it won't have actually hunted anything or searched for anything or queried anything. It just existed. Or look at our Claude example from earlier, where it gave me a novel. Of the stuff I've been testing, I've tried a bunch of different models. I personally prefer some of the local Llama stuff, because you can really tune it in, make it tight, shrink the AI's worldview down to just the one thing you need, turn it into an agent, and let it hand things off from there.

ChatGPT does okay. Claude? I don't know what Claude's doing. Claude has given us everything under the sun. It's got some stuff here that will work, some BAT-file and PDF pieces that will kind of be there, but the syntax is wrong through a lot of it, and it's looking for stuff that just... this is a terrible rule, in the sense that if you fed it into your SIEM, look at this: rule one, two, that's not the end of the rule, three, four, the rule just keeps going. Your SIEM will choke. Have you ever tried to feed Splunk a big query?

Come back next month. That's what's going to happen with this rule if you don't break it into chunks that make sense and use these things correctly. It's just not going to be good or efficient. It's not a good detection piece. But it is at least a jumping-off point you can read through and go, all right, I can see where I needed to start, or I can test and tune from there.

The same thing has happened with YARA. YARA, if you're not familiar with it, is for more malware-focused work: you're looking for specific strings and features of malicious files on disk. So instead of Sigma, where we're looking at things in logs and more likely at tactics, techniques, and procedures, with YARA we're looking more for IOC-type items. Here's an example rule. It's okay. Again, it's looking for literally a lot of stuff. The pro of using generative AI for YARA is you get that nice jumping-off point and things move pretty quickly. The cons: you have to feed it quality data, because it's garbage in, garbage out, and getting quality data is hard sometimes. But the biggest issue is the huge risk of false positives, of the rule overstuffing itself, because for some reason all these AIs really like to grab everything they can get their hands on and pack it in, even when you're trying to rein them in. That's where having your own, tightly focused models sometimes works a little better.
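
For readers who haven't scripted YARA before, here is a minimal, hedged sketch of driving it from Python with yara-python; the rule is a made-up illustration, not a real CoffeeLoader or SocGholish signature:

    # Minimal YARA scan sketch. Assumes: pip install yara-python
    import yara

    rule_source = r"""
    rule Illustrative_Loader_Strings
    {
        strings:
            $api = "GetProcAddress" ascii           // suspicious-but-common API string
            $alloc = { 6A 40 68 00 30 00 00 }       // example hex byte pattern
            $c2 = /https?:\/\/[a-z0-9]{16}\.top/ nocase  // DGA-ish domain shape
        condition:
            2 of them
    }
    """

    rules = yara.compile(source=rule_source)
    for match in rules.match(filepath="sample.bin"):  # placeholder file
        print(match.rule)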

AI is a tool, right? It's something you can adopt and put in place, and how good the outcomes are depends on how good your users are. It still takes a master to paint the Sistine Chapel, and you have to make sure you're picking the right tool for the job, too. So, "artificial intelligence" is a fun term, but I would realistically say that with the tools we have today and what they're capable of, they're much better for augmenting intelligence: taking things we don't necessarily want to do, like report writing, speeding those workflows up, and freeing our people to focus on what's more important or what they want to get better at.

If you were in here for Bryson's talk a little while ago, you saw the pyramid of pain. Same concept, but we'll look at it from a different perspective. Everything at the bottom of the pyramid should be automated as much as possible in your environment, because the threat actors are already automating this stuff. A great example: when Bumblebee loader was really popular as a malware first stage, a year or so back, it was scripted to change its C2 IP address every 15 minutes or less. That's hundreds of IP addresses a day. I cannot put those into a spreadsheet and hand it to an analyst, and yet I know places that have tried. I'm going to drown my analysts with that. I have to automate everything at the bottom of the pyramid so my actual people can focus at the top, where we're most likely talking about hands-on-keyboard activity, the tactics, techniques, and procedures.
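
As a hypothetical sketch of what that bottom-layer automation can look like, here is a polling routine that pulls a C2 IP feed and pushes only new indicators to a blocklist. The feed URL and SIEM endpoint are placeholders, not real services from the talk:

    # Hypothetical bottom-of-the-pyramid automation: feed in, blocklist out.
    import requests

    FEED_URL = "https://feeds.example.com/c2-ips.txt"         # placeholder feed
    BLOCKLIST_API = "https://siem.example.com/api/blocklist"  # placeholder endpoint

    def sync_c2_blocklist(seen: set[str]) -> set[str]:
        """Pull current C2 IPs and push only the ones we haven't seen yet."""
        ips = set(requests.get(FEED_URL, timeout=30).text.split())
        for ip in ips - seen:
            requests.post(BLOCKLIST_API,
                          json={"indicator": ip, "type": "ipv4"},
                          timeout=30)
        return seen | ips

    # Schedule this every 15 minutes instead of handing hundreds of
    # IP addresses a day to a human analyst.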

So we can use AI to help with that. We can bring AI into these workflows and let it augment our humans, because while it doesn't know things, only about things, it is really good at quickly processing lots and lots of data and finding things that are potentially anomalous. Those are great use cases that are bad for people and good for AI; it's a good complement to people and what we do. For us, in our environment, we've started using a lot of custom local Llama models where we're corralling the AI's worldview down. Some of this is by necessity: sometimes we work in environments where we have to be able to air-gap things, or pull things apart, and not have everything exposed to the internet or put our data somewhere we don't know where it's going. I want to keep it contained. It's really easy to set these things up and then train them to focus on what you want them to focus on.

So, this is an example of one I've got that's trained on Sigma data. It can help me jump-start writing my Sigma rules for my threat detection pieces, and I can ask it all kinds of different stuff about that, because its worldview is narrow. I have turned it into a thing that understands and knows Sigma. That's its whole knowledge base: Sigma rules, Sigma logic, and the basics of those things. So it generates rules, and it generates them with a bit higher fidelity than I usually get out of something more public like a Gemini or a Claude, and it works out pretty well. I can connect it to the internet if I want, and use that to get even more information into it. And by training this AI on the one narrow piece I want it to understand, I've essentially started to create an agent. You'll hear a lot of people saying "AI agents"; really, what they tend to mean is very narrowly trained AI pieces. So we've got one here that just knows Sigma, and that's its world. I can use it to jump-start a lot of my conversions of threat intelligence into Sigma rules, or to help me convert from Sigma to CrowdStrike's language or Splunk's language.

It has that capability to tighten things up for me, but it's still not perfect. It will still make mistakes. It will still occasionally generate things that don't make a lot of sense, and you have to go back through and ask, what was this? There was one time a little while ago where, for some reason, it got super into Sysmon. I'd say, I'm looking for this thing on Linux, and it would come back with Sysmon; every response was Sysmon, and that's what it was stuck on for some reason. Eventually it broke out of that phase. But that's just the way it goes: these tool sets work and are okay for certain things, and you can turn them into agents that are useful, but they're not going to be perfect all of the time. And the only way I know it's not perfect is because I've done these things, practiced these things, and worked with all of them myself. So I've got some of that human expertise, or I've got a team of people I can go to to back me up and say, okay, this is right, or this is wrong. That makes it a little bit easier. I have to partner these AI tools with people to really get the value out of them.

There it goes. Look at it go. Now it's making Elastic rules. It's so happy. A happy little AI with its narrow worldview, just focused on Sigma. And that's what you want right now: to really corral these tools down. So I've got mine; you can have yours. You can practice and play with any AI tool you want. If you've got stuff already built into the tools you own, because everybody's going to have an AI in their product, you can test that and see if it's good enough for your environment or not.

Or, if you're concerned about corralling your data and not putting it out there for other tools, you can stand up your own. Or if the tools that exist don't match what you need, you can stand up your own stuff pretty quickly. Ollama.com will point you to a lot of different Llama models. Llama 3.5 works really well for this stuff, and there's Llama 4; some of these are a little resource-intensive. My Sigma guy you saw earlier? That's a Llama 3.5 model running on basically a desktop computer with an RTX 4060 in it. That's a lower-parameter model, and it gets the job done, because with a narrow worldview I don't need something super wide. I can save on compute and run a smaller model, which is great because I don't have 5090 budget. But you can go out and see what already exists, because people are putting out models every day. If you go out to the Ollama library and you want a model that knows Chinese, a model that knows code, a model that knows SQL, they've got those for you. You can use that as a jumping-off point; you don't have to start from scratch. Now, you will need to go in and look at what system parameters are baked in and what it's doing, to make sure you're not picking up things you shouldn't have picked up.
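
A quick, hedged sketch of that workflow against Ollama's REST API: pull a library model, then inspect its Modelfile to see what system prompt and parameters came baked in (the model name is a placeholder):

    # Pull a model from the Ollama library, then inspect what you got.
    import requests

    BASE = "http://localhost:11434"

    requests.post(f"{BASE}/api/pull",
                  json={"model": "llama3", "stream": False}, timeout=600)

    info = requests.post(f"{BASE}/api/show",
                         json={"model": "llama3"}, timeout=30).json()
    # The Modelfile shows any baked-in SYSTEM prompt and parameters.
    print(info.get("modelfile", "<no modelfile returned>"))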

But the same goes for anything you get off of GitHub, right? You should be inspecting it a little to make sure you're not bringing in unwanted gremlins. Even better: Open WebUI. Open WebUI gives you a pretty interface for these tool sets that you can just go out and get, and the nice thing is you can be up and running in about ten minutes or less. It all runs in a Docker container, and it will stand up the Ollama server for you as well. And if you want to tie it to OpenAI and ChatGPT, to Gemini, to DeepSeek, to whatever model you want, you can easily set those models up, tie them in, and run. Again, if you go to the library, there are a ton of pre-built models out there, already trained, that you can use as a jumping-off point. If you just need a generalist cyber security model, those exist. If you want a generalist storywriter model, those exist. You don't have to start from scratch. You can take advantage of what's there to get going faster, to really test and see: does this make sense for us?

Because the reality is that we shouldn't always jump to AI for these things, even though that seems to be the way everybody wants you to go right now, AI first. That's not always what makes the most sense. I'm sure somebody has already said "crawl, walk, run" today. I'm a big fan of crawl, walk, run, and I think it works really well for adopting these kinds of tool sets. Start from where you're at. Look at what you've got. If you've got tool sets that already have AI pieces in them, see if they make sense for you, see if they improve your workflow, test a little bit. Don't go running off to build your own if you don't have to.

But if that doesn't fit, you can build your own. I will tell you, though: start slow and really pay attention to your use cases. If you do not have semi-mature processes and workflows already documented, or an understanding of your environment, it's not helpful to bring in another thing you don't understand. You have to have a good handle on what you're trying to do, what you're looking to accomplish, and what your workflows look like, so you can see where there may be efficiency gains, before just going, well, throw AI at it and see what happens. That doesn't work out very well.

And take full advantage of the stuff you've already got. There's a good chance your SIEM will accept things like STIX/TAXII, or maybe you've already got SOAR platforms included with what you've bought. You have the capability to start building these automations with what you have before you even go to AI, because a lot of the junk work that burns out our SOC analysts is having to go back and forth between tool sets, copying data, trying to find stuff, and looking for reports. You could automate a lot of those workflows, pipelines, and processes just with API connections and things, and you didn't even have to get ChatGPT involved to start freeing up your humans.
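
As a rough sketch of that route (assuming a TAXII 2.1 server and the taxii2-client package; the collection URL is a placeholder), pulling indicators programmatically can be this small:

    # Minimal STIX/TAXII pull sketch. Assumes: pip install taxii2-client
    from taxii2client.v21 import Collection

    collection = Collection(
        "https://cti.example.com/api/v21/collections/indicators/"  # placeholder
    )

    for obj in collection.get_objects().get("objects", []):
        if obj.get("type") == "indicator":
            print(obj["pattern"])  # e.g. [ipv4-addr:value = '203.0.113.7']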

Once you've done those basics, we can start walking. We can start playing with our own AI models if we need them, or with what's available out there, and see if it works for us, because not every tool works for everybody and not every tool is the right fit for every job. Eventually, what you wind up moving toward is having your own little agents for the stuff that makes sense for you: your own little models, your own little carved-out pieces acting as your agent. But you can also start looking at how you're interconnecting your systems. Maybe your SIEM is really well tuned now, and you're taking full advantage of its STIX/TAXII capability and the SOAR that came with it, but you want to tie it into Jira, and it doesn't natively do that. Well, you can do that through API connections, or scripting if you're good with that kind of stuff; there's a sketch of that route below. And if you're not, there are a ton of tools now that will let you do no-code and low-code automation across a lot of these tool sets.

Tracecat is open source and free. You can play with that today, without putting out a dime, to start tying these things together and seeing what works. The other ones will usually start costing you money, and some of them are really fancy. Torq looks amazing; they brought Grave Digger the monster truck to RSA, which is kind of awesome. But Tracecat's free, so that's my budget. The reality is that you can get these tool sets going, start tying things together, and see where you're gaining efficiencies before you get to the point where you're running: I've got AI agents that I've built, they work for the workflows I have and need, and I've got my tools tied together. I can start bringing my agents online, incorporating them into my low-code and no-code automation pipelines, or having them speak to my SIEM's API so they can read that data directly. Then we're off to the races for really getting the fun stuff going.

So, anyway, there is a lot of really good stuff happening right now, but a lot of it is at a point where, if you don't know what you're doing or what you're looking for, you can accidentally shoot yourself in the foot.

So you really need to be doing mindful adoption and carefully pairing AI with humans, aiming for human augmentation in these workflows, not human replacement. For anything in cyber security, remember: the AI is going to lie, and if you can't catch the lie, you might be in trouble. Make sure you're really focused on what these tool sets are actually capable of these days, too, because there are multiple vendors trying to come in and say, I've got the SOC agent AI, it's going to do all your SOC work for you, it's amazing, you don't need humans anymore. It's probably not going to work as well as you want it to, and you might get caught in that trough of disillusionment when you buy that expensive tool and it fails to deliver.

But you can automate away a lot of the junk work that is low-level SOC work, even without AI, just by connecting things together, tying up the API connections, and making everything work, and actually get the efficiencies that let your people do more. Or you can partner some of these gen AI tool sets with your analysts to free them from things like report writing or coming up with things from scratch, and get more out of them. But you have to do it by partnering the AI with a human expert. It's all about being a partner to push things forward.

So, that's what I've got. If anybody has questions, shout them out. If not, you've still got a few minutes before the keynote.