
Kevin Sistrunk - Automating Security Operations Around the Clock

BSides Knoxville · 34:43 · 140 views · Published 2025-06 · Watch on YouTube ↗
About this talk
How can security teams stay ahead of 24/7 cyber threats? This talk explores AI-driven SOAR tools that combine intelligent decision-making with automation. Learn how AI enables dynamic workflows, automates after-hours responses, and empowers SOCs to reduce alert fatigue and improve response times.
Transcript [en]

All right. So today we're going to go over AI used in SOAR platforms — specifically, ChatGPT used with Cortex XSOAR. We're going to go through a couple of pain points in the industry, especially in security departments and security operations, and then we're going to go into a demonstration and actually walk through a workflow inside XSOAR that's being powered by ChatGPT. We'll go through the nuances, exactly what it's doing, and some of the situations it handles. All right. So that's a pretty handsome fellow up there, but I'm not sure who that is — it was actually created by ChatGPT.

So again, my name is Kevin Sistrunk. I have about 15 years of experience in cybersecurity. Ten of those are strictly in security automation, and probably five or six of those are specifically in Demisto, now XSOAR. I've worked with a lot of different security tools across my career — CrowdStrike, Carbon Black, Splunk, XSIAM, across the gamut. I've created complete pipelines, start to finish, using security automation: SIEM to SOAR, EDR to SOAR, and so on.

All right. So this is a slide I just kind of threw in there. I was talking to one of the Palo Alto customer success engineers assigned to my company, and they said, "You can't not put this in there." The story behind it is that I'm kind of a hobbyist at raising chickens, and I adopted defense in depth inside my chicken coop. If you look — I'm not sure if anyone's familiar with the Wyze cameras, but they have AI kind of built in, so they can tell the difference between a person, a package, or a pet, which is anything with four legs. To make a long story short, I adopted defense in depth in there. The camera is kind of like an EDR. I have wire mesh around my chicken coop, which is like a perimeter defense. And the actual chicken wire would be like a DMZ. I'm not sure what the automatic waterer connects to, but it's in there, and I'm actually using it.

All right. So for the agenda: like I said, we're going to go through some real-world pain points when you're talking about using AI, especially in security operations. Then we're going to go into how XSOAR and ChatGPT actually communicate with each other. There's going to be a demo, and then we're going to do a deep dive

inside the AI reasoning — how ChatGPT takes different data from XSOAR and decides how to progress through the playbook. The way the demo playbook is built, ChatGPT is actually driving the response inside the playbook. And then at the very end we're going to have a final report, which also has the final response built into it. It takes the whole combination of that entire workflow inside XSOAR and puts it into a nicely formatted email — the kind you'd send to a SOC or some team that turns over shifts. All right, so the first pain point here is accuracy concerns.

Even just using ChatGPT in general — not just building this demo — if you don't explicitly tell it exactly what you want the outcome to be, it's going to try to guess, and most of the time it's going to be completely wrong, which turns into hallucinations. In security workflows, that's something you do not want. You want repeatable, predictable responses, especially if you're using this for any kind of automation. That leads into the next slide, which is what I call a guardrail.
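As a sketch of what such a guardrail rule set could look like — the wording here is a hypothetical reconstruction, not the exact rules on the talk's slide:

```python
# Hypothetical guardrail block pinned to the bottom of every prompt the
# SOAR task sends. The rules constrain output format and forbid
# unapproved actions, so responses stay predictable and parseable.
GUARDRAIL_RULES = """
Rules:
- Only choose from the preconfigured response actions you are given.
- Never invent hostnames, IPs, or usernames not present in the incident data.
- If required data is missing, say so instead of guessing.
- Respond in valid JSON with keys: "summary", "actions", "reasoning".
"""

def build_prompt(incident_summary: str) -> str:
    """Combine the incident context with the fixed rule set."""
    return (
        "You are a tier-one security analyst.\n\n"
        f"{incident_summary}\n{GUARDRAIL_RULES}"
    )

prompt = build_prompt("Mimikatz-like activity detected on a workstation.")
```

Because the rules never change between incidents, tuning them is like tuning any other security tool: trial and error against your own environment, as the talk goes on to describe.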

I'm not sure exactly what it's officially called, but it's pretty much a rule set at the very bottom of the prompt. If you look, it says "Rules," and it gives examples of what it should do and what it shouldn't do. You can look at it like an if/else statement in a script. It's a lot of trial and error, because if you're trying to use ChatGPT or any kind of AI in this manner, it's just another security tool. If you stick a security tool into your environment, what's the first thing you're going to be doing for a while? Tuning and adjusting it to your company's environment. Same principle here. When you create a prompt to do these automations, you want a rule set at the bottom that says, "Hey, if this happens, don't take this action." It's very trial and error, but I got to a point where it was actually pretty reliable inside the demo we'll see in a little bit.

And then this brings us to the actual privacy issue. I think this might be neck and neck with the accuracy problem, because I've heard it at other companies, and when I go on Reddit and look at different AI topics, the number one thing that keeps coming up is the privacy problem. What that is: if we're going to be using our data — the company's data — with ChatGPT, what is that company doing with our data? Is it training its models on it? Is it holding on to it? What if that AI company was breached? All our data would be exposed. So there are a couple of ways you can get around this.

The first one: I think OpenAI offers an enterprise license for ChatGPT, and they offer more assurances through that, along with some other things. The second one — it's actually the third item on the slide, but I'll mention it second — is Azure: if you have an Azure environment, there's an Azure OpenAI instance you can set up if you want to go that route. And the third one is this slide here, which is what I actually did in the demo, and I explain in the demo how I did it. Once you get the incident — say it's an EDR alert, or whatever incident comes into XSOAR — it's going to redact that data. ChatGPT doesn't care what your internal artifacts are — your IP addresses, your usernames. It doesn't care. So I redacted them using a Python script. Then, in step two, it sends that redacted prompt to ChatGPT. All the contextual information of the incident stays intact: you still have the incident name, the reason why it triggered, all the malicious artifacts. That's all it really needs to make an intelligent decision.

What happens after that: ChatGPT goes through its workflow, does its thing, and once you're no longer going to reach out to that ChatGPT task, you reinject the real data back into the context key, or wherever it belongs. In the demo I'm going to show, it injects it back in and sends that whole story to a SOC — it's pretty much an email template. That's to demonstrate how it tells you what it's thinking, why it made choices, and why it didn't make choices. And in cases where a playbook fails, it'll actually tell you how to better structure that playbook — maybe it's an edge case the developer didn't think of, and it helps them almost self-heal the playbook. All that data is inside the final report, with the original data injected back in. Let me get a drink of water real quick.

All right. So I added this one to give everyone more of an idea of exactly what the other side is doing with AI. As defenders — we're in corporations, or working for security teams — we're still trying to decide if we want to use this in any capacity on our teams; it's wrapped up in red tape and such. While that's happening, attackers are using it. They've been using it for quite a while, and they're getting more and more sophisticated. The first one I want to go over is AI-generated

phishing emails. When I first started in cybersecurity, there was no PhishMe or Cofense or anything like that — we were literally looking through email headers in a shared Exchange inbox. After a while it was pretty obvious: you had the usual red flags — a bit of broken English, typos, things that didn't look right or make sense. Well, if attackers are using AI, all of that goes away. It makes it harder to discern what's real and what's not without really deep-diving into it.

The second one is deepfake technology. That's been happening not just to companies but to actual private individuals, on their own personal devices. There was one story where a lady thought she was having a relationship with, I think, Jason Momoa. She was chatting with him on FaceTime or whatever, it actually looked like Jason Momoa, and she was sending him all kinds of money. I'm not sure why he would need money from her, but it was working. So it's pretty dangerous stuff.

And to take that into more of a corporate spotlight: I'm standing up here, and this is probably going to be put on LinkedIn or whatever. Somebody could take my facial structure, see how I'm talking, how my lips are moving, how my voice sounds, and use that to train an AI model. They could pretend to be me, FaceTime my boss or whoever, and say, "Hey, my account got locked out — can you reset my password?" So it's kind of crazy

how they're using that technology. And the one that really hits home is AI-powered malware. This isn't in wide use at the moment — it's pretty much used by state-sponsored actors. What this is, technically, is malware with an AI model built into it. So what happens is, maybe there's a phishing link you click, or whatever gets it onto your machine — and now that machine effectively is the malware. You're not looking at it like malware affecting the machine; that malware actually owns that machine now. Because if you block the hash, block the file name, try to do anything with your EDR tool, it's going to realize, "OK, I'm getting blocked here, so I'm going to change my hash on the fly, I'm going to change my process name," and it keeps cycling through that. Or maybe when it changes its hash, it sleeps — it does nothing at all, staying undetected for some amount of time. And that's not the only thing

it does. It also does automated vulnerability scanning. What this entails: they're on the machine, maybe not detected yet. Or maybe they were detected, and the analyst blocked that hash — "Hey, it's blocked, we deleted it, it's removed, everything's good" — but didn't realize it either made a copy of itself or renamed itself. So while it's hidden, it can run vulnerability scans, hitting whatever connections it can reach to other machines from that beachhead. Say it finds a vulnerability in a server. Either it's already preloaded with actual packages or exploits, or it reaches out to a C2 domain — and they could have hundreds of IP addresses ready to swap out. It pulls that down: "Here's a match. We found a vulnerability, and here's the package to exploit it. I'm going to pull it down and use it." So I was thinking about how you stop this from happening. The only way to really stop it is probably right when it gets onto that machine, because the first thing it's going to try to do is pivot — copy itself and spread, almost like a ransomware tactic. The best thing you can do is isolate that machine, completely, in every aspect, to make sure the malware doesn't get out. The main reason I went over all that, like I said, is to show how sophisticated and how dangerous AI is in the attacker's hands. All right. So now we're going to go into the

demonstration. OK, let me get this on the screen. Oh — sorry, I didn't share my screen. Let me see... I may just have to close the PowerPoint. There we go. Can everybody see that? Do I need to zoom in at all? All right, sorry about that. I'll go ahead and run it. So this is the demo, and I'm going to walk through exactly what's happening and how ChatGPT is interacting with XSOAR. The first task you don't really have to worry about — that's just me injecting fake malicious IOCs. The data in here is not any kind of

sensitive or live data — it's all made up by that task. The second task here — let me zoom in... I'm not sure if you can see it, so I'll just describe what it's showing. This is the prompt, in its entirety, that I've been using. It says: you are a tier-one cybersecurity analyst integrated into Palo Alto XSOAR, and then it goes into the incident content, the context. The way this is formulated never changes — it's the same for every single incident, so it's formatted correctly for the proper data sanitization. It has the extracted indicators, so you can see things like 192.168 addresses — actual, potentially real impacted artifacts that you don't want being sent out.

So next we set the prompt with the sensitive data, and then it sanitizes the prompt. I'm not sure how well this shows — it's kind of jumbled in there — but it's replacing the values with dummy labels: dummy IP, dummy username. It's stripping that data out. Then we go down to the sanitized prompt. It's taking the data from the task above and running a Python script using regular expressions — that's why the prompt has to stay the same. It pulls the real values out, puts dummy labels in, and then sends that completed prompt to ChatGPT. At the top here it's saying: suspicious credential-dumping activity was detected, specifically involving use of the tool Mimikatz, and it lists the hostname, which is now a dummy hostname, a dummy IP, and, down here, the username.

It also tells you what actions it's taking. Let me see if I can zoom in on this. The actions it's taking: it's choosing to quarantine — isolate — the machine. In the prompt, we present it with preconfigured actions it can take if it decides they're necessary. So it decided to quarantine the machine; it decided to block the malicious domain by adding it to the EDL — the external dynamic list — on the firewall's block list; and it also decided to reset the

credentials. And potentially this could all be happening in seconds. Then you have the playbook error handling. Technically there were no errors present, but if there were, it would say, "Hey, your task failed," and you can route the error path of a failed task to ChatGPT to diagnose why it failed. On top of that — I didn't build it into this demo — you could have it go find missing data. Maybe something happened where a context key wasn't populated: you're trying to isolate a hostname, and the task fails because the value is null — it's blank. The engineer can configure predefined SIEM queries — one against your EDR, one against Windows logs. It takes the data it does have — in this case, maybe an IP address and a username — correlates it, finds the missing data, brings it back into the playbook, and then the playbook can continue its incident response.

Then there are the escalation details. I kind of threw this one in — there was actually a task in there to notify the CEO. I didn't think it was logical to notify the CEO at two in the morning over one incident; I just put it in to see what it would do. Probably 90% of the time it decides not to call the CEO, which is probably good. Then there are recommendations — things like "implement regular security training" and basic stuff about Mimikatz. Also, I forgot to mention at the top — it's somewhere in here — it'll actually assign

it to an APT group — "Hey, this APT usually uses this" — and tie it to the MITRE ATT&CK framework, and that's all ChatGPT doing it. In one of the prompts, I purposely left the MITRE information out because I wanted to show how it can correlate all of that on its own. It pulls in the tactic and the technique, correlates them to a group, and then in the recommendations it can also say, "These are the usual attack patterns of this APT group," and it might suggest certain tightening or additions to your playbook.

All right, let me close that one out. So it sent the prompt to ChatGPT, and then I parse the response — the response is just broken out into strings — and what you see here is XSOAR actually taking action on ChatGPT's recommendations. You have "isolate machine," which is tied to this yellow action. The action isn't actually doing anything in the demo, but in a live environment this would be the automation connected to your EDR platform that actually goes out and isolates the machine. The next one is the XDR block: since we're using XDR, it's going to add the hashes and any file names and paths it's living on, and automatically block them. And then again, it adds the domain to the firewall block list. Technically this would all be happening at once — I just put these pauses in so I could demonstrate it. And then reset credentials: we're assuming the user is not on at two o'clock in the morning running Mimikatz on his machine, so we can probably safely deduce that his account is compromised in some way, shape, or form. So it resets his credentials. And I think it also recommends

in the final report resetting the credentials of anyone else who has used that same machine. So I'm going to go ahead and complete these tasks. It takes a little bit... all right. Once that's done, it continues down the workflow. What it's doing now is creating an email template, and that email template still has the dummy data in it. This demonstrates how we can safely use AI by redacting the sensitive information your company is worried about exposing, and then, once the AI has done what it needs to do, safely put that information back into the workflow. Honestly, the information never left — it's still inside XSOAR. We're just replacing the labels so we can send it out to the SOC or whoever, because it wouldn't do anyone any good to leave the dummy labels in when we're trying to tell someone exactly what happened. So let's see — I'll go in here, and yeah, it still shows dummy hostname, still has dummy IP, and now dummy username. It has the whole situation that happened: malicious file observed, suspicious outbound domain. It's literally giving you an entire report of what happened, what it did, and also what it didn't do and the reason why. So it's completely auditable — you know exactly what it did and why it didn't do something.

And right here, you can see it actually matched the MITRE technique — T1003, credential dumping. That wasn't in there; I didn't give it to the AI. The AI decided that on its own, based on the file and the type of activity happening. All right. Then, for the one underneath it, I pretty much had to pull that out of the context and create its own context key. And right here I ran another Python script that's reinjecting that data back in. It says "rehydrated report successfully," and then we can actually see the full report. Now, if we look at

the full report here — let me see... I'll just highlight it. It's corp workstation 1337. That's not a real workstation, but for this demonstration it's the "real" workstation. Before, it was dummy hostname; it's no longer dummy hostname — the script put the real name back in, and it also reinjected the IP address and the username plus the domain, which in this case doesn't really look like a domain — it's actually the workstation name. It continues on and tells you pretty much what we saw in the other one: the malicious files it observed and what it did. It's exactly everything it sent when it had the dummy data in, with the real data put back, and now you can go ahead and send that. All right, so that's it for the demonstration. Let me try to start the slides back up.

All right, so this is a bonus question. For a show of hands: how many people actually use ChatGPT in any sense at all? All right. How many of you use it for work — sending an email, maybe just to make

sure you're not being offensive to anybody? So, for anyone who works in IT or a security department: how many of you are actually putting incident data in there, or saying, "Hey, I'm creating this regular expression," or "I'm writing this Python code and having problems" — how many of you use it to debug, or anything like that? I'll raise my hand for that one. But just a disclaimer: for any incident data you put in there, you've got to make sure it doesn't contain any actual sensitive data.

So — how much did that ChatGPT demo cost me? Anyone have any guesses? Ten dollars? The entirety — not just this demo, but me building it and constantly running it over and over again — probably cost me about ten cents, if that. This demo right here probably cost less than two cents. In the demo I'm using the GPT-4o model, which isn't the best, but it's the most cost-effective. If you were using this in a live production environment, you'd probably want the best of the best at the time. That costs more, but it also improves the reasoning, the responses, and everything it sends you.

All right, and that's it. Does anyone have any questions or concerns? Should I stop presenting? To be honest, the last time I presented was probably five years ago at RSA, and I've been working from home for quite a while, so I'm trying to get back out there and work on my speaking skills. All right — if no one has any questions, then I guess... Yeah?
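For context on the model choice mentioned just above, the request an XSOAR automation might assemble could look roughly like this — a hypothetical sketch shaped like an OpenAI chat-completions payload, not the talk's actual integration code:

```python
# Hypothetical sketch of the request body a SOAR automation might build.
# "gpt-4o" is the cost-effective choice from the talk; a production
# deployment might swap in a stronger (pricier) model.
def build_chat_request(prompt: str, model: str = "gpt-4o") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a tier-one SOC analyst."},
            {"role": "user", "content": prompt},
        ],
        # deterministic output keeps playbook runs repeatable
        "temperature": 0,
    }

req = build_chat_request("Summarize this sanitized incident.")
```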

Yeah — sorry, we can't hear you; could you step up? Oh — are you talking about prompt attacks? I actually don't think so; I didn't get that far into it. But yeah, that's a good question — I'll definitely look into that. Any other questions for me? Yep.

Yeah, it actually puts in a placeholder — a label. Let me walk over here. Yeah, it's a label. The script runs using regular expressions, grabs that data, pulls it out, and replaces it with, say, "dummy hostname" if it's a hostname. It's not decided automatically in any way — it's all determined by the script, by what you actually want to redact. Once it's sent to ChatGPT or whatever AI, the AI still has all the other context — just not what you changed — and it responds with the dummy data still in place. Then, once you're done using ChatGPT and want to do a summary report or whatever, another script puts the data back in. It's a different script, but it's really just grabbing the same context key. The key is never deleted or changed at all inside the SOAR platform. So it's just removing that data from the equation for a minute — ChatGPT does its thing, and once it's done, it puts it right back. All right, any other questions? Yep — I hear somebody. Yep.
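The round trip described in that answer — regex redaction out, placeholder labels to the model, rehydration back in — might be sketched like this (the patterns and naming scheme are illustrative assumptions, not the talk's actual script):

```python
import re

# Illustrative patterns only; a real deployment would match its own
# artifact formats (hostnames, usernames, internal domains, etc.).
PATTERNS = {
    "DUMMY_IP": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "DUMMY_HOSTNAME": r"\bcorp-[\w-]+\b",
}

def redact(text: str):
    """Swap sensitive matches for placeholder labels; keep the mapping."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        # dict.fromkeys dedupes repeated matches while preserving order
        for i, match in enumerate(dict.fromkeys(re.findall(pattern, text))):
            placeholder = f"{label}_{i}"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def rehydrate(text: str, mapping: dict) -> str:
    """Put the real values back once the model's work is done."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

safe, keys = redact("Mimikatz seen on corp-ws-1337 at 192.168.1.50")
report = rehydrate(safe, keys)
```

As in the talk's demo, the mapping never leaves the SOAR platform — only the sanitized text does — so the final report can be rehydrated losslessly.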

Have you already deployed this? No — I'm with ADT, and they're still trying to figure out where their place is with this, and with AI in general. But hopefully, after watching my presentation, maybe they'll change their mind. Again, the main concern — not just at my company, but at other companies I've heard from — was the data privacy problem. Hopefully, what I just demonstrated here — and I'm sure there are more straightforward, cleaner ways to do it — is one way you can resolve that problem. So, yep?

I missed the earlier software name. Oh — that was XSOAR, Palo Alto's SOAR product, what used to be Demisto. A lot of this you can use inside the playbook, like I said, but certain portions depend on how you create the playbook. If you create a playbook that follows a static workflow — say you base everything on IOC reputations, which I don't recommend — then you would have

an alert that triggered on Mimikatz at two o'clock in the morning, but the IOCs all came back benign or unknown. The playbook could run its equation and say, "OK, this isn't a critical threat, this isn't something to be concerned about," and it would drop it — it could close it. Then that goes unnoticed, and the attacker keeps doing whatever it wants. But with AI, it looks at the whole picture — the behavior and everything. Could you code that into XSOAR by itself? You probably could, but if there are edge cases or situations you weren't expecting, the task or the playbook is going to fail; it won't read something correctly. So this kind of encompasses everything — it adds real reasoning logic outside of XSOAR. All right, any other questions for me?

All right, then I'll wrap it up. Thank you.