
Hype vs. Hands-On: What GenAI Actually Brings to ID & Response

BSides Tallinn · 43:30 · 70 views · Published 2025-10 · Watch on YouTube ↗
Category: Research
Style: Talk
About this talk
Generative AI promises to revolutionize how security operations teams and investigators detect and respond to threats, but how much of this promise is real and how much is just hype? In this talk, we go beyond vendor marketing to explore what practitioners and experts really think about GenAI's place in modern detection and response workflows. Drawing from a Delphi study I conducted with global SOC leaders and AI specialists as part of my academic research with Luleå University of Technology in Sweden, we'll uncover:

- Where GenAI is already making an impact (and where it's not) for detection and response workflows
- Key opportunities for GenAI in threat detection, triage, and investigation
- Real-world challenges: hallucinations, trust issues, operational risks, and more
- How security analyst roles and skills are evolving in the face of GenAI adoption
- Practical considerations for integrating GenAI into existing detection and response SOC processes

Expect an honest, evidence-based discussion, free of buzzwords and grounded in what the experts are actually experiencing on the ground. Whether you're skeptical or optimistic about AI in detection and response workflows, this session will give you a grounded view of the path forward.
Transcript (en)


>> All right, ladies and gentlemen. While stage two is preparing to find out strategic milestones for blue teams, here on stage one we are going to move on with Hype versus Reality: What GenAI Actually Brings to Incident Detection and Response. So please welcome on stage Mr. Marvin GMA.

[Music]

All right. Thank you.

And I'm using this here. Okay.

All right. Can people hear me? Yes? All right, great. Oh, it's this one then. Thanks a lot. It's always strange with these arrangements because I can't really hear myself, right? But let's see how it goes. All right, I think we can get going. So welcome to this talk. My talk is titled Hype versus Reality, and I'm going to share what GenAI actually brings to incident detection and response. This is based on some research that I did with a university in Sweden, and I really hope it's going to be

applicable to the folks in the audience today. Just a show of hands: how many of us are working in a SOC or a detection and response function, or something close to that? All right, about 30%, I would guess. Thanks a lot for that. Let's start off with who am I. My name is Marvin GMA. I'm a principal security architect at Elastic. I'm passionate about security operations and intelligence, and I've been working in this domain for about eight or more years. I'm based out of Stockholm in Sweden, just over the water. I love to play bass guitar. Outside cybersecurity, I love football; I'm a Manchester United fan. Please don't

laugh. And slowly but surely I'm falling in love with Estonia. I've come to this country two to three times every year for the past three years or so. Really interesting people. I'm super loving Estonia in that sense. All right, this is the agenda for my talk today. I'm going to go through hype versus SOC reality, and why I thought there was a need for doing this research: what's realistic in terms of how we're adopting AI in security operations versus the hype that's there in the industry today. Then I'll share some expert insights that I gathered during my research, and also some key hints on how

we can move from this research to actual practice. I'll give my closing thoughts and discussion after that. So, to start off with: we're at a new inflection point. AI has been spoken about for a long time, but I think we're beginning to see the realistic applications of AI in everyday personal life, and in our organizations as well. Technology is advancing, and when you look at where AI is today, it's at that inflection point: it's becoming a reality, similar to the other generations of technology that we've gone through in the past. Those of you that have come to my

presentations in other audiences know I like to show this slide every time I talk about AI. This is a simple Google Trends chart showing how often people in Estonia are searching for the key term ChatGPT over the past 12 months. You can see that, generally speaking, Estonians love their Christmas break, because during the Christmas holiday, around the end of December and January, the search for ChatGPT on Google goes down because people are on holiday. Then in January people come back to work and the search trend goes up a little bit. You can also tell that Estonians love their Easter holidays, because

again, somewhere around March or April, the search for ChatGPT goes down. When people are back from their Easter holidays, it goes back up again. Another telling trend here is that Estonians also love their summer breaks: somewhere around May, I don't know what goes on, but the search for ChatGPT goes down on Google, and then it goes in that fashion. And you love your summer holidays as well, clearly, based on what we see in August. But what does this tell us? When we're not on holiday, we're either working or doing something productive. And it shows that people are beginning to use ChatGPT even in the workplace. For those of you

that are in data privacy, that could be concerning, or maybe not. But then the question is: okay, great, we're seeing this adoption of AI in personal lives and in organizations. Can AI developments help augment those of us that are working in the SOC, the workflows that we're doing in the SOC or from a detection and response point of view? And this is where we need to go back and reflect on what a SOC does. A SOC basically does everything from collecting relevant data, doing detections and security analytics on that data, and then also doing the alert triaging, the investigations and the

incident response. And of course there are other auxiliary functions that a SOC does, such as digital forensics or threat intelligence analysis and all those juicy things. So again, the question is: can AI really help augment SOC workflows? We know that there's a lot of vendor hype. I work for Elastic; we are making noise about AI. Talk to Microsoft, talk to Google, and there are a lot of startups; there's just so much noise about AI. The industry is really hyping AI. But is AI adoption realistic for SOC teams and detection and response teams in reality? So, as part of my studies, I did a master's with Luleå University

of Technology, a university in the north of Sweden. I decided for my research to look at GenAI in the SOC. What are the realistic opportunities? What are the challenges of adopting AI in a SOC? And what perspectives are there with regard to adoption of AI in the SOC? So why this topic? We've already seen there's rapid growth of GenAI in cybersecurity. But SOCs are also encountering a lot of challenges: the usual stuff, alert fatigue, burnout, a shortage of skills in the market, complex threats, especially in the times we're living in. The threat landscape is continuously changing, and that's putting

pressure on SOC teams. And then, bringing it back to GenAI, there's a need for practical guidance on how you can actually adopt GenAI in the SOC. I meet a lot of SOC teams in my line of work, almost every day, and there are just a lot of blockers and challenges in terms of adopting AI in the SOC. So my research focused on, again, what are the opportunities and challenges for effectively adopting generative AI in the SOC. I was looking to identify the promising use cases, not from a hype point of view but from a realistic point of view, and I was also exploring the

readiness and risk factors associated with adopting AI. If you're a SOC, what are those readiness things you need to have in place? Ultimately, I was looking for real empirical evidence from people that are actually working in the SOC. Not a vendor, not somebody doing a startup, but people that literally live and breathe the SOC every day. So I researched research methodologies. It was hard to choose one, but essentially I ended up with the Delphi method. This is a qualitative way of doing research, and I liked it because what I was looking for was

expert opinion around the topic. I was not looking for vendor or any other opinion, but people that are really well versed in the SOC and detection and response, as well as in AI itself. So again, it's a qualitative, expert-driven study. I was trying to gather those insights, and it basically involves multiple rounds of surveys and interviews. What I was looking for was an analysis of consensus: do experts in the industry actually agree that AI can make a difference in the SOC or not? So I was looking for that consensus, that agreement, or even the divergence, the disagreement, in opinions. The Delphi method is

suited for emerging topics, not just in technology but in other domains as well, and since AI is an emerging, trending topic, I thought the research method would be well suited. I'll go through this quickly. It's a lot of text and steps, but basically the Delphi method approach that I took, based on the literature review that I did, has three stages: the exploratory stage, the distillation stage, and then the utilization stage of the data that's been collected. In the exploratory stage, you identify the research question, which I already did, as shown on

the previous slide, and then you need to identify potential experts: people that are relevant to the study itself. In my study, again, I was looking for SOC personas. I had to define criteria for which people in the SOC I wanted to be part of this study, and then obviously I needed to reach out to experts, get their consent, do the whole privacy assurance, and then get them on board. So after the first stage I ended up with 12 practitioners with deep SOC and GenAI experience. I promised not to reveal names, but I can talk a little

bit about the organizations they're coming from. I had three SANS instructors, people that are actually making a difference in the SOC space, without mentioning names again. I had SOC managers and security analysts. I had people from AI research centered on security as part of my expert panel. A really impressive panel of experts that I managed to gather. Then I was also looking for nice variance in experience: how many years have people been working within a SOC context? When I gathered my panel, the experience range was between 10 and 20 years of experience.

Then the last criterion I was looking for in my study was a mix in terms of the type of organization my expert panel was coming from. So I had a mix of in-house SOCs, security operations centers that protect a single organization, and then, since we know there are managed services around SOCs as well, I also interviewed SOC personas from MSSPs, those vendor security people, as well as consultants and academic researchers focusing on security operations. In the second stage, the distillation stage, I was looking for expert opinion, so I had to go to these experts and ask them questions. What I did in the

distillation stage was go through two rounds of interviews with these experts, first gathering their open opinions around the opportunities, the challenges, and other perspectives on the adoption of AI in the SOC, and then finally, of course, doing the reporting. So like I said, the research process was two rounds. In the first round, it was open-ended questions, and I wanted to gather broad perspectives. So, looking at GenAI in the SOC, I asked each expert: what opportunities are you seeing in terms of adopting GenAI in the SOC? What challenges are you seeing? And any other perspectives that you

might want to share as part of the research itself. In the second round, I structured some ratings. For those of you that have done qualitative studies before: when you do open-ended interviews and surveys, the next thing you need to do is begin to identify themes in the responses you're gathering. And then the final part was the utilization stage. There are some quite revealing results that I got, and I'll share them with you in a bit. So, what were some of my key findings across these two rounds? After the first round, I did what we

call thematic analysis. Again, the questions were open-ended: what are the opportunities, challenges, and perspectives around adopting AI in detection and response? Based on the thematic analysis, I identified four key themes. The first one was around opportunities and use cases. The second: a lot of my experts were really emphasizing readiness and integration, having a structured approach to how you make yourselves ready to adopt AI in your SOC or your detection and response function, but also the aspect of how you integrate AI not just into your workflows but also with your current tooling. The third one was challenges, I mean

sorry, the third key theme was challenges and risks. And finally, a lot of my experts were also talking about how the roles within the SOC are going to change in the future with the adoption of AI. So we'll look into some of the specifics around these four key themes. So far so good? All right, you all look serious. It's a little bit scary, but I'm used to this. All right, theme one: opportunities and use cases. Remember, as part of the Delphi study, I was looking for levels of agreement and levels of disagreement around these themes. So when I asked these

experts, hey, what are the opportunities and use cases you're seeing with regard to adopting AI in the SOC, there was a lot of consensus around things such as alert deduplication, because again, in the SOC there's a lot of alert fatigue. Most of my experts said, hey, there's a high chance and a high-value use case in doing alert deduplication using GenAI. The second one was the aspect of having an assistant helping the SOC personas themselves. We all know about the copilots, the Elastic assistant, and all these assistants. Most of the experts said those things are actually proving their value within the SOC itself.
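Alert deduplication comes up above as a high-consensus use case. As a rough, vendor-neutral sketch of the exact-match baseline such a workflow builds on, here is a minimal example; the alert field names (`rule`, `host`, `user`) are hypothetical, and a real GenAI-assisted pipeline would layer fuzzier matching (embeddings or LLM-judged similarity) on top of this.

```python
from collections import defaultdict

def fingerprint(alert: dict) -> tuple:
    """Normalize the fields that make two alerts 'the same' event.
    The field names (rule, host, user) are illustrative, not from any product."""
    return (alert["rule"], alert["host"].lower(), alert.get("user", "").lower())

def deduplicate(alerts: list[dict]) -> list[dict]:
    """Collapse duplicate alerts, keeping the first occurrence and counting repeats."""
    groups: dict[tuple, dict] = {}
    order: list[tuple] = []
    for a in alerts:
        key = fingerprint(a)
        if key not in groups:
            groups[key] = {**a, "count": 1}
            order.append(key)
        else:
            groups[key]["count"] += 1
    return [groups[k] for k in order]

alerts = [
    {"rule": "brute_force", "host": "WEB01", "user": "alice"},
    {"rule": "brute_force", "host": "web01", "user": "Alice"},  # same event, different casing
    {"rule": "malware", "host": "db02", "user": "bob"},
]
deduped = deduplicate(alerts)
```

The two `brute_force` alerts collapse into a single entry with a repeat count, which is the shape of output an analyst (or an assistant summarizing the queue) would then triage.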

And then there was also the aspect of bringing in context. We know that AI is most of the time backed by LLMs, and LLMs are trained on public data. So there's a need to use AI workflows that also allow you to bring in a little bit of context, organizational context, into the workflows, and most of my experts identified that as one of the big opportunities for adopting AI in the SOC. The other high-consensus area was knowledge retrieval. Again, LLMs are trained on public data, but we also want to get context

from our organization, and most of them were mentioning concepts such as RAG, retrieval-augmented generation, and how you can embed that into your SOC workflows as part of the opportunities and use cases. Of course, there were outlier opinions. There was low consensus around detection engineering being aided by GenAI. I know there are those of us that do detection engineering in our SOC or detection and response function. My study actually showed that most of my experts were saying we need to exercise caution when it comes to doing detection engineering that is fully aided by AI. So there has to be a

little bit more of that human-in-the-loop kind of validation. This is just a distribution of the opinions that I got. I gathered these insights and did a bit of statistical analysis on the responses themselves. You can see that certain areas, the ones that I highlighted on the previous slide, had a lot of consensus around the opportunities and use cases for AI in the SOC. The next theme is that of readiness and integration. So what did I discover in terms of consensus and divergence around readiness and integration? A lot of my experts cited certain critical

prerequisites: if you're a SOC and you're trying to bring AI into your production environment, there are certain key steps that you need to have in place first. The first one, where there was high consensus, is that you need to have structured SOC workflows. You don't really want to introduce AI in the SOC and just use it as a band-aid to hide the inefficiencies in your SOC. You really need to make sure that the process aspect of the SOC itself is working correctly. The other thing is analyst skill readiness, because again, AI is its own domain. It brings its own principles and its own concepts. Do the

SOC analysts understand how this technology works? How do you interact with it? How is it built? What are some of the caveats you need to keep in mind when using AI in the SOC? Then the other readiness aspect was tool compatibility and integration. The current tool stack that you're using in your SOC: is it technically compatible in terms of integrating with AI? That's something you really need to do an assessment and a validation on. And then last but not least, you also need to have clear governance and guardrails. We'll look at that when we begin to

touch on the challenges that are associated with adopting AI. But in terms of readiness and integration, you really need to have that governance around privacy, around access control: who's going to use these workflows, what kind of data are you going to be exposing to AI technologies, and things like that. And then the last one in terms of readiness: once you've defined your SOC workflows, the recommendation is to also make sure that you identify which key workflows you actually need to have that human-in-the-loop validation for, because again, there's a misconception that AI will replace humans. Personally, I don't believe in that. Uh but I think AI is

really going to augment a lot of our workflows. Of course, some roles are going to evolve, but there are certain key processes, especially from a SOC and detection and response perspective, where you always need to have that human-in-the-loop kind of operation as part of the workflow. So, in terms of those critical prerequisites for GenAI adoption in the SOC, I did a distribution across the sub-themes within this theme, reflecting the consensus and divergence in opinion across the expert panel that I gathered. So far so good? All right, you all look serious again. I'm trying to smile; you're not smiling back. All right. Theme three: challenges

and risks. What are some of the challenges associated with adopting AI in the SOC or in detection and response? What are some of the risks that AI brings into our workflows? The top concerns, the ones with high consensus, were hallucinations and incorrect outputs. Again, just a show of hands: how many people are implementing AI in your environment, in your SOC workflows? Yeah. Two, three, four, five hands. Six. How many of you are still exploring AI, not so sure if you're adopting it? Okay. And how many of you agree with the sentiment that the biggest concern is hallucinations and incorrect outputs? Yeah, it's more hands, right? That's

the biggest concern, and that's the part that irks me even when I'm using ChatGPT: the fact that it hallucinates a lot. Sometimes I just want quick information, but I spend time doing the prompt engineering and trying to structure the prompt. So when you bring these kinds of workflows into a SOC, it becomes an even bigger challenge. So hallucinations and incorrect outputs were the biggest concerns. Then there's also a very dangerous aspect of adopting AI where you begin to develop too much trust in the outputs coming from GenAI. When you think about the mandate of the SOC and the detection and response

function, you don't really want to begin to trust something that hallucinates, something that gives you the wrong outputs. There was also high consensus around protecting AI itself, because it becomes a new attack vector that the SOC also needs to do continuous monitoring on. We've seen that some models have been hacked in recent times. There are concepts such as prompt injection and adversarial use, where the adversary begins to use some of that data to confuse you or sway you away from what's actually factual. So there's also the concept of model security: as you introduce these models in your

environment, you also need to make sure that you are protecting yourselves against attacker activity around the models themselves. And then lastly, and I believe it's the last one, is privacy, explainability, and accountability. With privacy: most times when you adopt AI, you're integrating it with your private organizational data. We need to think about what data we're allowing to be put into these models. Depending on the model, some of them will take your data and continue training on your private data. So you need to do that data classification and understand what data you really want to integrate into these AI platforms. And then the other

aspect, as a challenge, is explainability. ChatGPT is great, and now they're trying to bring in explainability by giving you reference links, but it's still a lot of made-up stuff that these platforms are producing. And those of us that work on critical incidents know that sometimes we need to take evidence to a court of law, for example; we need to gather evidence and show that it's actually factual. But we see that today AI lacks a little bit of that explainability. And then of course there's the accountability aspect as well: who did what, when, where, and how in terms of using some of these workflows. Again, I show some of the distributions around the responses from the experts.
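For readers curious what "looking for consensus" means mechanically in a Delphi round, here is a small sketch, assuming 5-point Likert ratings and the commonly used rule that an interquartile range (IQR) of at most 1 indicates consensus. The ratings below are invented for illustration, not the study's actual data.

```python
from statistics import median, quantiles

def consensus(ratings: list[int], iqr_threshold: float = 1.0):
    """Return (median, IQR, reached?) for one statement's Likert ratings.
    An IQR <= 1 on a 5-point scale is a common Delphi consensus criterion."""
    q1, _, q3 = quantiles(ratings, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    return median(ratings), iqr, iqr <= iqr_threshold

# Hypothetical panel ratings (1 = strongly disagree ... 5 = strongly agree)
panel = {
    "alert deduplication is a high-value use case": [5, 5, 4, 5, 4, 5, 4, 5, 5, 4, 5, 4],
    "fully AI-aided detection engineering is ready": [2, 4, 1, 5, 3, 2, 4, 1, 5, 3, 2, 4],
}
for statement, ratings in panel.items():
    med, iqr, reached = consensus(ratings)
    print(f"{statement}: median={med}, IQR={iqr}, consensus={reached}")
```

The first statement has tightly clustered ratings (consensus), while the second has a wide spread (divergence), mirroring the high- and low-consensus areas described in the talk.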

And again, it's the consensus that I was looking for. Certain areas had a lot of consensus, and certain areas had a little bit of divergence in opinion. Finally, the last theme is that of future SOC analyst roles. A lot of experts in my study said that there are going to be changes to some roles, primarily the SOC analyst role. If you're a SOC analyst, maybe it's time to get concerned, or not; let's find out what my study actually brought out. So, in terms of anticipated shifts in SOC staffing, a lot of my experts said that in the near future there are going to be

fewer tier-one or entry-level roles in the SOC, because a lot of that work is going to be automated by AI. How many people agree with that sentiment? That hand went up so quickly. All right, I also tend to agree. I think there are going to be fewer tier-one entry-level jobs, but let's see how the other points came out. Most of my experts said yes, the tier-one role is going to shrink; there are going to be fewer tier-one roles, but the focus is going to shift to workflows such as threat hunting and detection engineering, the ones that require a bit more human validation. So the role is going to change. It's not

like people are going to go away, but I think they will be spending more time doing more advanced, human-relevant work such as threat hunting and detection engineering. It's also anticipated, from a staffing point of view, that there's going to be ongoing analyst upskilling. Analysts will now need to understand a little bit about data science, what AI is, and how you can actually integrate and adopt it within the SOC itself. Most of my experts also said human judgment is still going to remain critical, which aligns with the previous theme: you still want to find, in your workflows, those parts that actually still require that human-

in-the-loop kind of validation. So human beings are not going away. If a vendor is telling you "AI SOC, fully autonomous," maybe we need to have a little bit of a conversation. I also don't fully agree with that, because what is secure for organization A may not be secure for organization B. There always has to be that human element to bring context into the whole picture. And again, similar to the previous slides, this is just a distribution of the consensus and disagreement, the variance in the responses from my experts. So, coming to the discussion and recommendations: what were the overall insights that I gleaned out of my report? The first one is

that we noticed strong consensus around where AI is very applicable: those repetitive tasks such as alert triaging, streamlining triage, and also just supporting analyst workflows. The assistants and those kinds of tools are really low-hanging fruit in terms of use cases that you can adopt within the SOC. And despite the promise, participants also emphasized that you should not look at implementing AI as a way to replace human beings; rather, you need to augment your workflows with AI. There are a lot of efficiency gains that AI brings into the whole picture in the SOC. And then

key barriers still persist: trust in outputs, hallucinations, and the governance frameworks as well. Usually when we talk about governance and regulation, some people start yawning, but these things are really important, because they form the guardrails that we need for adopting this technology. And again, the human being is still going to be a key part of AI adoption in the SOC. And then last, but not least, there's an emphasis on maturation. If you're not really mature as a SOC or as a detection and response function, maybe don't rush into AI. Maybe clean up your house first, and then begin to adopt AI when you've

actually got some structured processes within your organization. Again, it's that garbage-in, garbage-out kind of concept: quality in, quality out. This was the overall consensus distribution. Doing the thematic analysis, I did an overall summarization of where there was the most consensus across the four themes that I introduced earlier, and this is just a summary of that thematic analysis of my research. So, from research to practice: what are some of the key takeaways that we can practically go and do in our SOCs? Start with narrow, low-risk use cases. Where you want to start off in terms of

adopting AI in your SOC is with those high-gain but low-risk kinds of use cases, where there's not so much risk from privacy or hallucination or those kinds of concerns. Things such as response guidance: an alert has fired; hey, what should I do with this? Models are really good at suggesting workflows. Those are low-hanging fruit in terms of use cases you can adopt. Things such as alert summarization and alert explanation: assistants are really good at these workflows, so maybe you want to start from there. The next thing is, obviously, you want to build guardrails and validation steps. And this is

why I'm also a huge fan of agentic AI, without really going into the concepts of what agentic AI is. But agentic AI basically helps you bring validation loops into the entire workflow. So you need to make sure that you've got double checks. There's always human oversight for those very critical workflows. And then obviously there has to be that approval workflow: the human always needs to make the final call in the workflow. And I decided to say you need to treat GenAI like a very, very smart junior analyst. That means it's very knowledgeable, but you also need to

exercise caution. Sometimes we get these junior people in our environments; they're brilliant, they're smart, straight out of uni, but sometimes they're too ambitious and they make mistakes, and I think GenAI is a little bit like that. I'm not throwing shade at the graduates in the room, but that's just the truth. Okay. And then the last one is that you need to measure value. I'm a huge fan of SOC metrics, of detection and response metrics. You need to track them, because again, the biggest gain with AI is efficiency, more than replacing human beings. So you need to measure: hey, how were we doing before we adopted AI? And now that we're

exploring AI or actually adopting AI in our SOC, how much faster are people getting in the workflows themselves? You need to track those metrics. And that was my last slide. I'll take some questions. Any questions?
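To make the "measure value" point concrete, here is a tiny sketch of one such before/after metric, mean time to triage; the timestamps, the metric name, and the baseline versus AI-assisted split are invented for illustration.

```python
from datetime import datetime, timedelta

def mean_time_to_triage(incidents: list[dict]) -> timedelta:
    """Average (triaged - created) across incidents, as a timedelta."""
    deltas = [i["triaged"] - i["created"] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

t0 = datetime(2025, 10, 1, 9, 0)
# Hypothetical incidents from before and after assistant adoption
baseline = [
    {"created": t0, "triaged": t0 + timedelta(minutes=45)},
    {"created": t0, "triaged": t0 + timedelta(minutes=35)},
]
ai_assisted = [
    {"created": t0, "triaged": t0 + timedelta(minutes=12)},
    {"created": t0, "triaged": t0 + timedelta(minutes=18)},
]
print("baseline MTTT:   ", mean_time_to_triage(baseline))
print("AI-assisted MTTT:", mean_time_to_triage(ai_assisted))
```

Tracking a handful of such metrics over time, rather than anecdotes, is what turns "AI made us faster" into something you can defend.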

Well, I'm assuming this questions. I warned you. So two questions are walking out. Um, >> well, I mean, I I was hoping we're going to break the stereotypes of Estonians, not asking questions, but we're not. >> All right. >> Well, the give them time. You know, they're preparing maybe. Oh, here we go. Amazing. >> Hi. Uh, thank you for the presentation. One question. And uh did any of your participants in the in the survey actually already used AI in their sock or they are just preparing to use it? >> That's a very good question. So I would say about 30% were exploring or in the initial phase of actually implementing GI in the sock. So there's one sock

team. So I had the sock manager and the sock analyst on my panel that came from an MSSP uh kind of setup. So a man is sock um and they were exploring they were in the initial phase of actually implementing AI in the environment >> and uh maybe some vendor specific question also that uh maybe a few years ago we really considered using Microsoft uh defender for experts AI solution but uh it was limited to only high and medium like incidents and uh in overall you could give it permission to just alert you but uh it was also possible to like isolate or take action. Uh so what do you think about these topics because

your recommendation was to start with the lower-hanging fruit, but vendors are already providing top-end solutions that handle high-severity incidents, do isolation, and take action. Which one should customers choose? >> All right, that's a very good question as well. So when it comes to AI and automation, like doing the automated steps when something is going on, I still stand by the same opinion I had about SOAR platforms, right? So your security orchestration, automation and response platforms: you can only automate a process once you've tested that it works, and once you've also validated that those response steps are the ones that actually apply to your organization. And

I think also with the adoption of AI, if we do go down that direction where AI begins to go through the high-severity alerts and then starts to take certain response actions in our environments, you want to make sure you don't turn that on for everything, but you do that for the things that you've actually validated as well structured in that sense. And I still think, though, and again, please let's replay this in the future, I don't think it's going to be fully autonomous. There's still going to be a human in the loop, because again, for a lot of the workflows that we do in the SOC, there's

context that we need, right? And that context comes from the organization itself. So yeah, I think that is a possibility. Technically it is possible to do automated responses with AI, but is it something that we really want to do in a production environment? The other thing that I didn't bring out in my presentation is some of the experts said for very critical assets, for very critical areas, for very critical use cases in organizations, maybe you don't want to automate everything, right? Because you don't want AI making the decisions. And again, it goes back to the hallucinations, and with these AI platforms, the output is not always predictable. Even with ChatGPT, I can ask it the same thing

today, and the response is always going to be different, right? So there's that unpredictability in the outputs. Why would you want to trust that to do automated actions within your environment? So it's a bit of an "it depends" kind of place. >> Yeah, on the last point, you said that you can ask the same question but get a different answer. There was some kind of breakthrough, like last week, where they came to the conclusion that if you put all the different users' queries in one bucket and send them to the model, then the answer will be different. So if you use only one query on one model, then

it should give you the exact same answer every time. So that's already been taken care of at some point, but hallucinations still remain, because if you train on garbage data, then you get garbage answers. >> Absolutely. Yeah. And also it's easy to forget that these models are trained on public data, and we know there's a lot of garbage on the internet, right? So again, it's a whole coffee conversation, but exercise caution. Basically that's what most of my experts were saying when it comes to automated response actions, especially around your critical environments. Just exercise caution. Try these things out. Make sure that they're working. Validate your workflows. And

then once you get to that level where you feel AI can be put into production, you can actually do that. I'm not saying AI won't get to that level, but exercise caution.
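The "validate first, then automate, with a human in the loop for everything else" idea above can be sketched as a simple routing gate. The playbook names and asset list here are hypothetical examples, not anything from the research:

```python
# Hypothetical gate: only response playbooks that have been tested and
# validated for this environment run automatically; suggestions touching
# critical assets, or unvalidated playbooks, go to a human analyst.
VALIDATED_PLAYBOOKS = {"block_known_bad_ip", "quarantine_phishing_mail"}
CRITICAL_ASSETS = {"domain-controller-01", "payments-db"}

def route_response(playbook: str, asset: str) -> str:
    if asset in CRITICAL_ASSETS:
        return "human_review"    # never auto-act on critical assets
    if playbook in VALIDATED_PLAYBOOKS:
        return "auto_execute"    # tested and validated, non-critical
    return "human_review"        # unvalidated model suggestion

print(route_response("block_known_bad_ip", "laptop-042"))
print(route_response("block_known_bad_ip", "payments-db"))
```

The design choice mirrors the SOAR argument: automation is an allowlist you grow as workflows are validated, not a default you carve exceptions out of.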

>> Hi, thank you very much for your talk. You said that in the future entry-level jobs might fall away. How would newcomers get into the field then? >> Yeah, that's a good question. Well, in the research they said there are going to be fewer entry-level roles, and that means a lot of these workflows are going to be automated. We still need the entry-level skills, so to speak, like SOC analyst level one kind of skills, but that's going to be automated in the future. So if something is automated, you don't necessarily want a human being to be doing it. The second point that came out in the presentation is the fact

that maybe the entry roles that are human-led are going to be things such as threat hunting and detection engineering. So those will most likely be the entry-level roles. It doesn't mean we won't have SOC analysts in the future. We will have them, but then also that's where the upskilling of the current SOC roles needs to come into play, right? So we need to make sure that we understand how the role is evolving, and also the skills that human beings need to have in order to work with AI, that augmentation play. I think that's what we're going to see going forward,

right. >> Thank you for the presentation. So my question is: you mentioned that for GenAI to provide us valuable insights, we would need to feed it context about the organization, right? But doesn't this introduce another security complexity, when we might potentially feed it sensitive information about infrastructure or other organizational information? >> Yeah, good question as well. So there are multiple aspects to the answer to your question. The first one is, definitely, yes: when you expose your organizational data to these AI platforms, you take on privacy risk, right? But then also, and I'm going to bring in a little bit of my flavor from Elastic, one of

the things that we do with our AI is yes, you bring your data, but we allow you, first of all, to select which data sets you're comfortable bringing to the model, and we also allow a concept called anonymization, right? And a lot of vendors are doing this anyway. So you can also choose to say, hey, I want to send usernames, I want to send, whatever, credit card details, but sanitize the values in those fields, right? So I think you also need to understand the risk appetite. In the full report of my research I expanded on these concepts: as an organization you also need to understand what your appetite is for adopting AI generally,

right? What are the types of data within your organization that you're comfortable exposing to these models? What are the parts of your organization that you're not comfortable exposing to the models? So I think you need to take it from that perspective as well. And then the last aspect to that question is that there are a lot of models out there, the cloud-based ones, the GPTs, Claude, the Bedrock models, Google Vertex and whatnot. You also need to read the terms and conditions, because that becomes very important. Some models will tell you, hey, by virtue of subscribing to this service you give us permission to retrain our models using your data as well. There are good organizations, without

dropping names here, there are good companies behind these models that will say, hey, interact with our model, but we give you the guarantee that we're not going to use your organizational data to further train the models. So I think that part also needs to be addressed, and that's the governance aspect, right, from the readiness section of my research report. So those are the things that I touched on a little bit more there. I will hopefully write an adoption methodology for AI for SOCs in the near future. But those are the governance and guardrails that you need to have in place before you actually

adopt AI. >> Yeah. >> Thank you.
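The anonymization idea described above, choosing which fields to share and sanitizing the sensitive values, can be sketched as a small field-level pseudonymizer. The field names and event are hypothetical, and this is an illustrative approach, not Elastic's actual implementation:

```python
import hashlib

# Hypothetical sensitive fields the organization chooses to sanitize
# before any event is sent to an external model.
SENSITIVE_FIELDS = {"user.name", "source.ip"}

def pseudonym(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    return "ANON-" + hashlib.sha256(value.encode()).hexdigest()[:8]

def sanitize(event: dict) -> dict:
    return {field: (pseudonym(str(value)) if field in SENSITIVE_FIELDS else value)
            for field, value in event.items()}

event = {"user.name": "alice", "source.ip": "10.0.0.5",
         "event.action": "logon-failed"}
print(sanitize(event))
```

Because the tokens are stable, the model can still correlate "the same user appears in these five alerts" without ever seeing the real value.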

>> Thank you for the talk. You mentioned, or the experts mentioned, prompt injection, but as not very critical. As automation increases with AI agents, shouldn't prompt injection be a bigger threat in AI, or in the system? >> That's a good question. So if you saw the challenges and risks slide, there was a lot of consensus and agreement that prompt injection is a big concern, right? So it wasn't downplayed, it was actually up-played, if I can call it that. It's going to be a big problem. And even without AI, if you're running a SOC, this is my opinion, and if you go to SANS and some of these organizations they make the

same recommendation: you're monitoring your organization, but you also need to monitor the SOC. Who's doing what? You need to audit everything: the tools, the accesses, and whatnot. And I think it's going to be similar with the adoption of AI, because now you're adding an extra attack vector within the SOC function, or detection and response function. You need to monitor, hey, what prompts are going to these models, what responses are we getting back, who's making the prompts in the organization, right? And then also the validity of the outputs from these models, right? So there are ways you can do observability and continuous monitoring of the interactions with these models as well. And then

obviously, if you're using a cloud-hosted model, you also need to keep abreast of vulnerabilities that are being announced by these model vendors. So yeah, it does bring in a little bit of complexity to your equation, but I think the benefits will still outweigh the downsides.
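The "monitor what prompts go to the model, what comes back, and who sent them" recommendation can be sketched as a thin audit wrapper around the model call. The function names and the stand-in model are hypothetical; a real SOC would ship these records to its SIEM rather than an in-memory list:

```python
import time

# Hypothetical audit trail: every prompt/response crossing the SOC's model
# boundary is recorded with who sent it, so interactions can be reviewed.
AUDIT_LOG = []

def audited_query(model_fn, analyst: str, prompt: str) -> str:
    response = model_fn(prompt)
    AUDIT_LOG.append({
        "ts": time.time(),     # when the interaction happened
        "analyst": analyst,    # who made the prompt
        "prompt": prompt,      # what went to the model
        "response": response,  # what came back, for later validity review
    })
    return response

def fake_model(prompt: str) -> str:  # stand-in for a real model API call
    return f"summary of: {prompt}"

audited_query(fake_model, "analyst-1", "Summarize alert 4711")
print(AUDIT_LOG[-1]["analyst"], "->", AUDIT_LOG[-1]["response"])
```

A wrapper like this also gives you a natural choke point for the sanitization step discussed earlier, before the prompt ever leaves the environment.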

All right. Anybody else? I mean, we've got the ball rolling. Thank you very much, sir. See, he asked three questions in a row. Everybody's like, "We can do that." It is allowed. What about the left side? Only answers? Yeah, there hasn't been a question from this side. Yeah. Always clear. Yeah. >> Yeah. That's how it goes. Like half of the room is, oh, we knew that already, man. And again, we are in a country whose minister of justice is heavily in favor of giving all the public information from universities to develop AIs. And as a former college professor, I'm not certain AI deserves to read, like, every thesis that's in the library. But just

saying, you know, I mean, it's AI's problem. Going once, going twice. Okie dokie. I'm not going to torture anymore. AI is going to do it for me. Thank you very much, Mario. >> Thank you. Thank you.