
AI in the CTI–IR–CISO Triangle

BSides Zagreb · 2026 · 1:00:11 · 30 views · Published 2026-03 · Watch on YouTube ↗
About this talk
A panel of incident responders, threat intelligence experts, and a CISO examine how AI reshapes cyber threat intelligence, incident response, and strategic decision-making. The discussion covers real-world use cases—early detection, triage, investigation, noise reduction, and attribution—while openly addressing over-reliance on AI, model bias, accountability, and emerging regulations like NIS2 and the AI Act.
Original YouTube description
Panel: Artificial intelligence is no longer an experimental add-on for security teams — it has become an operational reality. However, the true value of AI in cybersecurity does not lie in “magical detection,” but in how it connects cyber threat intelligence (CTI), incident response (IR), and strategic decision-making at the CISO level. This panel brings together perspectives from the operational front line and the executive level: an incident responder from the Microsoft DART team, seasoned CTI experts, and a CISO who must translate AI into measurable business value. The discussion will focus on real-world use cases: how AI helps (and where it hinders) early detection, triage, and incident investigation; how CTI teams use AI to reduce noise, support attribution, and anticipate adversary behavior; and how CISOs balance automation, risk, accountability, and regulatory requirements. The panel will also openly address uncomfortable topics: over-reliance on AI, false confidence, model bias, accountability for AI-assisted decisions, and the impact of emerging regulations such as NIS2 and the AI Act. The goal is not to promote tools, but to clarify how AI is reshaping relationships, responsibilities, and expectations within the CTI–IR–CISO security triangle — today and in the near future. Recorded at BSidesZagreb (https://www.bsideszagreb.com/). #cybersecurity #bsides
Transcript [en]

I hope everybody had some coffee. This is our last topic before lunch. So it's gonna be a panel discussion of the five of us. We have spoken a lot about AI and what AI can do. We heard about threat intelligence. So now let's try to see what else AI can do. Let's investigate, let's say, the triangle between incident response, threat intelligence, and decision making in security in general. We have diverse panelists who will share their experience. So please welcome, in ROT13 alphabetical order, Vladimir Ozura — he's a principal security researcher from Microsoft's most expensive team you never want to call. He has extensive experience in incident response, so he'll share his insights from that perspective. Thank you.

Please have a seat. Then we have Bojan Alkazavovic — he's a principal threat intelligence specialist from Diverto... oh sorry, Marlink... Infigo. Now from Infigo. He specializes in cyber threat intelligence, so he will give us his perspective on that side of the coin. Then we have our keynote speaker from earlier, Vlatko Kostojak, aka Kost, VP of research at Marlink Cyber, who, as you can see, has many talents. He now leads the research part of Diverto — well, Marlink Cyber, as he said — and he has experience in all things mechanical, electrical, and otherwise cyber. And last but not least, Herve Egelman, who is the CISO of Span.

So he will give us... he has experience both in incident response and in containment and eradication. Of course, not at Span, but at other companies that had problems, right? And he'll share his experience with us. So guys, to set the stage: a lot of talk has been flying around about where AI is now and how mature it is.

Most would say we're still in this hype cycle; there is still innovation happening basically on a daily basis. So are we still in this innovation part? Have we reached maybe the peak of inflated expectations, or are we now falling down into the pit — the bottomless pit of the trough of disillusionment? Or are we slowly rising towards the slope of enlightenment, potentially reaching the plateau of productivity, or not? Let's see which use cases AI is good for and which it is not. So to start off, can each one of you, in short, like two minutes, describe the last time AI meaningfully changed the outcome

of an incident or a security decision or anything that you were involved in and where maybe it got in the way and made a mistake. Let's start with you. Well, can you hear me? No.

Push the button. How about now? Yes. How now you can hear me?

Basically every day right now I use it, and it changes my life meaningfully. I really cannot imagine writing anything without AI, right? Including the answers to your questions. Okay. So for example, writing policies and procedures without AI is a pain. But with AI, especially if you have to cross-reference them, it became much, much easier. Where it hinders? Well, I'm swamped with AI slop every day from every kind of vendor that is out there, especially in CTI. Everybody is now a CTI analyst. And it's best case useless, worst case misleading. I understand. Kost? I don't know if this is turned on. Yeah, it is. It's working OK. Closer. Yeah, but you know. This is humanly operated, so you have to hold it.

In short, I think I already demonstrated what someone who knows how to use AI can actually do in a very short time. But on the other hand, there is a problem with those who don't know how to use it, which we saw mentioned a lot — AI slop, for example. Now when you submit a bug, they all first ask: hey, is this AI generated? Why is this happening? I mean, even in the past when you submitted a bug, they were very, you know: ah, it's nothing, right? It doesn't affect us, impact is low. Now you have an even further gate: okay, is this AI generated? And I will not

look at it at all, right? So we have this point where nothing is trusted anymore, which is kind of good from the security point of view. But on the other hand, you have gates, and this slows things down a lot. So this is where I think something needs to be done. From my viewpoint, if we are using this to find vulnerabilities, why not, for example, have it try this once as well? Then it will be much faster for those who are doing it, and the gate will not be so slow. Yeah, okay. So you'd use AI for both sides of the coin? Yeah. Vlado, what about incident response and... Does it work? Yes. Okay. So in terms of incident

response, where it really helps is when it comes to, like, ransomware cases that I've seen in the past. It would basically stop those ransomware cases. I'm not saying it's going to do it for every single one out there, right? But it is definitely good at picking up the attack path and then blocking the threat actor in certain stages of the attack, to basically have a customer that is gonna have just a bad day instead of the worst day of their life where everything is encrypted and all that stuff. So I think that's where it definitely helps when it comes to the containment part of it. Again, it's not perfect, but it does help a lot in those cases. Where it

kind of hinders, I would say — I had a recent case, it was a business email compromise case. And those cases are typically solved by identifying, like, impossible travel, right? Because the threat actor is logging in from a different location. But the customer still had ADFS, Active Directory Federation Services. And that is something that basically did not fit well, and the models, or the identification of those impossible-travel alerts, were just all over the place. So the customer was not able to see: okay, I have to focus on these things because those are the ones that are actually related to the business email compromise case. They actually had so many of them that

they just said: okay, we're just ignoring it. Okay. And Bojan, for cyber threat intel? Thank you. So at least from our perspective, the intelligence job usually starts with data collection, right, and producing information, et cetera. This is the place, or this is the step, where AI — and I will use this as an umbrella term for LLMs, machine learning mechanisms, and AI in general — is the most useful tool, because

nowadays I don't need a ton of programmers in my team to write a very simple script to scrape a web page and continuously collect any kind of data that we need for our investigations. But at the end of the story, when you have to analyze something — I mean, analysis is the art of asking the right questions. That's the essence of the intelligence job. Here, I just spoke with the guys five minutes before our panel: I asked the same LLM the same question five days in a row, and every single day I saw a different estimation, a different assumption, et cetera. So this is a pretty bad junior analyst. So this non-deterministic nature is basically hindering your trust in AI?
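The collection step Bojan describes — a very simple script that scrapes a page and pulls out indicators — can be sketched roughly like this. The page content and the choice of IPv4 addresses as the extracted indicator are illustrative assumptions, not anything Infigo actually runs:

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML page, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def collect_iocs(html: str) -> set[str]:
    """Extract candidate IPv4 indicators from raw HTML."""
    parser = TextExtractor()
    parser.feed(html)
    return set(IPV4.findall(" ".join(parser.chunks)))

page = "<html><body><p>Beacon seen at 203.0.113.7 and 198.51.100.9</p></body></html>"
print(sorted(collect_iocs(page)))  # → ['198.51.100.9', '203.0.113.7']
```

In a real pipeline the `page` string would come from a scheduled fetch, and everything collected would still go through the analyst verification the panel keeps insisting on.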

Yes, and if you ask him — him, right? Them? Yes, them. It depends how you declare them. If you ask them to go in reverse, to make an argument mapping for the decisions they make, it is impossible, because the nature of AI is a black box — still a black box for most of us, unfortunately. Okay, so obviously it does hinder in some cases, and we do have some trust issues with it. It's a junior sometimes. Sometimes it can chew through a mountain of data but still doesn't compute well. So if we had to rank it: do you consider AI like a copilot — no Microsoft pun intended — or something more of a filter or data enrichment tool that just fetches data, or a data miner to mine

data, or just decision support, or just an experimental thing — how would you rank it? So let's make this a poll. So everybody, including you: who considers AI now basically a copilot that helps you run things? A lot. OK. How many of you just use it as a fancy filter, a search engine? OK. Data enrichment? Okay. A data miner to get through a mountain of data? A lot. Who thinks this is just experimental and would not use it for anything but child's play? No one. Okay. So I guess it's not that immature if we are not considering it a toy anymore. Okay. Let's stick to AI in detection and triage. So, Vlado, I would start with you.

So, as we said and have seen before, AI brings machine-level speed with at least the promise of human-like intelligence. So is that perfect for triage? Is that perfect for detection? Has AI in your cases truly reduced the noise in detection, or has it basically just made more noise, in your experience? Okay, so in my experience, the detection side of things is getting better, and I think this is where it's actually most valuable, because it's reducing the noise down from, you know, a ton of alerts, a ton of events that are happening, correlating everything. And this is where you would typically see the most value, because everything after it depends on that

kind of initial detection, right? Summarizing logs and things like that, triaging — yes, it does help. There is a problem at the moment, from my experience, in terms of scope. So, defining what are the compromised users or the compromised devices — it doesn't do that to the full extent. Like, you will not get the full list of that. And then, it is very good, I think, in containment. So in those cases where it's detecting, like I said previously, the attack path, and it's able to contain the user or device or whatever it might be. And it's basically moving towards even predicting, right? So this is kind of the next

stage, where it's predicting what the attack path is gonna be, and what it needs to do to prevent, for example, a threat actor from getting to a critical service or whatever it might be. So based on the TTPs of known threat actors, if it identifies one attack vector, then it basically knows what's next? In a way, yes. So it kind of learns from past incidents, TTPs, all of that. But again, you need to have that initial detection to feed into the system to be able to do it. So to me, those are probably where it's most valuable. Where it's not valuable is if you have an environment that is like — we recently did an incident where my colleague described

the environment as "everything is everywhere." So you basically have everything you can imagine, everywhere around — a very messy environment. Then you will get a lot of false positives, and what it generates is not going to be valuable. Yeah, okay. Herve, Span has a managed SOC, so you're an MSSP for others. So how do you validate AI-generated detections? Does that increase or decrease your SLAs? You know, is this then fast enough if you have to do additional triage for the AI stuff? Are you still faster with AI than without? I would say that it kind of depends, right? Vlado hinted at it. You have to have the

basics right. If it's all chaos, AI really doesn't help, it just amplifies the chaos. I'll give you two limitations there. So if you have, say, a SIEM that's collecting all events, and then you apply AI on top of that, AI will be only as good as the data that you collected. And we run into problems there: in most cases, those SIEMs do not collect useful data, because the vendor's incentive is to make you pay for more events per second. So you collect basically everything and you run into bottlenecks. We have customers that do not have the bandwidth to push all those required logs. So AI kind of becomes useless, because you don't have those logs. I would much

rather see proper detection engineering — pick and choose the proper events to forward to the SIEM, and then maybe apply some machine learning. But that was the truth before AI. Sorry, that would... But that was the truth even before AI. That was the truth. That's kind of the point: AI amplifies what you already have. The other limitation we run into is if you have an agent on an endpoint and an AI doing something with it, the opposite motivation is true. The vendor does not want to collect much data, because they own the compute cost, right? So you sometimes end up with blind sensors. And even when those alerts do trigger, it kind of functions like a honey token. You don't have

enough data unless the model is transparent and you know how it got to that point — but usually you don't, and you have to reverse engineer what happened. So it sometimes actually slows you down, right? That's what we see. If it's properly done, it can be faster. But in most cases that we saw, especially when we arrived at a new customer, AI just slows you down. If I can jump in here — regarding detection, I think LLMs can really be used in detection. Most of you know I'm also into honeypots and deception systems. And here it actually helps to generate the fake content — you know what LLMs are best at: very realistic content. So believable that you can

actually have it as a honeypot which will actually trigger the detection. So here it helps — but it also helps in threat intelligence, because you have these honeypots which look very believable and you can generate them quite quickly to gather intelligence as well. So, using the LLM for what it's really being used for right now — generating, right? If you're into honeypots, don't miss the lecture about ADCS honeypots after lunch. While we're still on detection, and then incident response: so, Vlado, you respond to incidents for your known customers, or they even call you up when things really get tough and you need to dive in very quickly and do something.
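As a rough sketch of the honeytoken idea Kost describes — believable fake content that exists only to trigger detection when touched — here is a minimal fake-credentials generator. The file name, key formats, and the file-access alerting mentioned in the comment are illustrative assumptions; in practice the believable surrounding content would come from an LLM:

```python
import os
import secrets
import tempfile

def make_fake_env() -> str:
    """Build a believable-looking .env honeytoken; values are random, never real."""
    return "\n".join([
        "AWS_ACCESS_KEY_ID=AKIA" + secrets.token_hex(8).upper(),
        "AWS_SECRET_ACCESS_KEY=" + secrets.token_urlsafe(30),
        "DB_PASSWORD=" + secrets.token_urlsafe(12),
    ])

def plant(directory: str) -> str:
    """Write the honeytoken to disk and return its path.

    In a real deployment, file-access auditing (e.g. auditd or Sysmon rules
    on this path) would raise an alert whenever anything reads the file;
    here we only create it.
    """
    path = os.path.join(directory, ".env")
    with open(path, "w") as f:
        f.write(make_fake_env())
    return path

# demo: plant into a throwaway directory
path = plant(tempfile.mkdtemp())
print(os.path.basename(path))  # → .env
```

Any legitimate process has no reason to read a planted file like this, which is what makes a hit on it such a high-fidelity signal.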

So in your experience, in which stages of your engagement, of incident resolution, do you find the most value? For initial detection, summarizing logs, some kind of triage, containment, maybe eradication if you know what is happening, other data enrichment, or later, to build timelines, reports, post-incident notes? For which part of that whole chain do you get the most value out of AI currently? So I kind of touched on this already. The first part is basically the detection mechanisms, right? That's the first thing. Log summarization, yes. Scoping — I kind of said it's probably not the best. What we are currently also seeing is with the incidents that are generated — we know that there's a lot of

incidents that can be generated, like alerts and incidents. The AI can actually go back and reevaluate those incidents: do the triage, enrich them with data and all of that stuff, and then have the analyst get that information up front with all of that enrichment done. But it can also prioritize the incidents so that you're not going through 700 of them — you're going through the top 10 that have already been enriched and that have been identified as the ones you should be focusing on. So I think that piece there. I did mention containment, in a way, in certain circumstances, and for those circumstances, yes, it's fairly correct in doing it. I don't know of many cases where it

actually shut down the whole company because of that. Where it's not the best — well, first of all, report writing, yes, that too. Anything text-based: I can't write, I'm not a good writer, so I use AI mainly for that piece as well, to write good reports. But I would definitely read through it before you actually ship it over. And then, in terms of where it doesn't do the proper thing, where it's not the best — it would be eradication, because at the moment you can't be certain that it's gonna do the right thing. Like, if it says, hey, I need to, you know, shut down these 50 servers — it doesn't know whether they're the

critical servers, or what the business impact of that is going to be. So, yeah, those would probably be the bits and pieces. "This service principal name was involved, we just shut it down." That's, like, the main account for, you know... Yeah, that I wouldn't trust without, like, a human behind it saying: yep, go ahead and do it. Speaking of eradication and steps which you don't trust — Herve, Span is an MSSP, so you also respond to incidents, or you run a managed SOC. So in the whole, you know, kill chain, all of the steps that have to be taken within an

incident — is there something that your clients say is off limits for a third party? Is that eradication? Is there something else that is completely off limits — they will do it, "just tell us what to do"? Eradication, definitely. It's hard for the AI to know the business context. I mean, it's hard for us incident responders to know the business context; we have to ask the client about that. So that's definitely off limits.

For the rest of it: go for it, AI can do stuff. But you, as an MSSP, without AI — you can do everything except eradication? Yeah, definitely, we can do everything except eradication. In some cases containment may be tricky: you don't want to disable the whole network, right? But if it's a single machine or maybe some users, yeah. So, like, quarantine a laptop, that's fine; quarantine a server; but the domain controller, not really? Not ideal, not ideal — domain controllers, just ask. Yeah. Actually, I think that was probably one of the first use cases we had with automation in general in the SOC: automating the whole process around compromised accounts, right? How to disable them, how to enable them, communicate and reset passwords. That was the first. So do you feel that,

let's say, a customer will grant you more leeway in what you can do than an AI? Or, if they're empowering their own organization with AI, are they now maybe willing to give more autonomy to AI than to a third party?
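The boundaries discussed above — quarantine a laptop freely, a server with approval, never auto-touch a domain controller, and keep eradication on human command — can be written down as a simple policy gate. The action and asset-class names here are invented for illustration, not any product's API:

```python
from enum import Enum

class Oversight(Enum):
    AUTONOMOUS = "human on the loop"  # AI acts; humans monitor
    APPROVE = "human in the loop"     # AI proposes; a human must approve
    HUMAN_ONLY = "human in command"   # AI may only recommend

# (action, asset_class) -> required oversight; an illustrative policy table
POLICY = {
    ("quarantine", "laptop"): Oversight.AUTONOMOUS,
    ("quarantine", "server"): Oversight.APPROVE,
    ("quarantine", "domain_controller"): Oversight.HUMAN_ONLY,
}

def required_oversight(action: str, asset_class: str) -> Oversight:
    """Return the oversight level a proposed response action needs."""
    if action == "eradicate":
        # eradication is always off limits for automation, per the panel
        return Oversight.HUMAN_ONLY
    # default to the strictest level for anything not explicitly allowed
    return POLICY.get((action, asset_class), Oversight.HUMAN_ONLY)

print(required_oversight("quarantine", "laptop").value)  # → human on the loop
```

The deliberate design choice is the fail-closed default: an unknown action/asset pair falls through to "human in command" rather than to autonomy.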

I don't think they care, actually. Yeah, I think it's just: make it work, don't interrupt us. I don't think they really care. Of course. Tell me something: based on this level of maturity, what kind of human oversight for security is, in your mind, good enough? Is it human in the loop for decision making? Is it human on the loop, just oversight? Human in control, where they can always shut things down? What level of autonomy? We all saw — well, most of us saw — the movie WarGames with Matthew Broderick. They made this powerful computer to push the buttons and launch missiles from the silos, because they didn't want people to push the buttons, because people can

fail. And then this computer almost destroyed the whole world. They were theoretically in control, but they weren't actually — in that movie, at least. Yeah, the question is which human, right? If you see what's happening in the world, definitely a good question. But on the other side, I think the guys already explained what makes sense. Detection and triage is definitely human on the loop — so that means monitoring what is happening, right? For any kind of containment, or some decision that is reversible, human in the loop, right?

And for some strategic decisions — decisions which will have impact, like shutting down a server or something like that — it's definitely human in command. I think that's definite. The problem I see with persons, people today, humans, aliens, I don't know — is that you start trusting it very quickly for very serious stuff. So I think here we need to take a step back and have critical thinking in place. I agree. I agree. Okay. To pivot towards cyber intelligence and OSINT in general: recently I heard the Nobel laureate Geoffrey Hinton — he's regarded as the father of AI; he was actually awarded the Nobel Prize in Physics in 2024 for his foundational discoveries regarding machine learning with artificial neural networks — and he said that AI

gets a thousand times more experience than humans, just by the way that neural networks and deep learning work. You have to basically ingest a mountain of data to make all these synapses align — to make this an elephant, you have to align all of the parameters, whatever it needs, for them to classify this as what it is. So it has much more experience than us and has a mountain of data to go through. That seems like a perfect thing for cyber intelligence, where you have a mountain of data, as we saw with our colleagues from Group-IB. So Bojan, tell me — you worked on this first, as I announced, and now at Infigo.

How do you see it? I mean, cyber intelligence, as we heard from Igor, is all about separating noise from actionable intelligence. So do you see that you can actually separate this noise, or is AI creating more noise that you did not account for? Oh, unfortunately, at least from my personal experience, it will generate a lot of additional noise and misunderstandings. But what I saw so far, especially on the panels in Croatia — unfortunately, don't get me wrong — discussing this human-nature-versus-AI-nature behavior: usually, as panelists, all of us come from an engineering and technical background. But for such questions and answers we need anthropologists, psychologists, et cetera. So here on the panel we currently have a pretty narrow view of all of the

problems right now. So I cannot give you a right answer, unfortunately. But all I can say: you have to be cautious, and trust, but verify. That's the most important thing. Yeah. OK. Kost, what about OSINT? So there's a mountain of open source data that you can easily scrape. There's a lot of fake news, there are a lot of threats, there are a lot of things which are just plain stupid. Can AI help here? And what are our chances that we don't act upon unverified, very low fidelity, even fabricated information? Yeah, well — I think it was last year at DEF CON, about how we can use open source intelligence, you know,

take advantage of GenAI to actually, you know, look at all that data on the internet and extract from unstructured data — which the internet is, if you don't use APIs, which structure things. I think it can definitely help. And also, I think Igor from Group-IB showed what it can do with different languages. So that's definitely in use today.
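A minimal sketch of that extraction step: pulling structured indicators out of unstructured, even multilingual, text — CVE identifiers are an assumed choice of indicator — and keeping only what more than one independent source corroborates. The sources and report texts are made up:

```python
import re
from collections import defaultdict

CVE = re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.IGNORECASE)

def extract_cves(text: str) -> set[str]:
    """Pull CVE identifiers out of free-form text, normalized to uppercase."""
    return {m.upper() for m in CVE.findall(text)}

def corroborate(reports: dict[str, str], min_sources: int = 2) -> set[str]:
    """Keep only CVEs mentioned by at least `min_sources` independent reports."""
    seen = defaultdict(set)
    for source, text in reports.items():
        for cve in extract_cves(text):
            seen[cve].add(source)
    return {cve for cve, srcs in seen.items() if len(srcs) >= min_sources}

reports = {
    "blog_a": "Exploitation of CVE-2024-12345 observed in the wild.",
    "forum_b": "Продаётся эксплойт для cve-2024-12345.",  # non-English source
    "paste_c": "PoC for CVE-2023-99999 (unconfirmed).",
}
print(corroborate(reports))  # → {'CVE-2024-12345'}
```

The two-source threshold mirrors the basic tradecraft rule of verifying a claim against independent sources before acting on it; the single-source item is deliberately dropped.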

But I still think that this intelligence part is usable when it's traceable and verifiable, and there is someone who verifies it. And that means verifying from different sources. That's the alphabet of intelligence: you have to verify from two different sources whether it's really that, or really just noise. Yeah. Completely understand. Okay. So we discussed incident response, threat intelligence, open source intelligence. Now let's try to combine this — it touches basically all three of those domains. So I'm just curious: in the whole chain of what security has to do, is this actually improving how teams that have to work together actually work together, or is it getting

more complex? So for instance, you work on cyber intelligence, but Infigo has its own managed SOC, and they have to protect the customers based on some intel. Can you tell me: is AI helping to make this more operational, more actionable for your SOC — by helping write playbooks, detection engineering, some YARA rules, prioritization — something that is a direct result of your work, where AI can prepare something for them in minutes which a year ago might have taken you hours, days, or months? Oh yeah, definitely, especially when we are talking about low-level technical stuff. We are using it for finding errors inside different kinds of code, rules, et cetera. So yes, it is very useful for such activities.
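For the "finding errors inside rules" use case, cheap structural checks still pay off alongside an LLM review. A deliberately naive, hypothetical sketch for YARA-style rule text — the checks are generic and not a real YARA parser:

```python
def lint_rule(text: str) -> list[str]:
    """Cheap structural checks for a YARA-style rule.

    An LLM review would complement, not replace, deterministic checks
    like these: they are fast, repeatable, and never hallucinate.
    """
    problems = []
    if text.count("{") != text.count("}"):
        problems.append("unbalanced braces")
    if "condition:" not in text:
        problems.append("missing condition section")
    if "rule " not in text:
        problems.append("missing rule declaration")
    return problems

good = 'rule demo { strings: $a = "evil" condition: $a }'
bad = 'rule demo { strings: $a = "evil" '
print(lint_rule(good))  # → []
print(lint_rule(bad))   # → ['unbalanced braces', 'missing condition section']
```

In practice one would run the real rule compiler as the authoritative check and use the LLM for the semantic questions a linter cannot answer, like whether the strings actually match the intended malware family.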

You asked me about writing playbooks. Nowadays we are witnessing a ton of texts written by AI, online and in our companies. But the main question is: is it logical, from the perspective of numbers, to generate a large number of texts very quickly? Who reads them? Do we need that? What's the point of all of that? If AI can help you write a book of 200 pages and nobody uses that book, what's the point? Tell me something: last year my team in Infobip did some purple teaming, some adversary simulation. It took quite a while to profile the adversary — we did Akira, we did Sotafun — and actually make that a proper

adversary emulation — get the samples, get everything into vector — the preparation was huge. So yeah, now we're gonna be trying AI for all sorts of things. Do you think — so you have profiles of threat actors — is it such a big leap from the profile, from their TTPs and samples, to something that you can basically use for purple teaming? You mean in short-term campaign planning, right? Yes, it could be useful, but yet again, mostly for the technical part of the game. But if you need to understand intent, the expectations of the attacker and of the group, you need the context. That's very important in intelligence, to have context. And that's something where AI unfortunately isn't good enough to prepare a

campaign, unfortunately. Vlado, you already said that you would use AI to generate reports, but you would check them. I'm just curious: what would you check? The completeness, the correctness, whether it's pretty enough? What would you check — what don't you have good faith that AI will do well? I would say quality is probably the main thing. So like I said, I don't like writing, okay? And this is what I use AI for most often. Whether it's, I don't know: write me this form; I'm thinking of doing this training, write this for me; I need to summarize something; or I need to actually put it into paragraphs from a couple of

bullet points. Once I get the output, I would definitely go read it and make sure that it's what I meant to say. So this is what I use it for. So quality is not just probably at the top — it's absolutely at the top, because if you're providing non-quality data, non-quality reports to customers, then, like, what's the point? Yeah, I understand. And Herve — a quick question. If you're already using AI, and you are using AI also for customers, do you need to have something which makes AI-related decisions defensible, in a sense, towards your board or their boards? Or are they accepting this as just a new piece of technology like any other — if it works, it works, and they don't ask for any more evidence that this

works than for any other previous technology? No, no, no, they don't. To them, it's not that interesting. They already assume we are using AI. I mean, not assume — they are pushing us. I haven't met anyone in an executive position that doesn't push AI all over the place, right? They want to use it. But still, they are interested in risk, right? They want to talk risk. They want to talk about business cases. They want to talk about return on investment. Of course. So it's just another tool for them, an efficiency tool. An efficiency tool. Okay, now we've reached this governance part, ideal for a CISO-level discussion. And since you're the only CISO here besides me, I think we have to flip the switch and the tables.

So Kost, why don't you ask us these questions? Sure, sure. Okay, they will be hard ones.

So, Herve: when an AI-assisted decision leads to a wrong call — a missed breach, a wrongful escalation — who is accountable? You? The analyst? The incident response lead? The vendor? I don't know — AI? So just blame it on AI? Or is it DNS? It's definitely DNS. Ultimately, legally, it's always the board, right? But I am acting for them, so you can say that I'm accountable. But that doesn't let everybody else off the hook. When we're looking for vendors, if I'm able to choose, I've always picked the vendor that can explain their AI model, where we can tune it, where we know how the decisions are made. The analyst is responsible for verifying AI output, definitely, right? And

the incident response lead is accountable and responsible for verifying what the analyst did and what the AI output was. So: ultimately the board, then myself, then everybody else. Okay. Okay. So, Andro, this is a special question for you. Oh, God. It's not here.

What should be documented for AI-assisted — I mean, AI-assisted incidents. We are doing AI-assisted analysis on incidents, and we want to satisfy the auditors and regulators that we've done the right thing. So what should we document when we are using AI for such things? Oh God, I hate auditor questions. Well, I would agree with Herve: this is just another technology. Auditors have already understood in the past that we are not using manual labor for everything — there is some kind of automation, there are tools that have detections. So, in my mind, if they are okay with us having a vendor with some kind of detection engineering embedded into that tool, and something is flagged and something is not, they're just looking

at whether, if something was flagged, someone is actually triaging it as a human. So they're not asking so much whether this tool flags everything; for the things it does flag, are you checking them? Are you doing your part of the diligence? If you missed something even though you checked everything, then they're blaming the vendor — and then you, again, for actually buying this tool, which is not good. So I would say — I have not yet received any auditor requests for evidence of AI, how it is processing, whatever. But I would suspect they would just want these things documented: have this be an approved technology, and whatever it flags, you check for false

positives and true positives. And in the end, a feedback loop: if something turned out to be an incident because everything was missed by the technology, then you change the technology, I would say. So I'm not expecting something much, much different than for any other machine-learning-based security solution that we've used for a decade or more.
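One hedged sketch of the documentation this implies: a reviewable record per AI-assisted triage decision, so an auditor can later see what the model flagged, who verified it, and the outcome. All field names here are invented, not any framework's required schema:

```python
import datetime
import json

def audit_record(alert_id: str, ai_verdict: str, analyst: str,
                 human_verdict: str, note: str = "") -> str:
    """Serialize one AI-assisted triage decision as a JSON Lines entry."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "alert_id": alert_id,
        "ai_verdict": ai_verdict,        # what the model flagged
        "analyst": analyst,              # who did the human check
        "human_verdict": human_verdict,  # e.g. confirmed / false_positive
        "note": note,                    # free-text justification
    })

line = audit_record("INC-0042", "malicious", "j.doe", "false_positive",
                    "federated ADFS login, not impossible travel")
print(json.loads(line)["human_verdict"])  # → false_positive
```

Appending one such line per decision gives exactly the feedback loop described: the stream of human verdicts versus model verdicts is also the evidence you would use to decide whether to change the technology.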

OK. I'm hoping, at least. So, Bojan, have you seen cases where teams started blindly trusting AI? Not questioning the output, just: oh, looks great, we'll follow it, it looks very trustworthy. Okay, so the answer is very short, fortunately: no, I haven't seen that so far, and I hope it will stay that way in the future. Do we need to train this skepticism, or does it come by default? We need to train the skepticism, definitely. It's part of critical thinking, which is a very important skill nowadays for all of us. Very, very important. Should it be taught in schools as part of AI education? Yes, probably. Where is the right fit? Elementary school,

definitely. As a starting point, definitely.

For you again, a tough question. Okay. Has AI changed the skill profile you look for when hiring? I think it has changed hiring a lot. We hire a lot, so every year I have a bunch of interviews, and for the longest time we had a first interview, then a written assignment, and then the final one. We have seen, first of all, that those written assignments that used to show how capable someone is can no longer be trusted, because a bunch of the results are AI generated.

Some people don't even know what is in there. Some candidates know exactly what's on the page, but if you deviate just a little to the left or right, it all falls apart. So AI has changed how people come to interviews and how much they can produce in writing. Basically, we are now switching back to more verbal, quick-on-your-feet, real-time things. But in the end, we're also questioning: if someone is super efficient with AI, should this be a blocker? If he is able to do all of that with AI, is that person bad to hire? I mean, we want to train him to know more, but if he's more efficient with AI than me, he

can do the day-to-day stuff. So it does change things. I would think that in the future, one big part of hiring will be how proficient someone is with AI. We will probably still have some kind of benchmark for basic skills, but this is definitely a factor to factor in for new people. If they can do it with AI, why not? Are we coming to the point where there are people who cannot do anything without AI? I don't think we're there yet, but it might come to that. Then it's a decision whether you just need a driver or you need an engineer: he just drives the truck but doesn't know how to change the oil. I don't

know. I would always prefer someone who knows how to change the oil, because if something goes bad, someone has to add oil to the engine. But I think things are going in that direction. Nobody requires you as a Java developer, for instance, to know assembly. The code compiles into Java bytecode and it works; nobody checks the bytecode after it's compiled. You trust the compiler. We don't trust AI yet, but it will come to a point where we trust it so much that, ah, if you know how to talk to AI, then it's okay. We're not there yet, I'm sure, but it's going in that direction. So that means we will soon be bragging about people who cannot do work without AI. It's

similar to how previous generations bragged: we cannot fix the car ourselves. Exactly. Exactly. Okay. And for you, the last question. Nice. What do you expect regulators will ask about AI use during post-incident reviews in the next few years? I would say, luckily, I don't have to deal with regulators; that's the first thing, and I'm happy about that. But I think we already touched on some of these points. It will be: where was AI used in the actual incident? Who was the decision maker? Because we go back to the accountability part, which you talked about, Hrvoje. AI cannot make decisions, pull the

trigger, and be accountable. So there has to be a human behind it. You will have to produce the audit trail, so you'll still have to have logs: what did the AI recommend? Who took actions based on those recommendations? When were those actions taken? The other part that I believe organizations are forgetting about at the moment is DLP, or data in general, because AI has access to a ton of data, and I believe organizations at this stage are not ready; they don't know what access the AI has and to what data. So if we're talking about something like PII data being accessed: who accessed it, was it through AI, and all of those questions. So that would

probably be something they'll be looking for. Yeah, but for example, I see a lot of people feeding in different inputs and getting outputs, and so on. Do we, for example, need to log the temperature we set when asking the AI, or other parameters? It depends on the case, I would say. This is a case-by-case thing, but you would need some form of logging, like we have logging for everything else, like authentication logs. Then we come to the problem Hrvoje mentioned: the sheer amount of data you have to log. So knowing what you have to log

exactly: do you log the temperature, the prompt somebody asked, and what the AI gave back? It's going to be a challenge to store that somewhere so you can pull it when you need it for auditing purposes. But then again, we have the loop: asking to store it and then asking AI to analyze it, right? Yes. But we'll leave that for another time. Since you can see I'm bad at this one, I will hand this back to Andro. OK, thank you. Thank you for your assistance. To wrap this up, let's shift gears and look at the offensive side. Adversaries are using AI against us. There has been a lot of

talk about weaponizing AI, first for massive and better phishing, recon, and social engineering, and now vibe coding entire malware, using Claude Code or similar tools as a new attack vector, leveraging it as the next generation of living-off-the-land attacks. So are we actually seeing this? Maybe I want to start with you: based on the TTPs of adversaries, are you seeing adversaries using AI more and more, and in which of these weaponization domains? Oh, definitely. What we are currently seeing

is a pretty big increase, let's say, in financial fraud especially, because with AI it's very easy to adapt a phishing mail, for example, to individuals, plus deep fake techniques built into financial fraud. All you have to do is go to YouTube, record a part of someone's voice, plus a photo, and you can very easily generate a disturbing video to use for social engineering inside a financial fraud. So financial frauds are definitely the predominant

techniques, at least in Croatia, and this is the pain point especially for financial institutions and their clients. Vlado, there's been a lot of talk about how threat actors can persist for days or months in a company, or can get in, get root access, and escalate to domain admin within minutes: domain admin before lunch. So with AI, is this getting faster, or is it basically the same from your experience? When you look at the timelines in your reports, are they shrinking or expanding? I would say, well, it depends on the actions on objectives of the threat actor: what are they after, right? I think domain admin before lunch is still a thing and it's going to stay a thing. And

this is not something that, in my opinion, AI is going to solve in the short term. Because if you give domain admin to everybody, AI can point it out, but if you don't do anything about it, it's still a problem, right? That's one thing. So depending on the threat actor, if they are after domain admin and ransomware activity, that's still going to be a thing. AI is used in producing the ransomware, in developing the malicious code that is there to exfiltrate data, but in a way where the malware is actually using AI itself: it's querying the AI, saying, hey, give me the commands I will use to exfiltrate

the data, instead of embedding those commands in the malware itself. We've seen that in the wild as well. So this is where AI is definitely used for circumventing defenses, and also for speed. The other side of that coin is that you have malware which is itself produced by AI. This gets done very quickly, as opposed to having full-time coders working on it. Again, as his presentation showed, you can shorten the time of finding something to a couple of hours, something that took what, days, months? It usually takes days or weeks for such things. Exactly. In the past, for me at least. Maybe someone is faster. Yeah. So

the other part, I would say, is identity, or access to data. You don't have to get domain admin to get access to data. And we've seen that with all the exfiltration; there was a recent case, I think back in maybe October or November last year, where AI was used to basically do the whole attack: to plan out the whole attack, and then they just carried it out and got access to something like 150 gigabytes of data on the people of a certain country. So this is definitely speeding things up, absolutely. Regarding speeding up: for like a decade now, we have been saying that it's very asymmetric between the

attackers and defenders. We have to defend against all the attacks; we have to be successful 100% of the time. They can be successful 1% of the time and they're in. They can do drive-by shootings, in a sense, cyber shootings, and if they get in, they get in. So is this still very asymmetric even with AI, or are we fighting back? Is it canceling out now that both sides are using AI, so neither of them has the advantage, or do they still have the advantage? I wouldn't say it was asymmetric to start with. I mean, we can catch attackers at many layers. Attackers have to be right most of the time, and we can catch them, right? The honeypot stuff we talked about is a great example of that.

I'd say that both sides have new tools, so we can scale at both ends. I don't see much innovation happening, only scale, and both sides have that same advantage. So I think we are even, or we will be even soon. My gut feeling is that the attackers are faster at adopting the technology, but we are getting there. Okay, I get it. Regarding that adoption, Bojan: you're automating a lot of stuff. In your sense, is there a fear of missing out in our community, that even though we might not know whether this is good enough, we just don't want to be left out, because the attackers are using AI and the vendors are pushing it? So we have to

do basically the same, not to miss out, because this is an arms race and we have to be competitive. So is there a fear of missing out, where you have to use AI just because of it? Yes, definitely. And I think, at least in my opinion, there are a lot of companies right now trying to justify big investments, forcing the usage of AI into every single corner of our lives, trying to solve every problem with AI, and this is impossible, right? So yes, I agree with your question. If I may, we see that all the time, right? Yeah, we do. They basically have KPIs that say AI, literally just AI. We had a talk with

some customers about Active Directory hardening, the tier model, phishing-resistant MFA, and they go: okay, but what about AI? That brings me to my next question. For more than a decade now, we have seen some technology become a thing and then, very quickly, even before it is mature, become a compliance checkbox. DLP, zero trust. What is zero trust? We don't know, but it's a checkbox; we have to put it in there. Raise your hands, do you agree or disagree: is AI also becoming a potential security checkbox? Yes. How about you? Is it becoming a checkbox? Yeah, that's my big pain. Okay, to close up. So,

if you want to send a message to everybody here: if we sit here in two years' time, where do you think, realistically, AI will have helped us the most? Where is it going, and where does it have the best capability to really influence security? What I see is that the speed of doing things, on both the offensive and defensive side, is really going to shorten, as you can see. The other thing: some time ago there was the infrastructure-as-code initiative, where you no longer looked at hardening the infrastructure itself but at hardening the scripts that implement it, policy as code, and so on. Now we will probably see this changing

with agents: we will be exchanging agents and then reviewing whether the agent is doing the right things. So I think this is where most of the focus will be, on exchanging these agents. But for everything else, I hope it will stay with the humans to decide, especially for any new things, new attacks. This is where I think it's still up to the humans, even in a few years. But let's see what the future holds. It's hard to predict, right?

So we want to prevent hackers from hacking us; we want to survive with all of this digital exposure that we have. If you could send one message: what should we stop doing without AI, and where should we put AI to defend ourselves better, from your incident response point of view? What is at least one thing we should definitely stop doing right now and try to infuse AI into? I'll give you an example and I'll say what, in my opinion, organizations should stop doing. We had a recent incident where the customer was not doing the basics he mentioned, in terms of a

tiering model, phishing-resistant MFA, things like that. And their question was: but you invest a lot in AI, so what do I need humans for? Shouldn't AI take care of that? So stop doing that. Stop replacing humans with AI. Humans are still there to make decisions, as Kost mentioned. Humans are still there to trust the AI but verify it, to make sure it's doing the right thing. So stop replacing humans with AI and just blindly trusting it. And in terms of what you have to start doing, I would say: educate everybody in your organizations, organization-wide, not only in security, because every person in the company is part of the

security of the company, on how to use AI responsibly. So those would be the two things. Famous last words. Absolutely. Everybody, give a big round of applause to Hrvoje, Vlatko, Vlado, and Bojan.

And now we have a well-deserved lunch, a two-hour break. Don't forget: back here in two hours, or on the fourth floor. Bon appétit.