Gen AI in SecOps: Hype vs Concrete, Practical Use Cases - Jason Keirstead

BSides Fredericton · 50:50 · Published 2024-11
About this talk
Gen AI in SecOps: Hype vs Concrete, Practical Use Cases - Jason Keirstead at BSides Fredericton 2024
Transcript [en]

Last talk before lunch: we have Jason Keirstead here to talk to us about generative AI and how we can actually use it in security to be better at what we do. I asked Jason what's something that most people don't know about him, and he told me that he once sat in the command chair of the Starship Enterprise. Please welcome Jason.

Thanks, everyone. It was a life-size replica of the bridge that they used to have in Las Vegas, and unfortunately it got torn down. It was really cool, because it was part of the whole experience and you weren't supposed to be able to go and sit in it, but it was my bachelor party, so they let me. So, I'm here to talk today about generative AI and LLMs, and like we were saying earlier, I know everyone's kind of worn out, but I really like to bring a grounded point of view. People who know me know I'm

a really pragmatic guy. I like to take the spin out of things. I've been doing this for a long time, and I know people have a hunger for: what can I really do with this stuff? Let's get past the marketing. That's what we're going to talk a little bit about today, assuming my slides advance. Okay. So, just a very brief bit on me and my background: when I graduated from university I joined a little company called Q1 Labs, way back in 2003. For people who aren't familiar, it was one of the

companies that got bought by IBM in 2011, and I stuck around with the company for a very long time. After about 20 years I exited, last summer; at the time I was CTO of threat management, so I owned the worldwide technical side of the threat portfolio: the SIEM, the EDR, X-Force, all those different products. Then I joined a company called Cyware. Cyware is a fantastic company that focuses on collective defense, which is an area I'm really passionate about, a passion project of mine. You can see the little logos scattered through the middle of that slide; those are all organizations

I'm a member of that are part of collective defense. What collective defense means: I'm really passionate about this idea that, as an industry, we all need to work together a lot more than we do. Break down the barriers, share more information, share more detections, share the threat content, share our knowledge. Over the past 22 years, talking with countless customers, I just see a lot of repetition, a lot of people dealing with the same struggles all the time. I could do a whole presentation about that, but we're not going to talk about it today. And now I'm with a startup called Simbian, where we focus

on this idea of leveraging AI to accelerate cyber security, and we do that by building AI agents. So with that, let's try to level-set first. What am I even talking about? What is AI? What does this mean, ChatGPT, how does this work? What are the basic concepts? What is a large language model? I think that'll be helpful. I'm not going to go into all the thesis-level detail, but it really helps to make sure everyone in the room has a fundamental understanding of how these things work. Fundamentally, LLMs are doing something very

simple: they're guessing the next word in any given sentence. You give it one word, and it looks across what's called its parameters, billions of them inside the model, trained on countless billions of pages of English (or whatever language) text, and it predicts: if I was looking at that word, what would the next word in that sentence most likely be? That, at its core, is what all LLMs are doing. That's also, by the way, why an LLM can't actually autonomously just decide to go and do things: unless you give it what's called a prompt, the question you pose to it, or the initial sentence, it

can do literally nothing. Everything is based on prediction, and until you give it a prompt there is literally nothing to predict from. So that, at its core, is what an LLM is. Now, there's another really important aspect of LLMs when it comes to cyber security, called RAG: retrieval-augmented generation. So what is that? If you imagine an LLM just predicting the next words in a sentence, that's kind of interesting; when ChatGPT first came out, that was what everyone was astounded by, this chatbot you could talk to. But there's a problem with that.
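Before getting to RAG, the next-word mechanic above can be made concrete with a toy sketch (illustrative only, not from the talk's slides; a real LLM predicts tokens with billions of learned parameters, not simple bigram counts):

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word follows which in a
# tiny corpus, then repeatedly emit the most likely next word. Real LLMs
# do the same kind of next-token prediction, only with billions of
# learned parameters instead of simple counts.
corpus = (
    "the analyst opened the alert and the analyst closed an alert "
    "the attacker opened a shell"
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    if word not in following:
        return "<unknown>"
    return following[word].most_common(1)[0][0]

def generate(prompt_word: str, length: int = 4) -> list[str]:
    # Without a prompt there is nothing to predict from -- exactly the
    # point made in the talk.
    out = [prompt_word]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return out
```

Note how `generate` cannot produce anything until it is handed a prompt word; that is the speaker's point about prompts.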

They are stuck in time, stuck at whenever their training data was cut off. That's why, when the initial ChatGPT came out, it could only answer questions up to around 2019 even though it was 2021: you'd ask it things and it would give you answers from the past, because it had no way to access current information. The solution to that was this technology called RAG. A RAG is a very simple architecture that lets an LLM access current, up-to-date information. I'm not going to go into the technical details of what an embedding model is and how all that works; just think of it as a layer

in front of the LLM: when you put in a prompt, a question, that prompt gets pre-processed and changed using current information stored in what's called a vector database, and that changed prompt is what actually goes to the LLM. That's what allows the LLM to give you current information back. For anyone who's used, say, ChatGPT Pro with the plugins, or Bing (Bing search has a RAG built in, so does Google Gemini, so does Meta's, and so does the search engine called Perplexity), that's why these search engines are able to go out to the internet and pull live information back.
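That RAG layer can be sketched as follows (a hypothetical stand-in: a real system uses an embedding model and a vector database rather than keyword overlap, and `fake_llm` stands in for the actual model call):

```python
# Minimal RAG sketch with hypothetical stand-ins: keyword overlap plays
# the role of the retriever, and fake_llm plays the role of the model.
documents = [
    "CVE-2024-0001 is actively exploited against edge VPN appliances.",
    "Our password policy requires 14+ characters and MFA.",
    "Phishing volume doubled last quarter according to internal metrics.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank stored documents by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    # The pre-processing step from the talk: the user's prompt is
    # rewritten to include current, local context before it ever
    # reaches the LLM.
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

def fake_llm(prompt: str) -> str:
    # Stand-in for the model call: just proves the context reached it.
    return "Answering using: " + prompt.split("Context:\n")[1].split("\n\n")[0]
```

The design point is that the LLM itself is never updated; freshness comes entirely from what the retriever injects into the prompt.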

This is how, at a fundamental level, all of this works. The reason I wanted to level-set on this is that it's very, very important for cyber security: if we don't have a RAG as part of our system, the system becomes almost useless. If we're going to use an AI model to do things like understand threat intelligence, help us triage an incident, or help us do vulnerability prioritization, that LLM has to be able to reach out and get context from your local environment, and this is how that works. I

just like to level-set on that, because it's fundamentally why a lot of these cyber-specific LLMs and AI models are important, and why you can't necessarily just use something off the shelf for a lot of these use cases: it doesn't have this piece. It's perfectly possible to build this yourself with open source as well, but it's complicated; you need a lot of expertise, and you'd have to be a lot smarter than me to go and do it. That's why people are starting to leverage commercial tools and frameworks that do all of that for you. All right, that's the level setting; now we're

all on the same page about what an LLM is and how it works. So let's get to the hype and the buzz around this. The first thing I like to get to the heart of: when a company or a product says "we're using AI," the first thing you've got to ask is whether they're talking about machine learning, or about AI as in generative AI and large language models, because these are two very different things. They're both incredibly important, but they tackle very different use cases. Machine learning has

been in use in cyber security for decades. What machine learning means is: you take information in, build a model around the patterns in it, and then make predictions based on that. That's how all network behavior anomaly detection works, it's how EDRs work at their core, it's how user behavior analytics works. It's all based on what I'll call "traditional" machine learning, even though there's still lots of innovation in this space, and like I say, it's incredibly important today. So don't take me using the word "traditional" as saying that's yesterday's news; it's still incredibly important, but I'm just trying

to distinguish it from what people usually mean when they use the word AI today, versus what we call machine learning. The reason this is important is that up until December 2022, when OpenAI launched ChatGPT to the world, everyone was already using the word AI, but they were using it to refer to machine learning. Now people are using the word AI to refer to something else, and these terms get intermingled and quite confused in the marketplace. So when you're looking at a product and they're talking about how awesome their AI is, one of the things you should try

to understand is which one they're talking about, because it matters. If your product is, say, a firewall, an IPS, an IDS, or an EDR, you need that machine learning, and in fact I'll argue strongly at the end that LLMs are near useless for you there. Anybody who expects that large language models are going to help them detect threats in their EDR, or detect threats on their network, is smoking something. They're not made for that use case, they can't operate that way, and it's way too expensive. I'm not sure who in this room has experience with, say, the Pro plan for ChatGPT or the OpenAI APIs, and how expensive it is to

call these things on a regular basis. If you're calling it, say, dozens of times a day to look up information, imagine trying to call into these systems hundreds of thousands of times a second: that's what you would have to be doing if you were trying to detect threats on an endpoint or a network. It's completely unrealistic; they're not built to do that. In fact, what I'll call traditional machine learning is much more suited to that use case. So I don't believe large language models are going to have any kind of fundamental impact on detecting threats for a long time (when we get to the

end slide, I'll come back and say where I was wrong about that). Third point: there's a lot of hype around how important these LLMs are going to be in changing the industry. I don't have a crystal ball; I can't make predictions. I will say there's an incredible amount of venture capital funding flowing into this space, with new companies starting up all the time focused on this area. One of the reasons is that it's viewed as being able to help tackle what's called the skills shortage in cyber security, and there's a lot of skepticism in the industry about how real the skills shortage is, especially

with all the recent layoffs in big tech over the past 12 months. Is there really a skills shortage, or is it a buzzword? I think the reality is in the middle. What's lacking is very senior, well-rounded cyber professionals: the threat hunters who have been doing this a long time and can go find those advanced threats. Those are unicorns; they're hard to find. So there is a shortage in that area, and everybody wants those people to materialize out of thin air, which is challenging when you've got all these layoffs and there aren't a

lot of entry paths for younger folks to come in and learn those skills. So I think it's a mixture, but no matter which way you cut it, the view is that LLMs can help with that gap, because they can be a big accelerant. LLMs are not in a place today where they can completely replace a human analyst, but they can definitely help a human analyst do things a lot faster. The other thing I'll add here is that I believe LLMs give us the opportunity to finally look at resetting the defender's dilemma. I don't know who's

familiar with this concept of the defender's dilemma. The phrase is that the defenders have to be right all the time, while the attackers only have to be right once: you have to protect your environment 24/7/365, and they only have to find that one chink in the armor to get through. With LLMs and generative AI, as they get rolled out and integrated into more of these products, you potentially have the opportunity to reset the defender's dilemma, because for the first time in a very long time, this is a new technology where the advantage goes to the defenders and not the attackers. Yes,

attackers can leverage LLMs to up-level their attacks, and we're seeing a lot of enhanced business email compromise and spear-phishing attacks using generative AI. It's kind of scary how people can generate entirely fake Zoom calls, go in and do a BEC attack, and steal millions of dollars, because they look like the CEO and sound like the CEO saying "I want you to make this wire transfer," and there's almost no way to authenticate it anymore. So there's a lot of danger, yes. But as defenders, we are now going to have access to this technology to help us defend the environment,

and scaling that technology horizontally is phenomenally expensive; it is not a nominal cost. If you're a company using AI to defend yourself, and the attacker is trying to use AI to attack you, that attack is very expensive, because AI is not cheap to run. This is the big difference. Today, an attacker trying to compromise a swath of people can create a spear-phishing campaign, send it out to 10,000 employees, and it costs nothing, literally nothing. But if you want to create a BEC campaign with LLMs and you tell the LLM to target 10,000 employees, guess what: that's probably going to cost you thousands of dollars in compute. So now

you've shifted some of the equation. I'm kind of an optimist, but I think this is one of those technology shifts where, if we do things right, we might actually come out ahead on the other side. So let's get into separating fact from fiction: what are the realistic capabilities of LLMs, and where are they not suited? I got into a little of this earlier: if you hear companies pitching you the idea that they've got this new LLM that's going to come in and detect all these new advanced threats, that's not a realistic capability,

because it doesn't scale. They are expensive to run, and it doesn't matter if you're running it locally: you then have to have a huge farm of GPUs, and a lot of people don't have access to that, so you're usually paying to outsource it to a company like OpenAI, AWS, or Azure. Because they're expensive to run, they're not well suited to real-time threat detection or threat monitoring running on the endpoint; they're not going to produce sea changes in detecting threats. Another challenge: we talked a little earlier about how RAGs work

and access to data. Because that local context is so incredibly important, what an LLM can and can't do depends a lot on how much access it has to the local context you have. If you're relying on cloud systems, and those systems don't have access to that local information to build that context, they're not going to be able to make the best decisions; you're going to get kind of generic response plans and things like that. So that's another challenge. And then finally, we've got this whole copilot thing. I've got a quote there from Marc Benioff, the CEO of Salesforce; obviously he's

a competitor of Microsoft's, so you've got to take this with a grain of salt, but he says Copilot is basically the new Microsoft Clippy. If you've ever used the Copilot built into Windows, it's questionable how much value you're actually getting out of the thing. It's entertaining, but beyond that, it doesn't have the ability to go and do stuff. And this is the challenge with a lot of this initial wave of generative AI in cyber security: it's all chatbots. Every single cyber security product now has an AI chatbot, and on the previous slide you saw the names

of some of them. They all have their own name, but at heart they're all kind of the same thing: these Clippy-like chatbots you can go into, ask some questions, and get some answers. That can save analyst time, no doubt. It can accelerate your analysts, because you can get information about your environment in plain English, without having to learn all the special query lingo, and it gives you that information back. It's an accelerator. But the vast majority of these copilot systems can't proactively go out and do anything. So when you look at what

value you're getting from this thing you're paying for, it's not actually able to do anything; it's just something that helps you answer questions. Is this really the promise of AI? I thought AI was going to come make my breakfast and cook me waffles, and instead it's just a chatbot. We had chatbots back when I was in high school. So, this next slide is a little hard to read; if people want it, I can send them the deck so you can read it in detail. We also have this

up on a blog on our website, but I break down what copilots are good for in cyber security versus what AI agents are good for. These copilots have a lot of very strong use cases, but because they're not proactively going out and doing things, there's a fundamental difference between them and what's called an AI agent. An AI agent acts more autonomously, more like a person: it goes and does things, comes back, takes the knowledge it gained from that, and goes and does something else. The way those are built, super quick, is that RAG picture we

saw earlier: imagine there's a whole bunch of those, and they talk to each other. That's how an agent works. Because it's got the feedback loop, it's able to act like a person. This is a key thing to think about when you're looking at how much value you're going to get out of one of these things: are people just going to sit there chatting with it all the time, or is it actually helping? All right, so we've deconstructed a little bit of the hype. What are some real case studies? What are people actually doing with this today?
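The agent feedback loop just described can be sketched like this (all tool and function names are hypothetical; a real agent would prompt an LLM where `stub_planner` makes its choices):

```python
# Sketch of an agent loop with hypothetical names: unlike a chat copilot,
# an agent acts, observes the result, feeds it back, and acts again.
def stub_planner(goal: str, observations: list[str]) -> str:
    """Stand-in for the LLM 'brain': pick the next action based on what
    has been observed so far. A real agent would prompt a model here."""
    if not observations:
        return "lookup_alert"
    if "malicious" in observations[-1]:
        return "isolate_host"
    return "done"

TOOLS = {
    "lookup_alert": lambda: "alert 42: hash flagged malicious by TI feed",
    "isolate_host": lambda: "host-7 isolated from network",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):
        action = stub_planner(goal, observations)
        if action == "done":
            break
        # Execute the tool and feed the result back: the feedback loop
        # that distinguishes agents from chatbots.
        observations.append(TOOLS[action]())
    return observations
```

The loop is the whole point: each observation changes what the planner does next, which a question-and-answer copilot cannot do.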

As we go through these, I'll also call out some free, open-source tools that I know are in use in SOCs today, where people are getting value. So, first case study: operationalizing threat intel. We just had a great talk about threat hunting, and one of the key facets of threat hunting, as was discussed, is threat intelligence: if you want to start these hunts, you need to know what the threat actors are doing. Well, one of the challenges we have in this industry is that a lot of the best threat intelligence is still disseminated in a human-readable form, either in PDFs or in blogs. A lot of companies will publish the research

from their intel analysts on their company website or a blog, and they don't necessarily provide a machine-readable format alongside it, like STIX or CSV or XML. They often don't provide that; all they have is the blog, because the reason those companies are publishing it is to show off the skills of their people. They're not necessarily trying to give you a bunch of free stuff. So how do you take advantage of that? Well, one of the things large language models are fantastic at is consuming large volumes of text and then letting you work with that text: taking threat intelligence from a human-readable form

and extracting the indicators of compromise and the TTPs, the threat patterns, so that you can then use them to start those threat hunts. That's a great use case, and there are free, open-source tools (I've got some of them linked here) that can help you do it, and there are SOCs that use these in production today. There's an AI special interest group at FIRST.org where we talk about this stuff, so people are building and experimenting with this all the time, and there are some commercial tools as well that try to make it a little bit easier.
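As a minimal sketch of that extraction step (the report text is invented; a production pipeline would prompt an LLM to emit STIX or JSON, while plain regexes keep this example self-contained):

```python
import re

# Hypothetical human-readable threat report, like the blogs described above.
REPORT = """The actor used 203.0.113.45 for C2 and staged payloads at
evil-updates.example.com. Dropper SHA-256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
"""

def extract_iocs(text: str) -> dict[str, list[str]]:
    """Pull machine-usable indicators out of prose so they can seed
    threat hunts. Regexes stand in for the LLM extraction step."""
    return {
        "ipv4": re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text),
        "domain": re.findall(
            r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|net|org)\b", text),
        "sha256": re.findall(r"\b[a-f0-9]{64}\b", text),
    }
```

The output is the structured piece the blog never shipped: a machine-readable set of indicators you can feed straight into a hunt.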

One other thing I'll highlight is that there have been capabilities like this for a while, using something called natural language processing. NLP is, let's say, a precursor (people will shoot me for saying that; it's not a precursor to LLMs, it's a totally different technology), but NLP was, until recently, the way all machine learning systems read and consumed large volumes of text. We've just gotten a lot faster. All right, use case number two: enhancing detection and response. This is where you can use LLMs to help you build and improve your threat detections. Again, we just had a great presentation that in some ways

set me up here: we just heard about Sigma and what Sigma can do for threat hunting. Well, those Sigma rules you're trying to use: yes, there's a big open-source repository on the internet, but what about when you want to build your own? LLMs can actually help you build and improve Sigma rules and detections, and there's a free, open-source tool (I've got a link at the end of the deck) that takes a piece of threat intelligence, like the one we were just talking about, plus a Sigma rule, and helps you

craft and curate it, making sure that rule will actually match the detections you're after, by leveraging the power of LLMs. Another use case we see LLMs being used for is automating incident triage and response tasks. When an incident comes into the SOC and you're not sure whether it's a false positive or a true positive: this is what consumes 70 to 80% of a SOC's resources, just doing that level-one investigation. I've got 200 alerts coming into my SOC today and there are only two that are real; all the rest are probably false positives, but I can't be

sure, so we have to do a preliminary investigation to determine whether or not they actually are false positives. Well, if you've already figured out what those steps are supposed to be for your organization, to determine whether those things are true or false positives, you can use AI agents to completely automate that problem. And it's very simple to do: a lot simpler than the classic SOAR use case of having to build big, complex playbooks and know how to write Python. One of the cool things about LLMs is that they're very good at generating all of that flow.
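Encoding those known triage steps so an agent can run them might look like this (the rules and field names are hypothetical stand-ins for a flow an LLM would generate for your organization):

```python
# Hypothetical sketch of automated L1 triage: once the steps a human
# follows are written down, an agent can run them and only escalate
# what is left over.
ALLOWLISTED_PROCESSES = {"backup_agent.exe", "patch_runner.exe"}

def triage(alert: dict) -> str:
    """Return 'close' for a probable false positive, 'escalate' otherwise.
    These rules stand in for steps an LLM-built flow would execute."""
    if alert["process"] in ALLOWLISTED_PROCESSES:
        return "close"       # known-good tooling fired the detection
    if alert["asset_criticality"] == "low" and alert["confidence"] < 0.3:
        return "close"       # low-value, low-confidence noise
    return "escalate"        # leave the judgment call to the human

alerts = [
    {"process": "backup_agent.exe", "asset_criticality": "high", "confidence": 0.9},
    {"process": "unknown.exe", "asset_criticality": "low", "confidence": 0.1},
    {"process": "mimikatz.exe", "asset_criticality": "high", "confidence": 0.95},
]
verdicts = [triage(a) for a in alerts]
```

Of 200 alerts, most would fall through the first two branches; only the real judgment calls reach the analyst.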

They're extremely good at generating these flows and automating things. This incident triage use case, again, is not about replacing the human analyst; it's about throwing gasoline on the fire so that we can get from L0 to L1 so much faster. And finally: generating threat hunting flows. Again, we just heard about hunting. When you have a threat actor you're trying to look for and you've done that first investigation: I've got this thing I'm searching for, I've got this Sigma rule, I go and search it, and now I get back these events from my SIEM or

these logs from my EDR. What should I do next? What's the next step in my hunting hypothesis, based on that information? What's the next thing I should go look for? That is an amazing use case for LLMs, because this whole iterative process is, remember, how they work: prediction, based on the corpus of all the human knowledge they've read. That's exactly what a hunter does when developing hypotheses. So it's a great use case for LLMs, and we're seeing a lot of success building things along this line, like building a threat-hunting hypothesis chain to help new hunters learn how to hunt.
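A hypothesis chain of that kind can be sketched as a lookup from findings to suggested next hunt steps (the mappings are invented for illustration; a real tool would ask an LLM to propose the next hypothesis from the actual query results):

```python
# Hypothetical hypothesis chain: each round of results suggests the next
# hunt step, mimicking an experienced hunter's iteration. A real tool
# would prompt an LLM here instead of using a fixed table.
NEXT_STEP = {
    "encoded_powershell": "search EDR for child processes of powershell.exe",
    "lsass_access": "search SIEM for new local account creation",
}

def suggest_next(findings: list[str]) -> str:
    """Given what this round of searching turned up, suggest the next
    query in the hunt."""
    for finding in findings:
        if finding in NEXT_STEP:
            return NEXT_STEP[finding]
    return "no match: broaden the time window and re-run the Sigma rule"
```

This is the "muscle memory" loop: a new hunter runs a query, feeds the findings back in, and gets the hunch a 20-year veteran would have had.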

Hunting is one of those things where, back to the skills shortage, if you've been doing this for 20-plus years you kind of intuitively have those hunches and gut feelings about what to go look for next; if you're new to the field, you don't have those hunches yet. So a tool like this can help you build that muscle memory, because it can suggest those hunches for you. Final case study, and this is another thing being used in the wild: automating compliance tasks. Cyber security questionnaires are the bane of a lot of people's existence; anybody who has to sell anything knows that. Nowadays, because

cyber security is so important and at the top of everyone's mind, you have to be able to respond to these questionnaires that your customers send you. They come in spreadsheets, and everybody has their own spreadsheet because there's absolutely no standard. So you'll get this Excel spreadsheet: can you answer all these questions? And they ask you things like: what's your password policy, are you SOC 2 compliant, where do you hire your people from, and so on. It's easy to answer one of these as a one-off, but when you're a company that has to answer literally dozens of these a

week (which is common, not an outlier, if you're selling a lot of things or have a lot of potential deals), it's extremely time consuming. And that is a great task for LLMs. An LLM can be given access to your corporate compliance information and read it all, and once it has read all of that, it knows the answers to all of those questions. So it's really easy now to have an LLM automate answering them: somebody sends you the spreadsheet, you feed it to the tool, the tool automatically answers all the questions and gives it back to a human for a final double-check.
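A sketch of that questionnaire flow (the corpus and the matching are toy stand-ins for an LLM with retrieval over your real compliance documents):

```python
# Hypothetical compliance corpus; a real system would let an LLM with
# RAG read the actual policy documents.
COMPLIANCE_CORPUS = {
    "password policy": "Passwords are 14+ characters with MFA enforced.",
    "soc 2": "We hold a current SOC 2 Type II attestation.",
}

def answer(question: str) -> str:
    """Answer from the corpus; anything unmatched goes to a human."""
    q = question.lower()
    for topic, fact in COMPLIANCE_CORPUS.items():
        if topic in q:
            return fact
    return "NEEDS HUMAN REVIEW"  # never guess on a customer questionnaire

spreadsheet = [
    "What is your password policy?",
    "Are you SOC 2 compliant?",
    "Where do you hire your people from?",
]
filled = {q: answer(q) for q in spreadsheet}
```

The human's job shrinks to the final double-check pass the talk describes, plus the rows explicitly flagged for review.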

It's a lot easier to verify something than to go through and fill it out yourself, and a lot faster; this can save oodles of time and be a huge accelerator. So this is another thing being rolled out right now with great success; there are a lot of different companies enabling it, and some open-source tools too. All right, let's get through the risks quickly, because this is not a panacea. Like I said at the beginning, I'm a realist; I always want to make sure everyone has all the information. So what do you do about data privacy issues? There are a couple of different mitigations.

The first is to make sure your tool or system has what we call a zero-retention license for access to these large language models. A lot of people have this fear that when they submit questions to AI systems, their confidential information will then be used to train the model for other people in the future. That is true if you're using the free versions, but if you're using the paid, enterprise-licensed versions, it is possible to get these zero-retention licenses, so that your information is guaranteed never to be used for training in the future. That's an important thing for people to

realize: with the free version it's something to be concerned about; with the paid version it's generally not a huge concern, though always read your Ts and Cs. And if you're really paranoid, the other thing you can do is run this stuff locally, using local open-source models. Hallucination is another big problem with generative AI: because it's all based on that probabilistic next-word prediction, it can just make stuff up. We've all seen this, so I don't need to go through all the funny examples. But how do you

mitigate that, especially if you're using it for cyber security? One mitigation is what's called a source of truth: if the system you build has a source of truth in it, it can double-check the output from the generative AI. You can use AI to check the AI against the source of truth and make sure the output makes sense and agrees with it; it's a check and balance. The other thing that's really important, and that most of these systems still have today, is the human in the loop. While you want the AI to be able to go and do things for you, and not

just sit there waiting for you to ask it questions, you still want that human in the loop right now: the AI accelerates the human, and the human makes the final decision. Having that human in the loop makes sure the AI isn't going off like a T-1000 and bringing down your environment. And then finally, continuous improvement: when you create these feedback loops, the hallucinations actually start to decrease over time, because as the system learns more about you, from you telling it "hey, this was actually wrong," that kind of mistake becomes less likely in your environment in the future.
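The source-of-truth check can be sketched like this (keys and values are hypothetical; the point is that unverified claims go back to the human in the loop rather than into an action):

```python
# Hypothetical trusted asset inventory acting as the source of truth.
SOURCE_OF_TRUTH = {
    "host-7.owner": "finance-team",
    "host-7.os": "windows",
}

def grounded(claim_key: str, claim_value: str) -> bool:
    """A claim passes only if it agrees with the trusted record."""
    return SOURCE_OF_TRUTH.get(claim_key) == claim_value

def vet_output(claims: dict[str, str]) -> list[str]:
    # Anything the model asserted that the source of truth cannot
    # confirm is flagged for the human instead of being acted on.
    return [key for key, value in claims.items()
            if not grounded(key, value)]

# Suppose the model asserted these two facts while drafting a response
# plan; one of them is a hallucination.
llm_claims = {"host-7.owner": "finance-team", "host-7.os": "linux"}
flagged = vet_output(llm_claims)
```

In practice the checker itself can be a second model call, as the talk notes: AI checking AI, with the trusted data as the referee.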

Finally, ethical considerations. I already touched on this, but there's the concern that AI is coming to take all of our jobs. The most important ethical consideration here, for me, is: if AI accelerates the entry-level tasks in cyber security too much, how are we going to grow people to do the more advanced tasks? That's something people have brought to us, and it is challenging. But one thing to think through is that, just as AI can accelerate those entry-level tasks, it can also train entry-level people. We have

the opportunity to have AI systems build training that helps people accelerate their careers and get from junior to senior much faster than they would just toiling away. I'm going to skip this for time. So, future trends: the future of AI in SecOps. A lot of the stuff I talked about today, you could do using SOAR: security orchestration, automation, and response. So it's an interesting question: if AI is able to come in, essentially build the playbooks automatically, and go do things automatically, do we still need SOAR? Again, I don't have a crystal ball, but this is something

that I think is going to be playing itself out over the next 12 to 18 months, as all the SOAR vendors integrate this technology into their products as fast as they're ready. So is there a future where we're still going to be building these drag-and-drop workflow playbooks to automate security in 24 months? Why would we do that, when we can literally just type in a box and say: I want you to do this, then do this, then do this, and if you see this, then do this other thing. Boom, done. Why would we go through all that when the AI can do it all for us? It's a very, very interesting question.
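The "just type it in a box" idea amounts to an LLM emitting structured playbook steps that a thin runner executes. A toy sketch, with a hypothetical step format rather than any SOAR product's schema:

```python
# Toy sketch: a playbook as plain step dictionaries, the kind of structure
# an LLM could emit from a typed instruction, run by a tiny interpreter.
# The format here is invented for illustration.

def run_playbook(steps, context):
    """Run steps in order; a step with a 'when' predicate runs conditionally."""
    log = []
    for step in steps:
        condition = step.get("when")
        if condition is not None and not condition(context):
            continue
        log.append(step["action"](context))
    return log

# "If the alert is critical, isolate the host; then page the on-call."
playbook = [
    {"when": lambda c: c["severity"] == "critical",
     "action": lambda c: "isolated " + c["host"]},
    {"action": lambda c: "paged on-call about " + c["host"]},
]
```

The drag-and-drop canvas and the typed sentence both reduce to a step list like this; the open question in the talk is only who authors it.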

The final one: so earlier in the presentation I kind of hammered on the fact that LLMs are not good for detection, and that's true. But the difference between generative AI and LLMs is that generative AI is the superset. Generative AI is the broader family of neural-network-based AI that LLMs are built on, so you can have generative AI systems that are not large language models, right? Imagine training a generative AI system not on language, but on all of the CrowdStrike events that happened in my enterprise in the past year.
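As a toy stand-in for training a generative model on a year of telemetry, even a bigram model over event sequences captures the gist: sequences the baseline makes unlikely score as surprising. A real system would use a neural sequence model; the events and scoring here are purely illustrative:

```python
# Toy stand-in for a generative model trained on a year of events: a bigram
# model that scores how surprising an event sequence is versus the baseline.
import math
from collections import Counter

def train(sequences):
    """Count event bigrams across the baseline telemetry."""
    pair_counts, first_counts = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    return pair_counts, first_counts

def surprise(model, seq, smoothing=1e-3):
    """Average negative log-likelihood; higher means less like the baseline."""
    pair_counts, first_counts = model
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        p = (pair_counts[(a, b)] + smoothing) / (first_counts[a] + smoothing)
        total += -math.log(p)
    return total / max(len(seq) - 1, 1)
```

Trained on ordinary login/read/logout activity, a credential-dumping sequence scores far higher than a routine one, which is the anticipation-based detection the talk is pointing at.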

Now, that system is very likely to be able to predict threat actor behaviors, right? So there is a potential to leverage generative AI for detection when you take it outside of the LLM level, and there's some very high-level research going on in this area. I haven't seen any real commercial products come out using it at all, but it is something we can hopefully look forward to: the ability to use this technology not just to accelerate response, which is what most of this presentation has been about, but to help detect threats as well, because you can kind of anticipate them. So that's an interesting area. All right, with that, I don't know if I have any time left for you

all, but, conclusion before we open up questions. This space is moving so incredibly fast, so everything I just covered is going to be completely out of date in 30 days. If you want to keep up to speed on what's going on in generative AI in SecOps, I've got a couple of links here. We have this security accelerator Slack community that I've stood up; it's completely vendor agnostic, it's just an open sharing community where we'll be talking about everything that's going on in this space. We're also going to be starting a podcast in conjunction with this community very soon. There's a link to my company there,

which is working on AI agents, and there's this other fantastic resource from a fellow named Dylan Williams. Dylan is the guy who created that DIANA open source project I talked about, using AI to help with your detections. He has an amazing start.me page; if you go to that page it has 200 or 300 different links, all categorized, having to do with AI and cyber security. You can start from the very beginning and learn all kinds of different stuff. All right, so do we have any time for questions? A few minutes over? All right. I know that there was a

microphone, if you have a

question. All right, maybe they're working on a question.

Did everyone hear that, or does she need to use the mic? The question was: as an offensive security person, what sort of things should I be leveraging AI for? Yeah, so for red teamers... this talk has mostly been for the blue team. For the red team there are a couple of different facets to look at. The first question is: are you trying to red team your classical infrastructure using AI, or are you trying to red team the AI itself? Both of those are very important. Whenever a new technology comes out in the market, I think of it as like this

three-legged stool. You can use that technology to attack us, you can use that technology to defend us, and you can use that technology in the business, in which case we have to defend it. If you think about it, that was true when cloud came in: oh, the cloud is going to be used in the business, now we have to defend the cloud; oh, how can we use the cloud to improve cyber security; oh, how can attackers use the cloud to attack us? It was the same thing with blockchain, and it's the same thing with AI. So you've got to

kind of think about those three things. This talk has been mostly about how we use it to defend us. If we're talking about attacking, red teaming, think of the other two legs of the stool: attacking the AI, and using the AI to attack. Let's talk about using the AI to attack. There are a whole bunch of open source red teaming tools that leverage LLMs to launch attacks, and there are also tools using LLMs to find, I don't even like to use the word zero day, it's like negative-one-day vulnerabilities, in source code. There was actually, I don't know if y'all caught the news, because the election has been drowning out the news, but

just yesterday Google Project Zero announced they had the first ever zero day vulnerability completely discovered by AI, published by them. They had an AI that was scanning thousands of open source projects, and

the... what was the project they found it in? It doesn't matter... there was a major open source project that everybody here would know. [Audience: SQLite?] Yes, it was SQLite, who said that? Yeah. So they found it in SQLite, and the vulnerability was in a pull request that had not been merged yet and was not in any release. Think about that: a developer wrote some code, they put it in a PR, it had never been released, it's just sitting there; the AI finds it immediately, they disclose it to the team, and the team is able to fix it before it ever gets released.

Right, so it's kind of a pre-zero-day vulnerability. That's the good guys; the attackers are doing the same thing. You know, Google Project Zero do have access to the compute resources at Google, but other than that, the stuff they're doing, anybody can be doing. So threat actors are already using AI to find zero day vulnerabilities in code on the internet, at scale. That's one thing we've got to be extremely concerned with and conversant in. So as a red teamer, you should be looking into that: how can we do the same to get ahead of it in our own code base, and in

the projects, you know, that we leverage. The other part of using AI for red teaming that's been happening: those tools are scanning source code, but you can also use LLMs to automate discovery of vulnerabilities through web scans, scanning the attack surface of the application, and there are projects already doing that as well. The interesting thing, though, is that there's a lot of hype around AI and these zero-day attacks, but that's not really what everyone should be most concerned with. What people should be most concerned with is what I talked about at the beginning, which is business email compromise

made efficient, right? So we've got a minute to talk about it. Business email compromise, for anyone who's unfamiliar, is when you impersonate a corporate officer or somebody else in the company: you go in and send an email or a Slack message and say, hey, I'm the CFO, I'm Bob, we need to do this wire transfer to pay this vendor and we're out of time, can you please accelerate this? They typically send this message to somebody they know has the authority to do it, like a deputy CFO. Classically this was done via a lot of social engineering, right? So the

attackers would go in, look at all the LinkedIn profiles in the company, and build up enough background to socially engineer it: okay, if I craft this email this way it sounds exactly like an email from the CEO, and I know right now he's on vacation in Hawaii, he's on a plane, so he's not going to see the email and he's not going to call. So they send it now, with a sense of urgency, and it'll probably go through; and occasionally those did go through. Well, what happens today is that this can now be scaled en masse: generative AI can read all these social media profiles

and build all that knowledge, and what was previously a spear phishing campaign that took a couple of weeks of research you can now launch en masse, to 50,000 people, in any company you want. And then if they take the hook and say, oh, actually, can we have a Zoom call to verify this? You just spin up your AI-generated video of the CFO, impersonate him on the Zoom call, and they say, oh, looks good, let's do that wire transfer. That's literally happening; it has happened. Tens of millions of dollars have already been stolen this way. The scary thing is, business email

compromise has already overtaken ransomware in terms of profit for threat actors, and this technology has only been around for, what, six months, a year? And the cost to use it is dropping while the capability is growing. So this is what I'm most concerned about. How do you defend against it? A lot of people are saying we have to go back to old school sneakernet, we have to have code words, right? When you're on that Zoom call and something feels fishy about it, ask the person a question where the only way they would know the answer is if you had met face to face

previously. Hey, where is it that we went to dinner last time, when we were at that event? I haven't done this myself yet, but I have friends who have actually set this up in their family: there's a secret word in their family, and if somebody calls saying they're in trouble, they have to know the secret word, otherwise you can't trust it. There have been people, and this gets me worked up, there have been people who have had fake kidnapping calls from threat actors, right? Because they can imitate your voice so

easily based on information on YouTube. They'll download that material from YouTube, they'll generate the sound of your child's voice, they'll call you and try to extort you: your child has been kidnapped, 'help me Daddy,' he says I need to pay him $100,000. It's all fake, there's no actual kidnapping, but that sounds like your kid on the phone, right? What are you going to do if you don't have this code word system in place? It's scary, but what scares me more is that this can now be done at scale. That's what everybody doesn't totally get: it's

the scale that's scary. When it was done by humans, you couldn't scale it; when it's done with AI, you can flip a few switches. And I'm way over time. [Applause]
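The code-word defense discussed above is just a pre-shared secret verified out of band. A minimal, purely illustrative sketch of how you might store and check one (the code word and salt here are made up):

```python
# The "family code word" is a pre-shared secret checked out of band. Store
# only a salted hash and compare in constant time. Illustrative sketch only.
import hashlib
import hmac
import os

def enroll(code_word, salt):
    """Derive a salted hash of the code word; never store the word itself."""
    return hashlib.pbkdf2_hmac("sha256", code_word.encode(), salt, 100_000)

def verify(stored, attempt, salt):
    """Constant-time comparison, so the check itself leaks no timing info."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(stored, candidate)

salt = os.urandom(16)           # one random salt per enrolled secret
stored = enroll("blue heron", salt)
```

A deepfaked voice can reproduce anything you've posted publicly, but it can't produce a secret that was only ever shared face to face, which is exactly why the out-of-band word works.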