← All talks

Threat Actors Interest in AI - Separating Hype from Reality

BSides PDX · 202447:34118 viewsPublished 2024-11Watch on YouTube ↗
Speakers
Tags
CategoryResearch
StyleTalk
About this talk
As artificial intelligence (AI) continues to reshape the technological landscape, it has caught the attention of not only innovators and businesses but also malicious cyber actors. This talk presents cutting-edge cyber threat intelligence (CTI) research examining how threat actors are engaging with, exploiting, and targeting AI technologies - and how that is different than the hype and sensationalism in the media and from vendors. My research delves into: Trends in dark web discussions and marketplaces related to AI tools and vulnerabilities Analysis of the development of dark LLMS Emerging tactics, techniques, and procedures (TTPs) leveraging AI for malicious purposes The general skepticism and discussion observed from threat actors discussing AI We will present findings from our year-long investigation, discuss newly discovered TTPs and the uneven AI uplift across types of threat actors. This presentation is crucial for cybersecurity professionals, cyber threat intelligence analysts, and AI researchers seeking to understand and mitigate the growing intersection of AI and cyber threats. Rachel James is a cyber threat intelligence expert who co-chairs the CTI Program Development Working Group in Health-ISAC, and an active member of Curated Intel. With a rich tapestry of expertise spanning over a decade in cyber threat intelligence, threat hunting and incident response, Rachel stands at the forefront of defending against digital threats and enhancing security frameworks and has been recognized for her outstanding research into threat actors, cybersecurity and artificial intelligence. --- BSides Portland is a tax-exempt charitable 501(c)(3) organization founded with the mission to cultivate the Pacific Northwest information security and hacking community by creating local inclusive opportunities for learning, networking, collaboration, and teaching. bsidespdx.org
Show transcript [en]

[Music] uh so my name is Rachel James I'm here to talk to you a little bit about threat actors and their use of artificial intelligence uh trying to separate a little bit of what we see as the hype and sensationalism from what's reality why should you bother listening to me well I've been in the industry for a little bit I've got some letters that are after my name the most recent one the GLE is the gak machine learning engineer certification um I've uh kicked around in cyber security mostly in healthcare I've worked at some of these local organizations currently work at novaris Pharmaceuticals which is a heavy AI adopter not spilling any tea that uh you

might not already know by Googling it um I also do um a lot of involvement in the community so I Mentor in the PDX cyber Camp I've been in uh a presenter at both Sim Portland and inac um I'm in the curated Intel Community uh heavily evolved in Oasis which is the organization that um sets the stick standard the structured language for uh threat intelligence exchange uh and I'm in oasp so um wrong way so I am a core team member of the oasp top 10 for llm and gen we just added the Gen recently I am the lead for the prompt injection rewrite for 2025 so in the next couple weeks you'll see the new top 10 um I was

the the lead for that um prompt injection entry uh I also um lead a cyber security uh guidance for Defenders layer as part of that OAS project so I have a research partner named Brian nakama and we are producing a a number of guides one of them that's already out there is the preparing and responding for deep fake events guide pretty interesting uh we also have a couple of incident response guides that are coming about um to help people prepare for attacks against Ai and attacks that are uh happening to you that are enhanced by AI right uh and I am one of the authors of the Red Team guide uh for AI testing significant contribution there around uh

threat modeling for for AI pin testing uh I co-chair an AI working group in h ISAC and uh I also co-chair a CTI program development uh working group for H ISAC and is little to say I do a lot with AI and cyber security uh a lot of different groups um spending a lot of time in this area uh one of the nicest things anyone can ever say to about about me is to call me the voice of reason right so uh a while ago I I created this project so you can go out you can uh you can Google cyber shojun and GitHub and you'll find it um so I I started this project that was basically

I really wanted to document exactly how uh thread actors were using artificial intelligence not the way researchers hypothesized they were using them not the uh potential ways that could be used that were really interesting right like Defcon and black hat but how do how are they actually like the documented instances of how they're actually using it so I created this GitHub and to my surprise it got a lot of traction so I've signed research licenses now for um miter and tidal and another number of organizations to use this I've been told is the um only centralized source for this activity mapped to ttps some of which were um basically new ttps uh that were proposed by Microsoft and open AI

um and some of those ttps that were uh suggested by myself um that are now going to be used in miter Atlas uh that is all to say for all the sources that I speak about today if you want direct links you want more information about this research you want to be able to speak to this yourself this is my GitHub you're welcome to go and take a look at that so really what I'm been trying to do uh about this is because of the sensationalism with AI we there's a lot of conversations out there like these thread actors now that you don't have to have like any technical knowledge and then they're going to use AI to like

attack our companies in ways that we're totally unprepared for and like cats and dogs are falling from the sky and I was just how how close to reality is that right and I think like so much of what we do in this community is to be that voice of reason so that's really just what I want to help people do today right like I'm just want to give you the same resources so that you can become that voice of reason in these conversations right so I'm going to talk a little bit about threat actors and sort of their interest in attacking artificial intelligence systems um how AI uh how attackers might be using AI to

further their cyber threat activity existing um the malicious or dark LMS or sometimes called like black hat gpts the development of those um and of course no cyber threat intelligence presentation would be complete without some wild uh predictions that I'll I'll have to make um so about threat actors actually attacking AI um it's really interesting because when you uh so to to to back up and just be clear um my background in cyber threat intelligence is includes a number of years of counter threat intelligence and what I mean by that is that I have personas that I have developed over time that have access to some trusted Community spaces so I can observe these threat actor discussions

um and some of them you'll some of the the um work that you'll see on my GitHub is actually screenshots where I've removed my persona's information of some of these conversations um but that's largely what I'm I'm characterizing to you today uh and yes uh most of these images are also AI gener at which later you'll find a few of them probably funny like I did uh yeah so actors discussing AI is really interesting so they they're talking a lot about basically companies are really going at break neck speaks nothing we don't already know to uh adopt Ai and they want to be they don't want to be the ones out of all their competitors

it's like the last getting the value out of AI so they're moving really really quickly and they're favoring Innovation over security and this is going to lead to like misconfigurations and security not being embedded in the processes and that's going to create vulnerabilities and opportunities for us right uh they think you know the way the nature that artificial intelligence systems work right like you kind of have to think about um expert systems before we had AI right was kind of based on a set of rules right I kind of I say to the computer if you see X and you see y I want you to do a b and c right AI is different in that we are

giving them data and we are giving them labels and the AI is deciding the rules A and C and because of its constant feedback with the environment that's what that's what makes it deep learning right that feedback from the environment the constantly changing also makes it a moving Target for security just think about it like what is what has been like our uh sort of go-to our default about like when when do you recheck and pentest and and apply security whenever there's a major change but if you have a system that's constantly changing that move that Target for security has now just got we already struggle with like regular sort of application development to make sure

security is inserted in all of these places and now we've just made it a whole lot harder and the thing is they're largely right like my own observation of the community my own observation at my company I would say this is this is a large characterization and and you know nov arst Pharmaceuticals is a large global company has been in the AI business for a while and we still struggle with these so uh what is the big this is my one question okay so what is the biggest weakness of all organizations don't overthink it yeah people humans right yes yeah so uh social engineering is the biggest problem right so the other discussion that I see a lot in thread actors is

like we really like social engineering like it's been the initial aess Vector well except for last year when like moov it made like cve exploitation initial uh like the initial Vector for one year but other than that it's been pretty much Social Engineering right and that is primarily now how we see thread actors using AI right a lot of fishing emails things like that um but so most people think about like the ji fishing emails but what people don't think about is we're actually conditioning our users to be really comfortable giving sensitive information to Ai and then trusting what AI tells them to do and just doing it like giving them the authority so again

nard is heavy AI adopter anyone else have like an AI chatbot as part of their like help desk yeah anyone have it for like recruiting right why are we putting these in all the places where social engineering is already a problem this is what appalls me right like so we're now conditioning our users like what do we tell our users about fishing like one of the most like prevalent ways of people being social engineering is around the help desk and now we've just like put this bot in there and they're like trusting actions like the AI bot tells them to run a command they just do it right so it's amazing to me um so you know the thread

actors are definitely aware of that and that's that's enticing to them because that's always that's always already been really successful for them right and uh there was a paper I think it was last week it might have been two weeks ago Microsoft put out that I loved um again so being really heavily in the The Prompt injection space lately what I loved about this was that they were talking about something that a lot of researchers in the prompt injection space have been uh trying to discuss and bring awareness to which is basically due to the very nature of the way llms are made right the way that they're made is that prompts the language you're

using to interact with it has a stochastic influence on the model's Behavior itself so the theory is we probably can never actually get rid of prompt injection like there's a lot we can do to try to like lower the risk but it's probably due to the very nature of llms impossible to eliminate entirely which is fascinating and Microsoft came out with this paper that basically said yeah you know what prompt injection and jailbreaking is actually like we're social engineering AI is that not terrifying if humans are our greatest weakness and now we've just created computers that can be socially engineered oh man right like yeah this guy's face right here he's like that's how it felt man that like

it's right in the stomach like that gives you that and and it's true right like it's it's really true like the we have created artif we have created a computer system where our very language that we use to interact with it affects Its Behavior affects the generative space and the generative trajectory it's really fascinating so you really have to be careful when you're thinking about the use cases where you apply these things and and how much you trust it you basically created the like least trustable computer ever it's amazing to me um so this is one of the things that you know they're they're very keenly interested in because it's been so successful for them so you know one of the things that

I do like to point out is that while there is um many discussions on thread actor forms about AI there's there's just very little evidence that like there's a collective uh effort to specialize in this area right or to coordinate uh attacks against artificial intelligence um most threat actors actually seem to be using it for just like individual gains and like knowledge uplift which I'll talk about in a minute um and and people ask me about this all the time they're like are you sure and I'm like well yeah you know there there's definitely probably nation state activity that's happening behind the scenes that I'm not aware of so I always like to to caveat that um and just say

you know I I also do not dive into the Mis dism information side of things because that's kind of its own topic all of itself it's its own specialty um but when it comes to to to this you know this is this is I've been around in cyber security for a while and uh I always talk about Sim swapping so Sim swamping hopefully you know most people are aware of this but it's the what a lot of thread actors are doing now to be able to take over your phone to bypass multiactor authentication right the thing is when I was you know fresh about a college uh I worked for a telecommunications company and they were

um uh investigating an Insider threat situation related to sim swapping that was being uh done largely for uh long-distance fraud purposes so Sim swapping as like a malicious activity tactic had been around for years right but it wasn't until we had MFA in practically all the places that you started seeing it more commonly applied that's kind of what's Happening Here they're pretty successful without having to use AI they had record-breaking ransomware payments last year right uh pharmaceutical company this year had a record single record-breaking ransomware payment right so don't have a lot of pressure to adopt uh AI in this ways um there are a lot of discussions about jailbreaks but if you want to get

ridiculed on a dark Market Forum go in there and Sh share your jailbreak um most people point out like yeah you can get some model to give you a drug or a bomb recipe but there's like a million guard rless models out there like a flow GPT or a Dan or like what do you what do you care right so you'll get you'll get made fun of um actually one of my uh my my favorite insults uh that I I saw was um so a lot of the the uh thread actor groups that I'm on I don't speak the negative Lang the native language right so I'm using translators um so I'm on this restroom forum and somebody is

coming on and they're asking for access to worm GPT which I'll talk about in a minute um spoiler alert by the time they were asking about it it didn't exist and the uh the threat actors in The Forum started insulting him and saying and this is an important piece like I'm not just saying this because it's a funny story although it's a funny story uh I'm saying this because there's a cultural issue here right so this guy comes on he starts talking about worm GPT and wanting access to it and he starts getting made fun of and he's like oh you need AI to help you get like help you make a script you need AI to help you do

fishing and then called him a mama hacker and I was like something's wrong with my Russian translation cuz I've been I've seen a lot of Russian insults and that was a new one on me so I called contact my friend who's a native Russian speaker in in the intelligence Community I was like what is this and he told me essentially what this means is the only person who thinks that you are hacker is your mama and I was like that is the best so uh that's one of my favorite insults that I ever saw in there so but they really tells you a lot about like the reception people get and one of the reasons that I get so frustrated when I

see these like kasperski release like this like article it's like there are over 10,000 posts about threat actors discussing artificial intelligence and I'm like did you tell them that 90% of them are people dissing them like or that like the that 90 and literally like 89% of the posts are like selling access to chat GPT subscriptions like your Netflix subscriptions like and they're characterizing this as like oh you know be the voice of reason be the voice of reason um so they're very extremely exaggerated right they'll they'll do some like statistical analysis but like by far most of the conversations are just like selling subscriptions um they are using it for a little bit of uplift right uh I have

absolutely no basis for 8020 okay uh little hint 70% of Statistics Are All Made Up On The Spot uh but this is why I use the 20 because it feels like to me you have to have about 80% domain knowledge in whatever it is that you're trying to get it to do for you to get that 20% uplift okay so if you want to use it for privilege escalation you want to use it to help craft a loader for a piece of malware you have to already have about 80% knowledge to walk that thing through to a successful output that is going to help you so that's really important to think about the 8020 lift right so we do see them

using like fishing emails there's this like crazy uh graph where they're like it's been a th% increase but you always have to be like very skeptical about this because uh this is part of the problem is a lot of these statistics are being put out there by the by by security companies and by vendors in the space right I don't doubt there has been a significant volume increase a thousand just makes me look cross-eyed right um we do see a a definite increase in using uh deep fakes to for bypassing again social engineering right very successful let's keep going with that tone uh recognizance and research basically like lazy Googling right I do it I'm guilty

of it um and scripting mostly Powershell which I find really interesting uh so one of the pieces of research that my team is involved in is we got a a license from the University of Illinois if anybody saw the research paper um teams of llm agents are able to successfully exploit zero day vulnerabilities if anybody saw that research paper we got the license for it um so that we could do some in-depth Research into lm's ability to generate exploits one of the things that I find really interesting interesting about this research and uh we'll have some preliminary findings like in December so like check out our obos website little plug um but it's really fascinating

because different llms are better at different exploit Generations so chat GPT happens to be really good at web application vulnerabilities in fact if you go and look at all these papers about llms uh doing exploits almost all of them are web applications uh vulnerabilities which is fascinating uh but llama the Llama model uh tends to be favored by thread actors um for Powershell uh and most of it is like recognizance and uh opusc trying to understand the way certain Technologies security technologies have like default detections um ways that you can do certain actions without them being logged so just kind of personal like research and uplift 8020 right uh we're getting really close to one of my favorite AI generated

images I'm excited uh so Tren micro has this and again this is like all my GitHub if you guys want to like go out um but I like to cite the sources uh like any good unbiased cyber threat intelligence person do I like to cite the sources that prove my point um and Tren micro has a really good one actually uh so they they talk about like the these thread actors are kind of lagging behind adoption for the rest of us and I don't know that I've ever given a talk where I have said threat actors are lagging behind us adopting a technology that's really unusual and I'll talk about why that's like exciting for me in a minute um and uh you know

for them again this is like the evolving nature of cyber crime they're trying to learn how to commoditize it is it worth the investment I mean they're already doing pretty good without it right again threat pressure is there any PR pressure for them to change um but they are doing now we're seeing like these deep fake Services which is kind of that like ransomware as a service fishing kit model that we have seen so much get commoditized in the space now we're starting to see that for like fraudulent identification and deep fakes um and this is ranging from like getting past the know your customer requirements for banks uh which kind of you know reaches

back into my uh ancient days being a money laundering investigator um uh as well as uh there's a a fairly big trend of dprk agents that are getting jobs at companies um Verizon data breach report also was pretty good about talking about this there's just like kind of limited uh interest um most are about selling account access like I said there's just not really this urgent need uh to use AI to be successful um and then you know again the Deep fake technology social engineering right that's that seems to be where it is this was my favorite though was when sofos came out with this I love this quote because it's so like if you spend the time in the actor

forums and you see how they talk about it this is pretty good it is overrated it is overhyped I think it's redundant and probably unsuitable for generating the actual malware and again they're largely right having spent a lot of time working with these uh these systems but there is some known use there is definitely some known use and and I want to make sure that you are aware with those and you're going to see like you're going to be probably uh a little surprised at like how sort of narrow that scope is and I'll talk about why that is so we see right a lot of AI generated social engineering so spinning up websites very quickly spining up the

emails very quickly uh chatbots um that we've actually seen some uh lookalike llm activity where they basically try to uh impersonate your llm to get people to interact with it um and commit fraud uh Powershell scripts like I said um a little bit of backd door and supply chain activity I even like hesitate to put this in here because this was a jrog finding and even jrog said We Believe actually most of the back doors that were created in the hugging face models uh were actually created by security researchers there was only they they discovered hundreds of them and like only two of them they were like these these actually look a little suspicious but this really like freaked people out

but if you notice like this isn't too different than the world we live in today right these are still threats we deal with and have dealt with um but what you'll see is like when you dive into like the Nitty Gritty of like what is being generated and code in these scripts it is like super narrow very specific spefic use cases it's not like polymorphic malware right like so what's not uh oh excuse me uh I think it's on the next slide um it'll be like I'm going to like the the LL and gated piece will be like pulling down a file from a URL and launching the executable right or it'll be just uh a service principle

enumeration of your environment it's not like Mass polymorph per malware that's being generated by Alim as far as we can tell it's like very very spec it's like me using it to blunder my way through API development is what it reminds me of um so you'll see them using it for like coding and debugging and like understanding technical errors like a lot again how I use it um and there's a little bit of like recognizance and knowledge which is interesting like one of the um uh one of the the cases that we'll see is they'll like research the company that they're in uh they're attacking or their their line of business there was an AP uh group that

actually used it to understand like satellite Communications right uh to to further their objectives um and as I said before they'll they'll ask some questions like we've seen um them asking questions about like net development and very specific questions about tools so really this is around like opusc detection evasion and living off the land right these are all techniques that like they kind of been around right and yes does llm provide an uplift absolutely but it's an 8020 they already have to have that 80% knowledge now does that 20% matter to us absolutely if they're able to execute their objectives in a shorter dwell time that is Meaningful for us in fact last year was

one of the first years that dwell time was reduced specifically in healthcare organizations that resulted in a reduction of the cost of a data breach for like the very first time and that's going to basically be erased that little bit of gain that we got is going to be cut into by the fact that they can get their answers faster and that they can uh commit their objectives faster now is it revolutionary is it brand new attacks is it allowing kindergarteners to like take down the dod no right but does it matter yeah absolutely but again let's be be the voice reason about it right um so yeah this is what I was saying is

like this is what's not happening there's no like and this is what a lot of people will like ask me about like oh there's going to be like hyper polymorphic malware generated by Ai and I'm like have you ever asked AI to opusc your code has it ever worked after you did it I can't even get AI to like convert a Unix timestamp reliably like people who tell me that I'm like you really have not used an llm to try any of this have you um but they don't really they're not deploying like completely novel ttps that we're unaware of right they are using a lot of like living off the L uh land techniques

right and like asking AI about that but like there that's been a problem for us already for a while right that's been really popular uh and there's no like AI to use like Mass hacking of zero days and and hopefully if we got but if we got time I'll talk to you a little bit about what it actually looks like to try to get AI to successfully generate an exploit um one of the pieces that we're doing in that uh uh exploit generation is um creating an AI that will be able to exploit Juice Shop right because BR a wasp it it's a lot more work than you would want it to be uh so here it is

here is my favorite AI generated image of almost all of these were AI generated by by the way though even the cartoon one that was a A co-worker of mine who decided to AI generate my avatar I if you notice it was like on the second slide there's like a ton of Star Trek stuff in the background and like a can of Red Bull and I was like you got me you got me um so nation state use of AI right so we see them researching again companies and cyber security tools debugging and generating scripts likely using some content for fishing campaigns right and remember these are the most well resourced well uh uh sophisticated

threat actors and this is what they're using it for um salmon typhoon write translate technical papers receive some publicly available information right enhanced Googling uh some assistant with coding again common ways processes can be hidden on a system uh so other nation state activity I like to actually call out this one because it's kind of interesting it relates to another section that I'm going to go to uh oh do I have a oh look at that guys I have got a laser watch out uh so brief subscription to worm GPT use unknown so there is one threat actor group that we saw get a subscription when I say brief this was less than 30 days a subscription to what is a one of

the very few true dark llms the thing is we don't actually have any evidence that they ever used it so the the mass hysteria around dark lolms is the one that makes me chuckle the most um because this is probably like the scariest instance we had and like there's really nothing to talk about uh this is one I mentioned earlier about Research into Satellite Communication protocols and radar technology but again basic scripting tasks right understanding some vulnerabilities this is nothing super crazy a novel uh so here are all of the LL or excuse me the ttps if you go to my uh GitHub these are the ones that either uh are credit to me or uh Microsoft and

and open AI these are uh ttps that we are now sharing with u miter Atlas and attack um so you'll see them and you you'll also if you're a title customer you'll see my work there um and then I just like I put this up here because like it's been a couple times now that we've seen like specifically Powershell now there is an observer problem with this and this is one of the things that we're hoping to fix with the guide that we're putting out in no ASP is how many of us like actually know what to look for to know whether or not a script that was involved in one of our incidents was

used with AI there's not a guide out there I know because we're drafting it Powershell comes out because what is really uncommon in Powershell grammatically correct comments almost nobody actually comments their Powershell you know who comments Powershell exra a lot is llms so it stands out like people who are used to seeing this like you can tell that it's like really well commented and I'm guilty of this like even my I like to think that I do very well documenting my code but I'm telling you there's not capitalization punctuation and there's a ton of misspelling right so like when you look at code and you see it extremely well structured and you see a lot of comments especially in something

that like Powershell is not very common for people to comment out you probably have ai in your hands um and it's really fascinating too because there's some words that AI likes to use in comments that are very repeatable so you start to develop kind of like a a smell a code smell for when AI is being used uh so the UK also um talked about this but I think this is this is a great uh sort chart that they put up this is what I mean by like the 8020 and this is where I get to like into the meat of this conversation is that the the thread actors that are like let me back up AI

is very data hungry right to to Really develop a well formed artificial intelligence system you need a lot of data and you need a lot of labels so when people talk about threat actors developing dark llms or customized AI think about the data that they need now us on the defender side we have a ton of data about campaigns and about attacks we have labels about which we're successful which passed our security tools which were not successful we have an incentive to share that data with each other so that we can train our AI models threat actors have zero incentive to share their data right their campaigns which ones were successful even if they know do they know which

ones bypassed security tools do they even know no they really only know the ones that were like truly successful and like ended up contacting their C2 or falling for the scam right so the am the amount of data that you actually need to have as a thread actor in order to really use AI right develop a dark lolm however you wanted to talk about it those are the thread actor groups that are already wildly successful right if you have that much data as a threat actor group do you really need AI right so this is kind of what the UK's national uh cyber security Center said was like the ones that are like sort of

the most well positioned or these really well-resourced nation state actors and we don't really see that they have a whole lot of pressure to adopt AI now again do I think that they're doing it probably I actually think China is probably most likely because they're like Mo as thread actor groups it's like tons and tons of data Gathering and then correlation right and they do it with their own people that's why they have the um uh what's it called the social score right so they're taking all this behavioral data and like correlating it into predictions and like they like we have a credit score they have like a a social score and things like visiting

your grandparents can affect it so that's always been like China's MO is like being really good at like massive data collection and then correlating it for like unique intelligence insights when they hit um Anthem Blue crossb Shield they hit United Airlines and they hit the office of personal management all at the same time Chinese hackers did what did they do they correlated the information about Med uh medications and Medical Treatments to people who had received background checks for government clearance to flights and they identified us intelligence operatives from that correlation we had to pull people back because their lives were in danger that set us back as an intelligence Community like a decade it was devastating so they have like their

MO and like the way they structure their operations is to get as much data as they can and correlate it so I absolutely believe that they have developed this so that when they do massive exfiltration from companies they're able to identif identify the things that are the most valuable to them um and correlate that with other information but that's happening on the back end right um I don't think we will see evidence of that for the next like two or three years uh so there is challenges here right in confirming AI us the first is that um the reporting like the organizations who are in the best position to report about this like open a now now credit

where credits due open Ai and Microsoft been very transparent even when they didn't need to be right but a lot of these uh companies that have these systems they're not required by regulation to tell us this stuff is happening and a lot of times they sort of have like an incentive not to tell us right so I think you know part of the challenge is we won't we won't always always know and like like I said a lot of us um if we might being attacked we haven't been trained to to to notice to have the evidence that AI was being used um a lot of reports about attacks and C uh like and campaigns using AI are

really spoken about uh hypothetically or they're being done by researchers uh so really just is just to say like the project I did was really just focusing on the confirmed reports I absolutely believe like you know doing this CTI thing I have M it to high confidence that there's AI use that's out there that we're just not aware of right but rather than to talk in hypotheticals I'm just going to give you what actually we have confirmed so just keep that in mind when you're looking at uh at my project my get Hub and these pieces um it is it is what we know to be confirmed all right dark LOL Lim sorry already talked a little bit about like

how you kind of need like this like a big amount of data right for these like dark llm developments um there's like very few conversations um on the the criminal forums Telegram talk spaces that are about this uh they're they're um a few of them are talking about like using it for like fishing emails but there's very few discussions about actually like investing and developing these these um some some users so there's I will just say there's a lot of scammer scamming scammers I know that was hard to get through but uh that's the best way I can can Define it you go out there and they're like we have worm GP see 3.0 and if you look at

the comments and again this is one of the things that enrages me about this topic being covered is that they'll be like there is there was uh 20 dark llms that were just released last month and I'm like did you read the replies because like almost all of them are like this was a scam I gave the guy my money and I got a chat bot and telegram that did nothing or they paid for money they get into the interface and they realize like all they got access to was like a flow GPT that had a system prompt that said something to the effect of like generate professional sounding emails right so I go through on on my my site

and actually label these as either scams or low efforts right um but the community just thrashes these people and a lot of times the thrashing the calling out at being scammed even when these people get suspended for for scamming on the uh forums that's not like CED in the the sensationalism um so there's a very small set of examples uh so two that's how small set two examples and this these really crack me up um as far as I can tell like actual llms trained on actual malicious data for the purposes of being used maliciously there was two and both of them thank you security uh research organization created by were based off of model uh created by

researchers uh they were subscription based this is the one ap43 briefly signed up for it briefly because if you will notice July to August and Krebs God bless him he docks to this guy uh so he turned out to be a he was actually like genuinely on the forums he had done some uh successful malware development before which is you know gives him a little cred in that area uh a lot of these other like scam low effort GPT black hat sales like they don't have a good reputation in the Forum uh but this guy did um and uh he was docked and right after he was doxed he pulled the plug he's like everybody is mischaracterized

my intentions and so I'm going to take it down and I'm like dude you've been developing malware for years I don't think we misunderstood your intentions you just hate that you got doxed and that's it that is it all the other ones are selling you access to flow GPT or like a do anything model right so this has been extremely hyped but yeah it's basically like a lot of scammer scamming scammers uh so yeah just to wrap up really like there's a lot of skepticism a lot of like doubts about it being anything it generates being fud right fully undetectable in delivery culturally you're going to get called a mama hacker uh and like yeah definitely

like there's some commoditization happening with like deep fakes but like what's really interesting to me is like you don't ever see the uh AI models that are being sold like with proof that it actually successfully compromised an organization which is very different than when you see ransomware as a service or fishing kits in the commoditization space they almost always always have a screenshot of proof now they have plenty of screenshots of like the functionality but they never show like an organization that like paid out because they were attacked by this tool so you know take that for what it's worth if they can figure out how to monetize it if they can figure out how

to develop a dark llm and monetize it they will just hasn't really happened yet so why again just no pressure to change tactics we actually have the data Advantage right like I said before we are the ones with the most labels we are the ones that have an incentive to share our information we have a unique opportunity as these guys lag behind us in adoption and we have an incentive to share information we have a unique opportunity to actually use AI to get ahead of them which I don't think I've been able to say in my career in cyber threat intelligence ever right so I'm I'm pretty excited about that uh AI predictions about the robot war wo uh

right impacts on on Cyber tax I think what we will see is just like kind of an evolution of the ttps that we've already been seeing little bit of compression in the objective time frame and and the dwell time that people have um but I think you're going to see like a slight increasing of that in like the next two years um but mostly like the threat in the next year is just going to come from evolving ttps that we're we're sort of already familiar with um yeah and I think uh you know we are going to have some event in the next like two to three years where we'll be like oh wow nation state such and such

was using Ai and developing this for like a while like we're going to become aware of it we just don't have the visibility now I think that'll happen in the next two to three years and like definitely like as they can configure uh figure out in the like criminal commoditization space how to uh make it a you know dark llm as a service they'll figure it out they'll people are certainly willing to pay uh so looks like we have all right perfect so 12:15 or 12:14 I left just enough question uh time for questions um if you do not get a chance to ask a question if you do have a question please come up to the mic in the middle

so the uh folks online can can hear you if you don't have a chance to get in a question reach out to me on LinkedIn I'm always happy to chat never hard to get me to talk about this subject it's hard to get me to stop Yes clap for we're all introverts we're like yeah do it I'm Not Invisible um so um thank you for saying that we're ahead um I keep trying to explain to my friends they should not be worried about this stuff and it's hard to get across to them when they fall for the Deep flaky stuff like uh there's arm militias attacking female workers in what North Carolina um they swore it was a real

news story and um so I'm constantly having to tell people not to get so upset about this stuff and I don't know the words to say to them because I'm the only person in my peer group that knows how this stuff works yeah yeah deep fix deep fix and misinformation are really fascinating right because it's we used to operate off of well we'll I I grew up in Missouri it was the show me state right show me a video of it then I'll believe it that is the way we've operated for so long and it's really hard for people to accept i t when I when I was creating the Deep fake guide I talked to to researchers in this area

researchers are telling me now I I work with deep fakes all day long I can't tell the difference now like it's gotten so sophisticated I can't tell in fact we announced our deep fake guide with a deep fake podcast and even the people who who I worked with on the guide when I gave them the Deep fake podcast they were like who'd you hire for this and I'm like that's synthetic audio friends like it's terrifying because we have gotten into this place and that's why I think it's such a critical time to be the voice of reason right to help I'll keep trying to calm them down but it's not easy yeah I mean uh sometimes because because when we get

into a fear State neurologically our logic disappears try try with humor show them some deep fakes of like Morgan Freeman like make it make it funny and approachable so the fear starts to back out of their brain but show them other examples of deep fakes to make them realize how easy it actually is to create something convincing but if you can kind of do it in a humorous way where you're you're pulling them out of the fear space they can start to re-engage their logic okay thank you yeah thank you it was a great it was a great example too because like what's at the bottom of that is like social engineering fear uncertainty doubt

that's why that works right yeah oh okay time thank you yeah we're out of time but definitely please please reach out to me either on a GitHub on LinkedIn you can find me roaming out in the halls I always love to talk to people about that thank you [Music]

[Music] w