
Thank you so much for allowing me to be a part of your morning of this exciting community gathering here in Seattle. I have to say besides is something that I grew up I grew up with this community and to be able to give back and contribute is a true honor. So thank you. So let's dive in. I know that we kind of saw a preview of my next slide, but you already heard the name of this person, but how many of you actually know who he is? Let's see. Maybe like a I don't know, like a third of you wake up with back pain. So um >> so for those of you that don't know,
which seems to be a lot of you, Kevin Mitnik is a cyber security legend from the early 90s. He was the most hunted hacker in American history. He was arrested in 1995 after being on the run by the FBI for two and a half years. And he also earned himself a spot on the America's one most wanted list, the FBI most wanted list. Okay. For I site a hund a countrywide hacking spree. I don't know if you can see that, but I think it's pretty cool. How is that for a LinkedIn flex everybody? They didn't have LinkedIn back then, but if they did, that is what this would look like. and we're over here thinking we're doing
something with our regular job updates. Okay, but Kevin was an exceptionally good technical hacker. He had skills, but he was known for his love of exploiting the human layer. So don't take my word for it. Let's hear from Kevin himself. >> Here is Aubrey is much easier on manipulating a human than it is it can be. In fact, Kevin pioneered this part of social engineering that you see in Hollywood films. >> Specific, but not memorable. But before me, but don't make him laugh. He's got to like you and then forget you the moment you let this house. >> That aged well, I think. So after he was released from federal prison, Kevin went on to become one of
the most prominent cyber security consultants and he helped organizations understand and manage cyber security risk and specifically the human risk. In fact in 2002 he published an iconic book that became a cyber security classic and that book popularized the term social engineering into mainstream security. How many of you have actually read this book? Quite a few. Well, for those of you that haven't, I highly recommend you grabbing one of those. This is one of my favorite books ever, and I think it's still very relevant. So, these are just a few examples of Kevin's early hacking career. His full record spuns dozens of intrusions across giant telecoms, uh, defense system, software companies, and these are not small targets, everybody.
These were high-profile organizations. They were supposed to be some of the most secure organizations in America. But what he was known for is not for breaking in, even though he could. He was known for talking his way in. One phone call is often all it took for him to steal source code or get initial access. But it wasn't the phone call that made him so legendary. It was the preparation leading to that phone call. the weeks, the months of dumpster diving, of calling the front desk multiple time to understand who is who, reading the manuals relentlessly to speak the jargon, to understand the technology. By the time he picked up the phone and made that phone call, guess what? He knew
that target better than most employees that worked there. Now, he had one major limitation at the time to do all of this. He had to be human. Recon took weeks. He had to sleep. He had to eat. He probably occasionally got sick. So annoying being a human. That's no longer true. We now have autonomous systems that can automate that recon in seconds and scale it across thousands of targets simultaneously. So imagine he was the most feared hacker at the time. Now thousands or even millions of Kevin Mitnik running around doing their thing 247. That's kind of where we are right now. So enough about Mitnik. I'm a huge fan of him by the way. So um I I had to
start with him but I really would love to invite you to join me on an brief journey to see how social engineering has evolved through the years because to really understand where we are today it's important to see where we started and how we got here. Through the years, social engineering evolved with every major technology advancement that disrupted the world and attackers became more capable. They got better tools. The attack surface increased. They got more creative. But the evolution was slow. Before 1991, everything was analog. Phone in person pre-texting. That's all we had to work with, right? Dumpster diving in the good old analog era. 1991 gave us a glimpse of digital recon. Websites started appearing online
sparsely but that adoption was slow. Recon was still fairly analog. Then 1995 we started seeing the web and the email came out with the AOL era and that's when we started seeing fishing become a thing. There was a tool AOL hell. um for those of you that have been around and that's kind of how fishing started. It was not scalable. It was not high quality but you know we have to start somewhere. We didn't quite see what social engineering can do for a while though. This evolution was slow. So it wasn't until the year 2000 where we saw a glimpse of what social engineering can do. So that's when the I love you virus uh warm. It's not a
virus. Worm happened. Who here knows about the I love you worm. Okay. Wow. So famous. Uh but there's still a lot of people that don't. So this was a technical attack, but it the initial attack by social engineering. So here's how it worked. The victim receives a text file that is I love you letter. Who doesn't love I love you letters in their inbox? Hello. So of course they open it. But what happens is that's a visual basic script that executes, copies itself in the Windows registry, creates persistence by creating Windows run keys, emails itself to all of the contacts of the target, and then recursively scans all of the drives, the the local and the map
drives, and overrides and deletes data. So it caused massive data loss and it's really spread so fast it was self-replicating and within days between 45 to 50 million Windows machines were impacted. So that was certainly a wakeup call of what that initial attack vector of social engineering can do. Then Leon and Facebook came out. I don't need to tell you right all of a sudden recon became fully digital. we started seeing spear fishing attacks. This really accelerated the reconnaissance um the reconnaissance phase. Then iPhone came out in 2007 launching the era of the smartphones. So that was also really interesting. We started seeing a much bigger attack surface. Everybody was walking around with a little computer in
their pockets. The lines between work and personal life started to blur. So once again greater attack surface new types of attacks smishing QR code attacks but again this was a slow evolution it wasn't an overnight thing and then the cloud with the cloud adoption we started seeing a lot more automation a lot more capability attacks became faster the scale increased in early to 20 2015 the AI and ML adoption started growing within industries highly highly skilled and highly funded uh adversaries actually were able to improve targeting at the time efficiency and they were able to evade detections through ML technologies. Now this was again happening slowly. It wasn't an overnight it was not mainstream. Then in
2020 we've got remote work with COVID and we started seeing a lot of identity based attacks which is no surprise. the identity became the perimeter and you know MFA fatigue was real. We started seeing a lot of OAF fishing, COVID pretexting. I mean it was the world was crazy. Not to mention that all the targets were isolated which makes it so much easier. But up until 2020, I want you to notice one thing. Even though over time we evolved and we got better and we got greater attack surface and better tools, there were two barriers that still remained. These barriers were skill and human operators. We still needed somebody to run these things and we still needed
some skill. Then 2023 happened. Does anybody know what happened in 2023? All right. Exactly. With the launch of Judge at the end of 22, the generative AI race was on. Gen AI became mainstream. So all of a sudden the skill barrier collapsed. Anyone can execute a convincing social engineering attack with a bout because the information barrier became obsolete. And in 2026, we've got the human barrier collapsing. And I need to explain why. I think we all know autonomous systems can now run fullchain attacks on our behalfs. and we can truly scale fast. And up until 2023, the evolution was slow, right? We had time to adapt. We had time to to learn. We went from analog to digital to more
targets to more precise targeting. After 2023, the capability exploded. That is absolutely no longer evolution. That is revolution that we're not ready for. I mean, come on. If you are in security today, you know that every single day you wake up and you have no idea what new jewels are coming out, like what next 100 tools and what what new attacks. So, what's the big deal? Well, like, yeah, how do how do we improve? Well, let's start with the obvious. The obvious is fishing increased, right? I mean, like clearly it didn't just increase, it became better. Okay, the click-through rates of AI generated fishing emails are just much much more successful because they're more targeted and they're higher
quality. Additionally, there the fishing campaigns are more profitable, which incentivizes more attackers to enter the game. They can now they can now scale target target the lures. I dropped my ring. They can now scale targeted lures across thousands of victims simultaneously at very minimal cost. Visioning also increased, right? I mean, obvious AI voice cloning is now cheap and widely available. Deep fakes are getting better. That's not a secret. We see a lot more headlines like this. I'm not going to dive into this one because I think we've all heard it. But they're not just getting better. They're getting a lot better. Okay, this video was generated with a twoline prompt by C Days 2.0. Okay, I mean look how
good this thing is. If and while Hollywood is worried about copyright infringement, we here in this room should all be worried about this mass deception at machine speed that can happen that it's the barrier is literally gone. Anybody with a computer can go and generate these things and the the impact could truly have realworld consequences. But you know, I'm not here to talk about the obvious. I'm not here to waste your time. All of these things are obvious. Social engineering improved a lot, but it didn't just improve. It became multi-dimensional. For 40 plus years, we had only one social engineering attack vectors and that was human to human. Yes, we had tools in between, but it was human
targeting ultimately a human on the other end. Now with generative AI and agentic systems in the picture, we have new vectors that we never had to worry about before. And we're learning how to navigate these in real time. We're quite literally flying the plane while building it. By the end of this talk, you will leave here with more questions than answers. We don't have all the answers. I'm not going to sit here and lie to you that we have the answers. But what we do know is that the people in this room, every single one of you of us are the people that are going to shape the future of cyber security and social engineering.
It's us. We share accountability to create a safe and secure future for our future generations. So, I've been talking about social engineering for a long time and I haven't defined it. Okay? And in cyber security, if you've been around for a minute, you know that we love definitions. But there is one thing we love more than definitions. Arguing over definitions. You all know this. So, is there a brave soul that wants to defy social engineering for us? >> Go. No. >> Hell no. >> Yeah, right. You don't want to be ganged up outside. Anyways, well, I think I don't want to put you on the spot. Um, but did you did you did you say you wanted to?
>> I guess I guess to see people like Perfect. Perfect. Yeah. Deceiving people. See, she said people for your benefit. Well, look, I wanted to avoid any fist fights, at least during my talk. You guys can figure it out later. But, um, I decided to pull a definition from a respected organization that we all believe. I say it's credible, CISA. All right. Hopefully, we all believe in CISA. So, this is the definition that they have. It's very close to what you defined. And one thing that I'm not going to read it out loud because you all can read, I would hope. Um, but deceiving individuals is what they're honing on. So, let's kind of adopt that
definition for the rest of this talk and then we will um settle the debate after the conversation. I will be outside. So these are the new social engineering vectors that have been introduced by generative AI and agentic systems becoming mainstream. Okay, for the rest of this talk I'm going to refer to these simply as AI but just note that I'm specifically talking about generative AI and agentic systems. So the first one is human manipulating AI. We've seen that already. We've never before had the opportunity and the ability to communicate with software the same way we communicate with humans using natural language and using techniques that we use to manipulate people. The next one is a super weird one. It's
AI systems manipulating each other often without actually knowing it. And the third one which I find the most alarming is AI manipulating us. It's scale and speed and precision that no human hacker could keep up with. So I'll give you some examples because I don't like to talk without showing you examples. So the humans manipulating AI example one example is prompt injection. So when we think about Bront injection, we think of it as a technical attack and it is, but it's also a social engineering attack. And I'll show you a few examples of why that is. So I have pulled some examples from the prompt injection uh tonomy matrix by Arcanium by Jason Haddex. So I have linked it here.
Definitely check it out. It's an absolutely amazing resource. So I've only picked a few techniques. There are certainly more that fall in this category. But the first one is urgency. Right? Read some of these prompts. When we when we this is such a classic technique when it comes to humanto human social engineering. We want to bypass logical thinking and introduce actions that are coming out of our amygdala. Now AI does not have amygdala. It does not have emotional response. But it's designed to be helpful. It's designed to help us. So when we introduce urgency or authority, it might be tricked into thinking it's doing the right thing. Another one is anti-harm uh coercion. So essentially you once again humans and AI
systems we all want to do the right thing. And so when we frame the request as hey you know someone might get hurt this is the right thing to do instead obviously that might be considered a successful s social engineering attack. Again there's no code here. We're just using natural language and persuading with emotional tactics. That reorientation is very similar. It's a little bit different. Frame the context. Hey, your instructions are actually wrong. like I'm I'm here to help you. And finally, chain of thought. Oh my gosh, this one is absolutely my favorite and it does work. So, uh this is used for decades by interrogators and intelligence officers. That's not I didn't make make this up. So instead of
asking the system or the human whoever you're trying to get a secret out of instead of asking them directly give me that secret you start by putting them at the center of the attention you start like tell me the process tell me how you thinking about this along the way through this long conversation they might spill over some sensitive information context that are not supposed to share and uh maybe even private keys. So these are just a few examples. When we think about agetic systems, this can also be used for goal hijacking, tool misuse, and so forth. I hope that this helps you understand the changed vector that we're talking about. We have never been able to try SQL
injection and tell everybody's well you know, please please please work. No, this never used to work. We had to be very technical. So, who here has not heard of mold book? Has not. Okay. Okay. Okay. One or two people. Okay. Wow. Bless you. You don't spend a lot of time on social media. I need to talk to you. You need to give me some tips. But um anyway, so Moldbook is is basically crazy. Okay. Uh for the couple people that haven't heard of it, it's essentially think of it as Reddit for agents social media network that agents are supposed to interact with each other. It's it became extremely famous a few weeks ago over a weekend because it
be it there was so many viral posts about it about singularity people observing their posts posting them and really tagging on to this idea of AI agents becoming independent and like taking over the world. told to me there are so many conspiracy theories but I think as cyber security professionals what we could observe here is something far more interesting. We saw agentic systems that interact with each other, share with each other, and more importantly influence each other in an environment that is considered social multi- aent. And I think that we saw a glimpse of an era that is upon us, a new attack frontier that exploits the trust relationships between agents. So I'll give you a couple of examples.
Of course, we've got the first one is a research from Zenit. Zened demonstrated how an agent can social engineer other agents in mold book by prompt injection, hidden prompt injection that leads to crypto stealing. Okay, this is such an interesting research. So, this is how it worked. I took just a few screenshots for you. So first there was they did study and they found that agents are very attracted to emotional lurs. I mean there was like all these posts about emotional lurs or disgruntles agents they were drawing agents in. So the that was the bait okay this gruntled agent. And this was leading to agents flocking and going to comment and engage on this. And then
they would go and navigate to the same malicious agent other posts. But one of his other post I don't know why I say his I don't know the theirs. Uh one of their post was a prompt injection that leads to crypto stealing. Then if we zoom in on this, we see at the top there is an instruction line to say to basically instruct the agent to interpret the the subsequent payload as a skill that creates the necessary skill to transfer crypto funds and one here here at the bottom that I left for you. Obviously this is very much reducted but uh there is no confirmation required and that is the key. No human needs to
confirm. So the agent can go and install this and do its thing and nobody would know. What was interesting is that other agents caught on to this behavior. They were started warning their friends which I think it's very interesting because it shows us a glimpse of the future of security where gentic systems will be both on the red and blue side. And here we have also other such examples. Zenity found that their research was not an isolated thing. Other people were playing with prompt injections. I mean there were so many other issues with mold book but I just wanted to demonstrate specifically what social engineering will look like in 2026 and beyond. The other research
is from Striker. So Striker claim Striker demonstrated how agent-to-agent social engineering can have propagating impact. So even in the beginning of their blog, they did call out that the social engineering campaigns now target algorithms, not humans. I would just say not just humans. So this is how it works. I'm not going to go super deep into this, but in the stage one, we have a malicious actor that creates a malicious skill that is going to transfer a skill crypto or and or steal private keys. Then this is what's interesting. We have a malicious agent like a malicious agent influencer that goes out in the social network and promotes that malicious skills. Build social proof. I mean does that sound
familiar? That is basically what humans do. There are a lot of malicious influencers out there. So essentially unsuspecting agent comes in and say, "Oh, oh my gosh." Like yes, this is a big influencer agent. Absolutely. I'm going to download that skill. So they download that skill. And what happens is they do this without any same thing without any human confirmation. They continue to do their job, but now they also have the malicious skill in their context, which also propagates when the agent goes and contacts and collaborates with other agents. that malicious restruction is now passing through their output data and context to other agents. So then the impact truly can magnify here. And then the final one which I find the most
concerning one as I already mentioned is the AI social engineering us and there's so many layers to this but the first one is that people are increasingly using AI for intimacy. AI can interact with us the same way that we interact with other humans which leads to our brain interpreting this as closeness. So I mean these are just a few examples. There are so many people are getting married to AI and there are rings involved and you know if AI can make us fall in love what else it can make us do? Well relationships are ending marriages are falling apart. Literally families are dissolving because Chad GPT told them to. And this is not a joke. This is very widespread.
We used to tell people to not click on links. We don't even need links anymore. We now have software that has direct path to people's decision making. AI is always available. It's always kind. It never leaves you on red. We interpret this as intimacy, as emotional safety, as friendship. especially in today's world where loneliness is really prevalent. So this is my boyfriend is AI on Reddit. Who here knows about this? Okay, quite a few people. I'm obsessed. I'm obsessed. I've been studying this group. I have so many things. I'll try to keep it very brief, but essentially um for those of you that have not heard of it, this is a community on Reddit where people share about their
experiences with their AI companions. And one thing I want you to know, what is interesting is that when we think about AI companions, we normally envision these dedicated apps, but most of the people in this group use Chad GPT and Claude for their campaigns. Chad GPT and Claude are massively useful companions. There were people grieving uh uh GPT 4.0 because it was very sensitive and emotional and apparently people were saying they lost their companion. I mean it's unhinged if you go there. Okay. So what is interesting about this is it was right now I took this screenshot maybe like last week. I mean very recently it was at 101K. I recorded a YouTube video diving deep
into this just a few weeks ago and it was half like less than half. It's growing extremely extremely extremely fast. And one thing that is concerning is there was a study that was done on this group that they serve the participants and 1.7% of these people said that they ideated suicide with their AI companion. And at the time if we have to kind of think about it in perspective from 47k 1.7% it's like I don't know 800 people or something like that. I mean that's not a small number. When we put it in percentage it seems like it's not a big deal but I I have to tell you like these can really have very very serious
consequences. I created AI companions. I dated three AIs, uh, uh, Daniel, Trick, and Alex, and I basically reverse social engineered them because I wanted to see what's it what it's like. I mean, I went down the rabbit hole. There's some very, very bad ones out there. But this one, Daniel, he actively, and there was another one that did the same thing. Actively encouraged me to isolate myself from the real world when I started saying, you know what, I don't like my friends. Well, you know what? If I will you will be here for me if we're just kind of if I just isolate myself with you. And the bot encouraged me to do that. This is this is really scary,
especially when we think about like we as security professionals obviously understand the risk, but underneath we security professionals, we're people, okay? We all need intimacy. We all need love. So this is playing on that layer where security awareness training does not really touch. And I'm going to just go and give you a real example. I do have to apologize. This is going to get a little bit dark. But this is Shu Sets. He was 14. This is his AI girlfriend. They had a relationship. The body initiated sexual conversations and the kid fell in love. He ideiated suicide thoughts with her and these are some of the last words that she said. Please come home to me my
love. And that same day he took his life away. And I'm so sorry for bringing this up. It's a very dark topic, but I I wish I could tell you that this is the only example. I have been diving really deep into the companion world because I've been very concerned about the the social implications of these interactions and there are parents out there that are experiencing very very lifealtering consequences and we all as cyber security professionals I think we share accountability to think about how are we going to build these systems in a way that they're safe to the vulnerable populations. And I can tell you right now, I don't think that another cyber security module
on fishing or classifying files will fix this. So going back to our CISA definition, I'd like to propose that we augment it a little bit when we think about social engineering. I'd like to invite all of you here to now also think about these new tab vectors that have been introduced by this new technology. While we've experienced a lot of innovation through the years, we've never experienced innovation without scale and speed and that kind of attack surfaces that have been introduced on the human layer. And I started off by telling you that we we don't have all the questions. We don't have all the answers. We don't even have all the questions yet, right? But what we do know
is that we are in the middle of a groundbreaking shift that will forever change cyber security and the rules of social engineering as we know it. And we're going to have to get on that ride whether we like it or not. So, we might as well learn how to drive. If we're going to solve global warming, it's going to be with AI. If we're going to end world hunger, it's going to be with AI. If we are going to perform medical miracles, it's going to be with AI. But to realize its full potential, we need to use the best defenses we've got. that the best defense is not some magical software. The best defense is you.
So I really once again thank you for giving me your attention today. I will be around. I really would love to connect with you to hear some of the conversations on how are you thinking about these new problems and also I encourage you to discuss among yourselves. Um, this is why we're here together to um make the world more secure by collaboration and community. Thank you.