
All right. So, we're going to talk a little bit about some of my favorite subjects right now, and a lot of it has to do with social engineering. If you read the description here, I am a huge fan of social engineering, the psychology behind it, what happens with people. And I'm a huge fan of AI. Like, I'm an AI geek. I'm a nerd. I've been playing with DeepFaceLab and some of those AI tools for like five years, back when they were still actively developing DeepFaceLab and it took an entire day to do like a 30-second clip. But it's always fascinated me, and what got me into
computers was actually photography. I used to be into photography back when we actually shot film and then went to the darkroom and did all that, and then I was in the Navy. When I was in the Navy, one of the guys I worked with had an Amiga 500, and he showed me that you could scan a photo and digitally change it, and I was hooked. That was it for me. I'm not going to tell you when that was, because that was a long time ago, but I got into it because of the graphics and stuff, which is why I really love some of the AI stuff that we have now and some of
the tools that we can do things with, especially on the GenAI side. But I'm fascinated by how this is being used against people. Now, I'll say this. If you've talked to any marketing departments and they talk about AI, it's usually surrounded by some terrible FUD, fear, uncertainty, and doubt. And I don't like that. They're trying to sell you everything. I don't know if you've noticed, but these days if you say AI, the venture capital companies want to throw money at you. So everybody is AI everything, right? It's AI on the blockchain with zero trust or whatever. Okay, they just start throwing stuff together like that to get funding. It's
pretty amazing. There are some folks doing really cool stuff with AI. And not to plug too much, I don't work for them or anything, but Gardier down there, if you've talked to them, they do some really cool stuff with agentic AI. So, we're going to have this stuff in our lives now, and it is going to change our lives fundamentally. And I say this: it's the same way the internet changed our lives and how we live every day. It's the same way computers changed our lives. If you were around in those days... I went to high school and they taught us how to type on a typewriter, folks. Okay? There's a couple of us around like
that. Okay? Our lives have been through turmoil a couple of times. This is just another one of those times, but it's going to be part of us moving forward. There's no going back. The thing about technology is we do want to embrace technology. My wife is a technophobe. She hates anything AI. She's like, there is no Alexa downstairs. There is no AI. I run Home Assistant for a lot of things, so I have my own voice thing going on downstairs now, and she's like, I don't want it down here. And I keep reminding her that there was a time when elevators were technology. People were like, "I'm not riding that thing. I don't trust it."
And at some point in time, a chair was actually technology. It's in our lives now every day. We're going to see more and more of that with AI. So, my name's Erich Kron. I'm a security awareness advocate for KnowBe4, if you know who they are. I've been, gosh, I've been in the industry since the 1990s. My first network was Windows 3.1 on a LANtastic network, which was very, very long ago. But I've worked in lots of different areas. I spent 10 years with the Department of Defense, where I ended up the security manager at what's called the 2nd Regional Cyber Center for the US Army. So we did the network
infrastructure across all of North America for the US Army. I helped deploy the US Army's Active Directory. I was a domain admin there. I've been in the trenches for a long time, and now I get to go tell stories and do things like that. I love my own voice; it's why I'm here speaking. So enough about me. I will warn you all, this is only the second time I've done this slide deck, so if it goes horribly wrong, my bad. Okay. But this is what we're going to talk about: how AI is being used in real-world attacks, the differences between the various kinds of AI, and practical steps. So, first of all,
social engineering. Again, huge, huge fan of social engineering. The fact is, humans are targets. It doesn't matter where you look or how you look, the stats are out there. So it's an issue we really have to pay attention to, and one of the things AI is really good at exploiting is the people side of things. It's always interesting. This is from this year's Verizon DBIR. They're saying something like 60% of breaches involved a human element, but then they also go on to say things like, oh, X percent involved credential reuse, which is a human element too, right? So people are a problem. Of course
vulnerabilities are exploited pretty significantly too. There's lots of things we've got to do; it's not all just a people problem. But the people problem is being greatly amplified by AI. So social engineering is a key attack vector, right? We kind of talked about this: QR code phishing. How many of you know about all the phishing variants, all the -ishings, right? There's phishing, vishing, smishing. And someone used the word quishing. If I ever find out who they are, I'm very upset with them. I hate that word. I cringe every time I hear it. So QR code phishing, not quishing, folks. Okay, remember that. Please don't perpetuate quishing. >> It's what? >> TOAD.
>> Oh god. Telephone-oriented attack delivery. Okay, whatever. I'll stick with vishing on that one. I'm not going to run around saying TOAD and have to explain what that means. "Oh, you were attacked by a toad." Who the hell's going to go for that, right? Okay, so here's where I think the big problem is. We're having all this fear, uncertainty, and doubt about deepfakes, right? Everyone's heard of deepfakes, voice deepfakes, video deepfakes. Oh my gosh, they're going to change the world. And yeah, they're a problem. But here's where I think the bigger problem comes into play: the efficiencies that bad actors are gaining using AI to generate attacks. And I'll
tell you right now, when it comes to a phishing email, does it matter if it's AI-generated or human-generated? Not in the least, right? The only difference is you can generate them faster, and I have an example of that here. And you can translate and localize, which is a game-changer. So when we talk about people, there are a couple of reasons why we fall for this. And I'll tell you, in all my years of doing this, in my younger years I used to have that mentality of, you know, the users, oh god, the users did it again. Oh my god, the users are terrible. We've gained a bit of a reputation like that. Y'all
seen the Saturday Night Live skit where the guy walks in and he's like, "Move"? Okay, we're not always the best people people, and we tend to look down on users at times, but here's the deal. There are reasons why people fall for this stuff. Our brains work in a particular way. Daniel Kahneman explained it with his two systems of thinking. System 1 thinking is what we do when we're in that autopilot mode. Anyone ever drive on a long trip and you get there and you're like, where'd that five miles go? You're steering the car, you're following the speed limit. Unless you're in Florida or drive a BMW,
you may have used your turn signal, right? If y'all have driven around here, you know what I'm talking about. People are like, I'm not telling them I'm going that way, because they'll get in the way, right? But yeah, we go into that mode where we're able to do stuff without thinking a whole lot. That's System 1 thinking. The thing about System 1 thinking is it takes very little energy. And believe it or not, our brains are a huge consumer of our calories every day. We use a lot of calories thinking, so our bodies want to stay in a dumb state as much as
possible. Well, System 2 thinking is where we do the problem solving, the critical thinking kind of stuff. All right? And why this is important is, System 1 thinking is what we fall back to when we're in a heightened emotional state. It's our fight or flight. It's what we do when we're under a bunch of pressure. The problem is, we make a lot of mistakes when we do that. Who here has made big decisions while they're really freaking out, and it worked out well? Yeah, probably didn't work out well, did it? >> It did. >> Okay, well, fair enough. Sometimes we get lucky, right? >> Yeah. Yeah. Like Colin said, even a broken
clock's right twice a day. Sometimes we get lucky, but for the most part, we make mistakes in that mode. System 2, the critical thinking part, is where we figure stuff out. And this is where the social engineering comes in. The fear, the emotional tweaking, all of that is designed to push us into System 1 thinking, where we make mistakes and we transfer out a bunch of money. That's what it is. I don't know about y'all, but I've never seen a phishing email that said, "Hey, whenever you feel like it, you want to go pick up some gift cards? Or if you get around to it today, you want to wire some money out?" Right?
That doesn't happen. It's always, you've got to do this now, or else. That's because they're pushing us into that state. So it's important to understand why we make the mistakes. This is also why, throughout my career, when people have made mistakes and I go talk to them, and I implore you all to be reasonable with them, we may look at something and go, "How did you not see this for what it was? It was so freaking easy to spot." But you're not the one under the pressure. And timing has a lot to do with it, too. When I first started speaking, this was about nine years ago, I joined KnowBe4, and I reported directly to the
CEO. I was like two months into our 90-day, what do you call it? Yeah, trial period, probationary period, all that kind of stuff, right? I was easy to fire, let's put it that way. And I'm at the airport getting ready to get on a plane, and I get a meeting request from our CEO that says, "Erich, I need to talk to you about some of the things I heard about your presentations." And I could not hit accept fast enough. I mean, this is the boss man. So, I'm getting ready to get on a plane, I hit accept, and my own team Rickrolled me, which was just wrong.
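An urgent message that appears to come from your boss is exactly when a quick, boring System 2 check pays off: does the display name actually match an address on a domain you trust? A minimal sketch of that one check, using Python's standard library email parser; the domain and addresses here are made-up examples, not anyone's real mail setup:

```python
from email.utils import parseaddr

# Domains allowed to send "urgent" requests. Made-up example domain.
TRUSTED_DOMAINS = {"example-corp.com"}

def looks_spoofed(from_header: str) -> bool:
    """Flag a From: header whose display name claims an identity
    but whose actual address sits outside the trusted domains."""
    _display_name, address = parseaddr(from_header)
    # Take everything after the last '@'; no '@' at all is suspicious too.
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in TRUSTED_DOMAINS

# The real boss passes; a lookalike domain gets flagged.
print(looks_spoofed('"The Boss" <boss@example-corp.com>'))   # False
print(looks_spoofed('"The Boss" <boss@examp1e-corp.net>'))   # True
```

A real mail gateway does far more than this (SPF, DKIM, DMARC), but the point is that the display name, the part your eye reads under pressure, is the one field an attacker controls completely.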
Okay, it was one of the simulated phishes our team does. At first I was mad. I was like, "Oh, come on, man." And then I started realizing, yeah, my presentations were starting to show up online. They could see where I went. They could see who I reported to through LinkedIn. All that information was publicly available. But I was like, "Man, this stinks." One of my colleagues, her name's Anna Collard, incredibly smart gal, she actually started Popcorn Training, which we acquired, so now she's on our team. She got into a car in, I forget where she was speaking, but she gets into the Uber and they're taking off, and she
gets a message on her phone that says, hey, there's a problem with Uber. She clicks on it, and sure as hell, it's our team. They just happened to hit her with the simulated one at exactly the right moment. The ironic part is she had to do the remedial training, which is a training course she designed. Okay? Yeah, it doesn't always work perfectly. What's that? I think she did pass. I think she did. Yeah. But it goes to show you, even those of us who are in this every day can get caught by this kind of stuff. And AI is making it easier to do. So,
use cases for AI in phishing: mostly it can be used to generate those phishing emails, but it's the translation and localization where they do some really effective stuff. And building profiles of victims. Now, a lot of times we have these breaches, and they always gloss over the part where they're like, "Oh, there were no Social Security numbers or credit cards in this breach, yay." And then you start looking at it and you're like, "Well, yeah, but this was a medical breach." And now, if I'm a social engineer and I know you had a procedure on such and such a day in such and such a hospital, if I know that information about you, man, I can run some amazing
pretexts based on that. Okay? If you've ever had surgery here in the US, it's like for months you're getting these letters: "Oh, this is the lab guy. I took the blood from the phlebotomist, walked three steps, handed it to this person. We're not in your insurance network. That'll be $300," right? Why wouldn't I just throw myself into that mix and start doing stuff like that? The Ticketmaster breach, okay, that was hundreds of millions of people, right? If I know you went to the Taylor Swift concert on such and such a day, and you were in this row, in this seat, how hard would it be for
me as a bad actor to put something together and say, "Hey, I saw you were at this show. We're having a flash concert that we haven't announced yet. 500 bucks gets you tickets." I know you were in this seat, and you're a Taylor Swift fan. People are going to jump all over that. It's money in the bank. So that information that's getting stolen is actually extremely useful. And the more we see AI out there putting these things together, pulling from different breaches and combining the information, it's pretty wild what some of the OSINT tools will do right now, even without full agentic AI doing this. But
you can pull together an amazing profile on a human like that. And if I know some things that you believe are somewhat private and I reference them, I have your trust. So we've got to be careful with some of that stuff. Now, this was interesting to me. This was done by X-Force, which is an IBM thing. They were talking about click rates: X-Force got an 18% click rate on the average phishing they did in red team work, measured against the company's average click rate. Then there was one designed by a regular human, not their X-Force folks, because they're super great, about 4% better. You all remember that if you want to
sign a check: 4% better with X-Force. Anyways, the AI-generated one came in at 11%. Okay, so AI wasn't quite as good at fooling people as the humans. But here's what was interesting: they said it takes about 16 hours to craft those human emails. The AI one was done in five minutes with five simple prompts. That's your difference. That's where it gets scary. I really believe humans are going to be able to think outside the box a little better and put together some of these attacks, but the sheer volume AI can run at is huge. There have been a couple of phishing-as-a-service groups now that are offering
AI generation and translation as part of their phishing-as-a-service offerings. So it's trickling down to the lower-level folks and being offered like that. Vishing, of course, is voice phishing. It gives you very small hands. I'm referencing the picture. Come on, it was kind of funny. Okay. Anyways, how many of you have seen the ones like, there's a warrant out for your arrest, or heard of people getting those calls? Yeah. My stepdad got one of those. My stepmom was in surgery, and he got a call like that. He was already under a lot of stress. They got him to log into his machine and download some software, and they took
over while she's in surgery. Then they told him he was going to go to jail if he didn't pay the fine, so they walked him through logging into the bank. The only problem was, under so much stress, and he had suffered a stroke about a year before that, he couldn't remember his password to log into the bank. Then he got a call saying his wife was out of surgery. He said, "I've got to go," hung up on them, took off, and told my sister-in-law about it at the hospital. And she was like, "Hold on a second. Wait a minute." The only thing that saved his bacon was that he couldn't remember the password to
his bank account. And that's scary, man. That is really scary. They're also doing the "we've kidnapped your family member" stuff, and yeah, they are using some AI in the background for that. But that's been going on since before the deepfake stuff, too, where they have a lot of noise in the background and it sort of sounds like a kid. And when you're under emotional duress, especially if you like your kid and you want him back, which is sometimes optional if you have teenagers, I'm sorry, sometimes you're like, "Can you keep him for a couple of days? I'll throw in a few extra bucks." But
this kind of stuff is happening, and it's scary. I hate this one, but yeah, it's going on. There's another one, not new, people getting messages. I worked with a reporter here in Tampa, actually. They got a message that said, you know, we turned on your camera while you were on an adult site, we're going to send it to all the people in your contacts, blah blah blah. What was interesting about this one is they said, just to show you we're not screwing around, here's your address. And they had an address in there, which happened to be the reporter's parents', and a picture of the house that they pulled from Google
Street View. Now, you want to talk about something that's going to freak someone out, you show them a picture of their house and say, "You pay us, or else." All that can be done manually, but you can also use AI to go out and do it. So, smishing, of course, is text message phishing. Same thing: generating new lures, building the profiles. And what I've seen that's interesting is, when they initially contact you and people get back in contact with them, that conversation will be run by an AI-driven chatbot at first, until they've got the person hooked, and then a human will take
over and continue the scam from there. So it's an easy way for them to make the operation more efficient. And these chatbots are pretty amazing, man. I mean, really amazing. I'll talk about that in a second. And then quishing, I just hate that word: spinning up the websites behind the QR codes, stuff like that. Of course, if you travel a lot, you see these all the time in the airport, on the tables and stuff. What's to keep me from dropping another QR code sticker on top of the real one? Nothing at all. And then I capture your stuff. We see them on the
parking meters, stuff like that. This is going on quite a bit. And I saw a very interesting one in Southern California, where they actually sent letters to people. In the mailbox. Remember that thing in front of your house that collects, yeah, garbage? They sent a letter that said, "Hey, this is from the city. You were parked on the street overnight when we were doing street sweeping, and you have a $53 fine." And then it gave a QR code and said, "Scan here to pay." So boom, people are scanning it. And there's some psychology behind that: 53 bucks is enough that a lot of people are not going to fight it. You know, they're going
to grumble about it, but you're not going to spend a day fighting something like that. And it's believable; it would have happened overnight when you were probably sleeping, etc. So there's a little psychology behind that, and we can break it down more later if you're bored. But that kind of stuff can be done. And then the OSINT stuff. Fun story about these. I live in a gated community a little bit north of here, called Trinity. We had new gates put in. They were horrible. It was a horrible mess. So what they did is they put up a big sign outside the gate that said, to learn how to use the gate
system, follow this QR code. And I was this close to Rickrolling my whole damn neighborhood. I figured if I did that, though, the HOA would be out there measuring my grass every day, like, "Oh yeah, you want to play games, huh?" Right? But I mean, people are trusting these things. Y'all remember the Super Bowl, where a QR code bounced around the screen for a whole ad while there was a collective sigh of horror through the security community? Yeah. So, they're out there. And then this one I thought was really, really interesting. How many of you heard about this? $25 million transferred out due to a Zoom call. What was notable about this is we're on the cusp of being able to have live
deepfakes that are believable. There's still usually some movement, some problems with it at this point, but give it a year, maybe even less, and it's going to be happening. But in this one, what they did is they devised a plan around a Zoom meeting, and they scripted the entire thing. They went out, they got video and audio of these executives, and they built deepfakes of each of them, and they included one other person, a finance person, on the call. So they spun up this completely scripted call, people talking back and forth about this transfer they needed to do. You had the one guy in the background who had nothing to do with talking to
anyone, which is my favorite type of Zoom meeting, I don't know about y'all, right? You're just like, "Yeah, you do it." Well, they cut the Zoom meeting off early, and it was followed up by a message to that finance individual: "Hey, you heard us talking about this. You heard us discussing it. We need to get this money sent out. We're not going to bother spinning up a whole new Zoom meeting." And we've pretty much all been on one of those meetings that's ended in a dumb way, right? So it was very believable. And what did they do? They wired out $25 million. But it was all like a movie script, with Zoom. I thought that was incredibly
clever. Now, will we soon be able to do this in real time? Yeah, we absolutely will. But yeah, this was a lot of money. And then, if you know anything about KnowBe4, we sort of hired a North Korean guy. Whoops. Right? Who hasn't? Okay, no, really. What happened here is this person did four Zoom interviews. The guy really knew his stuff, and I was hoping my buddy Colin would be here; I think he was involved in the interview process when he worked for us. He said this guy was smart, and Colin knows his stuff. So, it was for an AI
position, a higher-end AI developer position. And we did background checks on this guy, which all passed, etc., because it was a stolen identity. One of the things he did, in his resume and the materials we got from him, is he provided a photo of himself. We later found out that this was the photo he provided, but this was the original they started from; they did a little AI face fix on it to make it look more like him. We hired him. And one of the things he said, which we now know is a big-time tell, is, "Look, I'm moving up to Seattle. Can you send my
laptop and stuff up there? I'll pick it up there and start work up there. I'm in the middle of moving." So we ship it to Seattle. Somebody picks it up. And as soon as it comes online, I mean as soon as it comes online, weird stuff is going on. So, red flags up. We called the guy and said, "Hey, man, what's going on? What's all this stuff you're doing?" And he said, "Oh, well, I'm having problems getting connected to the router, and someone said this stuff would work." We're like, "Okay." We hung up, and the team's like, "Yeah, not believing that," and they cut his access. It
was like 12 minutes before we cut him off the network. And of course, I have to say this because we're talking about it: he had no access to any real information. It was our onboarding training during that time; for the first couple of weeks, you don't get anything. So nothing was stolen. But that's how we found out about this. This is going on all over the place. Okay? And I like this quote: if you're hiring contract workers, you either are interviewing or have already hired a North Korean. It's going on everywhere. What they did here is this laptop went to a laptop farm in an apartment. They used a
Raspberry Pi that they would VPN into, and it acted basically like a KVM for them, so we couldn't see a direct remote connection to the machine. Okay? And there's been some buzz about these laptop farms lately, sometimes up to 70 laptops sitting there running the whole time. And what they're after is money. They're not even trying to steal information. I mean, if you throw it in their lap, they're taking it, they're rolling with it, but they're funding the regime through the paychecks of being on payroll. And I'm not kidding you, I've talked to a lot of people since we blew this one wide open and started talking about it
openly. I've talked to a lot of people who say the same thing: they were great employees until we found out. They're really trying to hold on to those jobs, and they're doing good work. Sometimes, like on this high-end AI position, we think what they probably would have done is throw the work off to somebody else just to put a butt in a seat; they would eventually fail. But this other guy is running around doing interviews. And we also believe that in a lot of cases during the interview, they have another person sitting there, so when you ask a question that may be a little tricky, they're furiously firing it off into ChatGPT and looking stuff up, so
that the person on camera looks really smart in their responses. So, this is a real thing, and there's a lot of money being made with it. They believe there are thousands of these people working in our companies, actually sending money back to North Korea. >> What's that? >> No, they're not really even spying or stealing. >> It's just about money. >> Yep. It's just about the money, because they're so heavily sanctioned. So it's pretty crazy how much of this is going on and how much they're using it. Now, if you're talking about OSINT and location finding, I did this one. We were at a conference, and a couple of days later one of the guys posted
this up: "Where in the world is so-and-so, after an awesome whatever, I was off to this location." Names have been removed to, you know, protect the innocent. So I took three of those photos and asked, where the hell is this? And ChatGPT nailed it: Toronto, Canada, down to which square they were in. It even gives a total breakdown. For image one, it said, we can see there's a bar district, and based on the other photos you provided, which turned out to be Toronto, we believe this is the bar district in Toronto. So it's actually reasoning over all the photos together. I mean, there's no way I
would have figured out where that person was just based on that. It took like two minutes, and I knew exactly where that individual was from those pictures. So this is the kind of stuff that can get a little dangerous and a little scary if you start thinking about it. Now, have you ever LLM'd yourself? This is kind of like Googling yourself, and it will or won't answer sometimes; sometimes you can push it into it. It answered for me because I'm considered a public figure, I guess, because I do public speaking. But I tried some other people and it wouldn't answer. I think I probably could have broken it by pushing a little bit
further, but at least there were some guardrails in there. And it came up with all kinds of stuff. One of the things it said was that I was a US Army veteran, which I wasn't; I was a contractor there, and I'm a Navy veteran. So it's not always right. But the information it pulled up, this was like pages of stuff about me, stuff I don't even think I knew. I was like, I sound pretty damn cool, man. Try this sometime and see what it comes up with about you, because you might be surprised. Now, if I'm a bad actor trying to put together a pretext, this is great information.
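And that profile-building cuts both ways: everything your people paste into a public LLM can end up in someone else's profile of your organization. One cheap mitigation is scrubbing obvious sensitive patterns before text ever leaves the building. A minimal stdlib sketch of that idea; the patterns and the sample text here are illustrative only, nowhere near a complete PII list:

```python
import re

# Illustrative patterns only -- a real policy needs a much broader list
# (names, account numbers, internal hostnames, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace common sensitive patterns with placeholder tags
    before the text is sent to any outside service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Contact jane.doe@acme-example.com or 813-555-0147 re: SSN 123-45-6789."
print(scrub(msg))
# Contact [EMAIL] or [PHONE] re: SSN [SSN].
```

Regex scrubbing won't catch everything, which is exactly why the policy question in the next bit matters: the tooling is a backstop, not a substitute for rules about what goes into a public LLM at all.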
So, AI platforms. Some of the common ones: LLMs, we all know about those. Some of the threats there: uploading of sensitive data from your organization. If you don't have a policy about what information can be put into an LLM to be summarized or analyzed, you're doing it wrong. You need to fix that, because yes, it does an awesome job taking the quarterly finances and cleaning them up, but you really don't want to be throwing that into a public LLM, right? Then hallucinations and poisoning: skewed results. It can absolutely produce wrong results if you're not careful, like that one lawyer did, and yeah, the
judge wasn't real happy, because it turned out the cases he referenced didn't exist. A little lazy. I think the guy might have actually faced disbarment over it, because he went back to the judge and said, "No, no, this is what it is," after asking ChatGPT for more details. And it was all BS, guys; there was nothing even close to that. But there's also poisoning. I was at a conference not long ago, talking to somebody about one of their co-workers who decided to mess with things: they started posting pictures on social media of a llama with the co-worker's name on it. So ChatGPT
started seeing that person as a llama and associating them with a llama, which is fantastic. But it shows you can sometimes poison those results by doing stuff like that. Okay? So don't blindly trust it. Then, of course, there's the information and data gathering and the chatbot side, the human stuff, and we'll talk about that. GenAI, obviously, is creating stuff. We've got to worry about the audio and video impersonation, and manipulated documentation. I'm seeing cases where folks are going in and, you know, swiping a bunch of stuff, the old Walmart shuffle where it's one for me, one for nothing, and then they're using AI to generate receipts for those items and returning them.
So, they'll steal stuff, and it's not always Walmart, because they tend to have some better controls in place, but a lot of smaller organizations: they'll shoplift it, go back with a fake AI-generated receipt, and return the stuff. So that's one of the things going on. Face or voice swapping, yeah. And misinformation and disinformation: if any of you are on that social media stuff, you might have heard, it's terrible, don't believe anything you see there. And then agentic AI. This is the stuff I'm really interested in. The difference between agentic AI and the other AIs is that with agentic AI, you throw a goal at it and say, "Get out of here and go do your
thing." Whereas with plain LLMs, it'll ask you about the different steps: do you want me to add this to the picture, do you want me to add that? Agentic AI just goes out and does things. This is wild-west craziness out there. It can do some really cool stuff; I've been playing with a couple of different ones, and they can do some really cool stuff. Automated recon and penetration testing. Persistent threats due to evasion: it can evade things, or help evade things, within networks very easily. Credential stuffing: it can take all these breached usernames and passwords and throw them at stuff to try
to log in in a very automated