
So hello, everyone. Thank you all so much for coming to my talk; I hope you're having a good time here. My talk is called "The Algorithm of Deception: Inside AI-Powered Social Engineering." My name is Amamira Muhammad, and I'll be talking about how I feel AI has been supercharging social engineering in a way we've never seen before, diving into both the technical process of that and the human process behind it. A quick little "who am I" before we get into the meat and potatoes. I'm from Silver Spring, Maryland, born and raised in the DMV, and I currently work in DC at the Smithsonian Institution as an IT specialist. But you know, I love Seattle, so I flew here just for this. I'm a summa cum laude graduate of Towson University in Baltimore, Maryland, and I've earned some security certs, so I'm pretty into this kind of topic. I'm also co-authoring an upcoming Red Team Handbook. It covers labs and the behind-the-scenes of what red team attackers are looking for in your vulnerable systems, and how you can protect yourself using that background knowledge of what they're after. Some of the labs have already been published in the NSA cybersecurity curriculum repository, so I'm really excited for that to be coming out
around next year. So let's hop into it: AI for social engineering. AI has been a huge topic. You might think it's going to destroy the world; you might think it'll save it. That's up to you right now. Me, I'm somewhere in the middle. I see everything going on with these chatbots that can answer any question you have, deepfake generators that let you pretend to be anybody, even a movie star, and voice modulators that can make you sound like Will Smith when you're really just in your room, not being Will Smith. So yeah, it's getting kind of crazy, and we're having a harder time telling what's a scam and what just doesn't look right. It's completely changing the social engineering game. For those who need a quick reminder, social engineering is the act of manipulating others into revealing sensitive information or taking an action that compromises their security, their data, or their personal information. Social engineering has always existed, and humans have been manipulating each other for pretty much as long as humans have been around, but AI has supercharged it in ways we've never seen before. I mean, there have been times when you could do a lot of deep recon to kind of convince someone
you're someone that you're not, but to show your face and make it look like you're someone you're not is a whole other level. So let's get into some articles from the last year. I wanted to show you these because you can see how AI is able to scam people more efficiently. Starting with the AI voice deepfake of a US Secretary of State that triggered a global security alert: I wanted to include this because when we think of AI social engineering, we often think only of financial fraud, and that is the case a lot of the time, as we'll get into with some huge financial fraud. But it's also a threat to national security, a threat to diplomacy, things like that; it's a very wide array of what it can touch. The second one is the most interesting to me: an LA woman loses her life savings after a scammer pretended to be Steve Burton. I don't know if you know who Steve Burton is, but she definitely did. She loved him, and he loved her back. Well, that's what she thought. Basically, a scammer using hyperrealistic facial deepfakes and voice modulators fully convinced this woman he was Steve Burton and manipulated her into selling her own home and handing over her life savings. Now, it's one thing to love Steve Burton, but would you give your life savings to Steve Burton? That's another question, but it was definitely huge in the news. And catfishing has always been around; I don't know if you used to watch that show, but I was a big fan when I was younger. Back then you'd just throw up a fake Facebook photo, steal some pictures from someone else's account, and avoid calls, or at least avoid FaceTime calls, and you could kind of get through it. But with AI being able to put that voice and that face on a live call, well, we're a little surprised she gave all her money anyway, but are we that surprised she believed it was him when she was seeing someone who looks and talks exactly like him, talking directly to her? In the last article, a CEO was convinced on the phone by what he thought was his boss to send $243,000 to a Hungarian supplier. The mindset behind the CEO was: okay, my boss is German, this guy sounds German, sure, I'll send the $243,000. That sounds like what I'm supposed to do. This is someone I trust; this is
someone with authority over me, which we'll dive into later. So yeah, let's do it. And then he realized: wait, my boss's call is coming from an Austrian number. Let me spin back and double-check that one. Do you think the money was still there? The money was gone, in the wind: multiple accounts, multiple countries. It was out of here, just like the scammers. So, diving into the psychology of deception: the technical process. This is the behind-the-scenes. It all starts with the data: the public posts, voice clips, and profile pictures that feed the system, basically any digital footprint you have lying around. That moves on to models: the AI tools use that data to learn tone, emotion, and context. Then you create your message. This is where the big part of the scam comes in, because these messages aren't hard to create with prompts. Unlucky for us, lucky for the scammers, the AI doesn't need to think and doesn't stutter; everything is at its digital fingertips. If you've ever used ChatGPT or anything like it, it takes seconds, literally; you don't even have to wait a minute. So if you have the right prompt in there, you can convince a lot of people. Lastly, it goes into manipulation: emotional triggers that override our logic, which we'll also dive into a bit later, creating a sense of urgency, fear, and trust. That's where they get us. So, AI phishing before and after. This is a real email I got in undergrad, right here. Everyone in my undergrad cybersecurity class got it, and it was supposed to be from my professor, Willie Sanders. Unfortunately, it looks nothing like an email from Professor Willie Sanders. The heading says "Professor Willie Sanders," which is not where names go, honestly. The sender name is "#treatasurgent" with an underscore. I
mean, in my opinion, they might as well have just named it "Scam Likely." We get it. The profile picture is a pink "AU," which doesn't really give my professor's vibe, and those aren't his initials, so I'm not sure what they were going for. This screenshot is from my friend Sitta's phone, because I asked her, "Did you get this?" It just said, "Sitta, available. Reconfirm your private cell number now!!" And I was thinking: reconfirm? Why would our professor have our numbers in the first place? What are we reconfirming? But it's the urgency, the "treat as urgent now." And then I thought, okay, let me just put my phone away, because I'm tired of scams, and I didn't give it a second thought. Then I went to class the next day and asked my project partner, "Hey, we just did a whole coding lab together. Did you see this crazy email?" She said, "Yes, I bought $3,000 worth of iTunes gift cards and gave them to the professor, like he said he needed." And I'm thinking: whoa, am I in a dream? You gave $3,000 of iTunes gift cards to our professor? He has a doctorate; we were in undergrad. I was like, "You have $3,000? Where are you getting this from?"
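Side note: the tells in that email, the all-caps "treat as urgent" subject, a sender name that has nothing to do with the professor it claims to be, a gift-card ask, are exactly the kinds of signals you can check mechanically before your emotions kick in. Here's a minimal sketch; the keyword lists and the scoring are my own made-up illustration, not how any real mail filter actually works:

```python
import re

# Hypothetical keyword lists for illustration; real filters use richer models.
URGENCY_WORDS = {"urgent", "immediately", "now", "asap", "reconfirm"}
PAYMENT_WORDS = {"gift card", "wire transfer", "itunes", "payment information"}

def red_flag_score(display_name: str, sender_addr: str, subject: str, body: str) -> int:
    """Count simple social-engineering red flags in an email."""
    text = f"{subject} {body}".lower()
    score = 0
    score += sum(1 for w in URGENCY_WORDS if w in text)  # urgency pressure
    score += sum(1 for w in PAYMENT_WORDS if w in text)  # money or payment ask
    # Display name claims an identity the sender address doesn't back up.
    tokens = re.findall(r"[a-z]+", display_name.lower())
    if tokens and not any(t in sender_addr.lower() for t in tokens):
        score += 2
    return score
```

Anything scoring more than a point or two is your cue to slow down and verify out of band before replying.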
And then she gave it to him! What was she thinking? She said, "He said he needed it. He said it was urgent. You want to see it?" I said, "Oh, I saw it. Trust me, I saw it." So, as poorly made as this was, someone getting a cybersecurity degree right alongside me fell for it. Social engineering, not at its finest, but definitely social engineering. Now, moving forward to a post-AI world, where you can make anything look as realistic as you want: here on the right side, we see a generative AI cloning tool being used to make a nearly identical copy of an email this person had previously received, with every link swapped for a malicious one. So it's like, okay, I don't see anything too off with it. Maybe if I realized I didn't have a FedEx package waiting for me, I'd think something was a little weird, but in general it doesn't look iffy; it doesn't look nearly as suspicious as "treat as urgent." So, why do we keep falling for it? Social engineering works because it targets our instincts, not our software. People are the primary entry point, and AI is just being used as
the newest method of exploitation. Social engineering has always been around; this is just a supercharged version. So, on to the human process, which is my favorite part: what's going on inside our minds when we're dealing with these attacks. First, authority: this is coming from someone I usually trust, someone I usually take orders from. Maybe a government agency, maybe a boss, someone we don't tend to question. It looks official, so we just do what they say. Second, urgency: I have to act fast. It says "urgent," it says "now," with exclamation points, so I feel like I have to. That's what they use to get into our heads, so we do what they want and lose out on our personal data. Third, familiarity: I know this person. A lot of the time, scammers pretend to be your mother, brother, sister, or uncle, and because you already trust that person, you automatically extend that trust to the person pretending to be them. Unfortunately, they use familiarity against us. Fourth, scarcity: if you feel you have limited time and a big risk of loss if you don't immediately do what this scammer (whom you don't yet know is a scammer) is telling you to do, it can be really scary. You think, okay, I just need to hurry up and do it now, and you end up in a worse position than they found you in. Lastly, emotion: this feels personal. Sometimes they show you an AI-generated sick dog, and that just gets us. It's unfortunate, but they know what we respond to. So, when logic goes out the window. This is one of my favorite slides. How many of you have noticed that as your strength of emotions goes up, your
ability to think logically goes down? Yeah, I'm seeing some hands. Exactly. It's a common understanding: when you're thinking rashly, you're not thinking as logically, and you're more likely to do something that's not in your best interest. So here are some popular statements used in cyber threats, lines they hope lead you to rely fully on emotion and not think twice. One: "Update needed: verify your payment information." That can trigger responsibility, authority, maybe even anxiety, because you think, wait, they don't have my payment information, and I need this product, so let me put it in again. Next thing you know, someone else has your payment information. Two: "You've been hacked. Please change your password." That's a scary one they like to give us a lot, and they keep doing it because it keeps working; people keep falling for it. You feel fear, you feel urgency, and you think, okay, I care about my safety, there must be something really important in here, here's my old password, here's my new one. And you just gave away your password. So try not to fall for those. Three: "Your message wasn't delivered." That doesn't create as much fear or urgency, but it creates a lot of curiosity and doubt: why wasn't my message delivered? Who was I supposed to be sending it to? Let me just send it again. And then, who did you just send that to? It leads you down the wrong path. Four: "Your mailbox is almost full. Increase capacity." I get this one a lot; I've had my email since I was about ten, so it's constantly at 97%, and I don't think it's ever been below that at this point. My solution: make a new email. Don't give them any money. Even if it's the actual Google, don't give them money; just make a new email. Luckily, I haven't fallen for that one yet. I know some people think, well, I applied to a job and they're going to respond to this email account, so I need it open, but I swear, don't fall for it. So, I know AI gets talked about as good or evil, world-crushing or amazing, but I don't think it's bad by any means. I just think we need to learn how to use it for us, because these attackers will keep using it against us unless we actually do something about it. So, I created this defense framework to
help teach people how to protect themselves from falling for social engineering of any kind. I've been going around to schools and colleges teaching this defense framework, and especially with the rise of AI-supercharged social engineering, I think it's handier than ever. It starts with recognizing when a scam is in front of you, followed by verifying that these are the actual people and companies you're talking to and working with. And lastly, defend: luckily, we now have a lot of cybersecurity tools, and more importantly AI tools, which I'll get into, that we can use to defend ourselves. So, diving into recognize: spotting the setup. Notice the emotional triggers, like the ones in the email I showed you from my poor, poor friend who lost the $3,000: the "treat as urgent." Is this making me feel urgency? Am I feeling guilt? Is there a reward? Is there some kind of emotional pull happening? If you notice, hey, this is really relying on me to think emotionally, that can help you recognize it. Second, ask yourself: why me? Why now? Is this specifically targeted at me, or is someone just throwing things out to see who bites the bait, and I'm just the person with the $3,000 of iTunes gift cards today? Third, AI-crafted messages often feel very polished and too personal; they can feel too perfect. So if something feels too close to home to be real, or too good to be true, that might be the beginning of a setup, and it's worth looking into. Second: verify. This one is the most important, in my opinion. Test first, then trust. You have to verify, verify, verify, and if I could say it a fourth time, I would. Basically, if somebody's asking you to confirm your payment method over the phone, and you're thinking, I don't even know this number.
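That "test first, then trust" habit can even be partly automated. One cheap check before trusting a sender: compare the domain that contacted you against the domain you already know is real, and treat anything merely similar as more suspicious than something completely different, because lookalikes like "paypa1.com" pass the eyeball test. A tiny sketch, with made-up example domains:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def verdict(sender_domain: str, trusted_domain: str) -> str:
    """Exact match is trusted; suspiciously close is flagged as a lookalike."""
    if sender_domain == trusted_domain:
        return "trusted"
    if edit_distance(sender_domain, trusted_domain) <= 2:
        return "lookalike: verify out of band before clicking"
    return "unknown sender"
```

Notice the design choice: being close is treated as worse than being far, which is the opposite of how our pattern-matching brains read it, and that's the point.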
Like, maybe start by calling the actual number, going online to the official site, or going to the store in person. I know some of the older generations don't even want to go to the site; they'll go straight to a person inside the store and ask whether this is something the company actually does. And it works; it's a second factor of verification, which is super helpful. Second, hover-check, or pause, before clicking or transferring anything. A lot of the time, these scammers are hoping you don't notice malicious links, because in plain text a link can look nice and safe. Do not trust them. Take your time. If there's anything I've learned, it's take your time, because these scammers are relying on you not taking your time, not double-checking: just clicking, clicking, clicking, boom, virus, boom, where's my bank account? Third, slow down, very much slow down. These AI-assisted scams succeed most when the scammers are betting you won't double-check. If you verify, verify, verify, like I'm trying to beat into everyone's heads, you're less likely to hand over your passwords or bank accounts, even by accident. When you test something before automatically trusting it, it keeps you from walking into things blind. And that definitely
helps. Okay, third: defend. This is my favorite part, as a cyber person. It's about building smart systems. Thankfully, there are a few ways to defend yourself from attackers, and it can start with building smart systems into your workflows by adding things like friction or dual approvals across different devices. At my job, sorry, I said "my job," I meant at the Smithsonian, so you trust it a little more: at the Smithsonian, there are definitely multiple devices you have to use in order to sign into your account. It's not just a one-and-done, and that's about the safest way. If you're using things in your personal life that you want to keep safe, take that hint from the big companies that aren't just letting you "password123" your way in. So yeah: multi-factor authentication, and alerts between vital actions, all add a layer of friction to your risky workflows and ensure attackers don't get in easy-peasy, no hassle; that's what we're avoiding here. Second, report and share suspicious activity. Let's say you came to my talk (thank you!) and your friends didn't; they're missing out on pretty important information. You learned at BSides Seattle, okay, here's a scam that's going around lately, but the people in your community, maybe your family members, aren't aware of it. You can be the person who spreads that knowledge and potentially saves somebody else from that harm. I know we all like to think we're the smartest cyber hackers, but cyber is definitely not a lone-wolf game; it helps when you're working with other people and sharing that knowledge. Honestly, that's the best way to protect each other as much as we can,
protect our systems, and protect our family members and friends who don't know as much about it. Lastly, use AI to flag anomalies. AI is actually pretty good at flagging unusual tone, timing, or volume if you just ask it to; it's all prompting. So if you make sure you're using the AI, and not letting the AI use you (because it will use you), you can protect yourself in a way I'll show you on the next slide: using AI responsibly for defense, not deception, because we are blue team. A lot of the companies that talk about making AI responsible talk about it in the context of building an ethically moral, helpful agent. Are they actually succeeding at that? I'm not sure; if you've seen the Tesla AI agent saying the most heinous things, I'm not sure these companies are succeeding on the ethics part, but they are definitely making the chatbots. Nobody really talks about the way an everyday user can use AI to benefit and protect themselves. I mean, social engineering isn't just attacking Microsoft and the big companies; it's coming to your doorstep. It's texting you that you forgot to pay a toll, that you crossed I-95 the other day and owe $300. Those texts are coming to your phone, not just Microsoft's, so we need to learn how to protect ourselves. Luckily, big companies like Microsoft have been putting a lot of money into AI, as you can tell by the layoffs, and the benefit we get out of it, at least, is that there are a lot more AI tools out there for us to protect ourselves with. And that's what I'll get
into here. We can start by using AI-powered spam and phishing filters, like Gmail's AI filters; they analyze tone, urgency, and intent, and can give away when something's a bit scammy. Second, use AI language models to recheck strange messages. I've done this before: I'll paste a message into ChatGPT and ask, "Is this a scam? Does this seem like something these people would actually say?" And it lets me know, "Yeah, this doesn't look great, honestly." Third, use password managers and MFA tools that integrate AI for threat detection. Some of these password managers can say, "Hey, you're signing in from Peru, which is not where you usually sign in from; you're in Seattle right now." That's one we can use for us. And as much as there are voice and deepfake modulators out there, there are also AI-run voice-clone and deepfake detectors that can confirm whether audio is fraudulent. I've tested one: I showed it fake audio and it flagged it, kind of like when you put an essay into one of the AI checkers colleges use and it shows "this is 99% AI," and you're like, oh dang, that was my whole essay. That will definitely help you. Lastly, try AI-based cybersecurity assistants. Microsoft has Copilot, which we've already integrated where I work, and Gmail has its AI filters; they can help automate threat detection as well. So, some key takeaways. AI isn't really the big enemy; I think complacency is. We need to keep our natural curiosity and our skepticism, and those can be key in protecting us from these ever-evolving attacks. Second, people are still the strongest link, as much as we can be the weakest if we don't focus on it. AI manipulation is best handled by human judgment, teamwork, and awareness, and those are things we can all work on
improving within ourselves; they're skills we can rely on when it comes to not getting scammed. Third and lastly: recognize, verify, defend. If you can remember my framework, I tried to keep it simple and easy, but it works. You have to understand how deception operates, and you need to remember to slow down and use these tools, both human and AI-driven, to protect yourself. So yeah, that's my slideshow. Thank you all for coming to my talk. Here's my LinkedIn if you want to follow me; I appreciate you all so much. And here's that QR code they mentioned for feedback.
>> Yes, they're not scams. I promise they're not malicious.