
Good to go. Thank you, Paul, for that really wonderful introduction. I'm thrilled to be here today with all of you and the BSides Charleston community. Thank you for hosting me. So, a few weeks ago, I was traveling overseas. It was my last night with my colleagues, and I was getting ready to go out to dinner. I received a phone call, looked at my phone, and saw it was from my area code. Normally I don't answer calls I don't recognize, but I thought, okay, I'm on travel; it could be important. I answer the call and the voice on the other line says, "Ma'am, there's been an accident and it involves one of your family
members." My adrenaline shot through the roof. I physically braced myself against the wall, bracing for impact. And then it got weird. They couldn't answer any of my questions. Who are you trying to reach? Which family member? Where are you? All they did was put an incomprehensible, distraught woman's voice on the phone, and I thought, "Okay, this is obviously a scam." But my adrenaline levels were still super high. You know, it's that visceral fear, that fight, flight, or freeze reaction. I hung up the phone and called my mom, who was taking care of my kids. And I got this super cheery response, like, "Hey,
how's the travel?" And even though I knew it was a scam, that this was all fake, it took me a while to come down from that. The collector in me is a bit disappointed that I didn't stay on the line and figure out exactly what they wanted. But the emotional side of me is just like, I don't need this; my emotional response was already kicked into high gear. So, let me set the stage for you a bit. Watching and tracking threat actor behavior has been a theme throughout my career. Early in my career, I spent six years
focusing on Latin American security issues. My job was to help organizations and individuals manage extortion, kidnap for ransom, and preventative security measures. I started my career in the intel community. I'm trained in elicitation tactics, counterintelligence interviews, and source validation. Today I run a team that deals with social engineering on a daily basis, just a different application. These tactics are not new to me. It was a bit of a harsh reality to realize that even when you have that training and that background, they can still get a reaction out of you. I was more vulnerable than I thought I was. It's a tough pill to swallow.
So what is social engineering? It's the use of deception or manipulation of an individual to reach a desired end state. What they use will depend largely on what their goals are. We all know about phishing. We all get these emails, and most enterprises at this point have a pretty good training program in place. I think most of us can easily recognize them. Then there's smishing. We also get these, right? We get them on our phones. Everyone's got a FedEx package that's gone missing, credentials that need to be reset. This is not unusual for any of us. What I'm going to focus on a bit more today is vishing. So using
your voice to manipulate people over the phone. So what are they trying to achieve? It could be fraud, you know, sending money transfers. It could be to gain access: forcing individuals to reveal their credentials, luring victims to click on something to get access to their network. It could also be just to gather information that they could then use for a future operation. This runs on a scale from more opportunistic, obviously, to more sophisticated. It's cheap; you just need time for it. There's no sophisticated malware necessary. There is a lot of trial and error that goes into it. And each time that they fail, each time that they succeed,
they're learning something. They are collectors, too. And this is tradecraft.
So, I thought to myself, what if we had some AI chatbot that could screen all my phone calls? How would it respond? It doesn't have the problems that any human has. It doesn't have a family. It doesn't feel fear. I keep saying they; it's it. It doesn't worry about stock price or reputation. It doesn't travel. The problem is I'm unsure how effective AI would be against some of these trust-based attacks, and it probably wouldn't help when we're already in the latter stages of an attack. Today I'm going to walk you through some of the psychological pressure tactics used by those who employ social engineering. We're going to look at the profile of
the individuals that conduct social engineering attacks. We're going to walk through reported and estimated costs and the impact they can have on organizations. And finally, we're going to touch on the importance of validation as we try to track threat actors operating in underground communities, especially as AI becomes more ubiquitous. There are challenges facing defenders who want to better combat the adversaries exploiting the human factor during an attack.
So how do you exploit humans? Trust and fear produce two different responses in the target. The goal is to destabilize an individual's baseline. If you use fear, you're raising that panic to get them to react in a certain way. You're trying to control the victim by inducing that panic. If you use trust, you're lowering their defenses. They're kind. They're friendly. They're polite. And one of the easiest ways to get someone to do something for you, or to get them to trust you, is to be helpful. I want to help you. Or can you help me? Who doesn't want to be a hero? This is why help desks are perfect
targets. There's inherent trust in something that's meant to be helpful. Elicitation is another tactic that can be used. Elicitation is getting information from a target without them realizing that you're trying to get it; it's very subtle. And there are some really simple ways you can do this. When you speak to somebody, you can make a statement. Don't ask a question, but make a statement that's somewhat relevant to a topic you're interested in their perspective on. If you pose a question, it elevates their response. They think, "Okay, I need to actually come up with an answer for this
because I don't have one ready." If you make a statement, it's pretty benign. There's no reaction that comes into play. They're just like, "Okay, this is a conversation and a topic that I may or may not be interested in." Deliberately providing false information also works. Humans have the tendency to want to correct. Say something incorrect and they'll be like, "Oh, no, no, no. You've got it wrong. This is the actual truth. I'm the expert." Asking for support: can you teach me something? I don't fully understand this; can you explain it to me? People want to teach. It gives them purpose. Makes them feel useful. Or you can just go silent. People do not
like uncomfortable silence. They'll fill that void with anything. And usually the more people talk, the more they reveal about themselves, inadvertently exposing their own vulnerabilities. Fear is fast-paced. When the fear tactic is fast-paced, the threat actor is in control; there's an expectation that the victim will react. Trust is something we're seeing a bit more of. It's harder to identify. They're getting more sophisticated, and we're seeing a lot of trust-based attacks using social engineering to gain initial access, and that, I think, is a key part of what we should be focusing on. More often we're seeing a combination of the two being used during the life cycle of an attack.
Trust-based social engineering to get that initial access, and, as we frequently see with ransomware attacks, imposing that fear afterwards: if you don't do this within a certain amount of time, we'll dox you. We know where your family members are. We'll leak your files. I'll dump everything on the internet for all to see. This will impact your reputation. Granted, at this point there may not be any deception involved, because you might already be in the latter stages of the attack, but it's still the same fear-based tactic to introduce that sense of urgency. One specific component is time. When you're conducting these attacks, I often tell my team that if you control the time, you control the operation.
The advantage the attacker has is that oftentimes they've got more time than you do. And there are a lot of people out there with a lot of time on their hands. Social engineering is low cost. It's a volume tactic as well. Similar to the call that I received, I'm fairly certain I was just a number on a list. They just went on to the next one. They could have been based anywhere, maybe even Southeast Asia.
With more opportunistic cases like this, the goal of the attacker is to keep you hooked. They want to maintain that contact with you. The moment you cut off communication, it gives you time to validate what's actually happening, especially if it's not real. The moment I hung up, I said, "Okay, this is definitely not real," and that's what they don't want. They want to avoid that. So they're going to try to keep you hooked, because that will occupy your mind, occupy your time, and increase their chances of success. They want to keep applying that pressure to the victim. Regaining control during the initial stages of a social engineering attack is
critical, and a lot of that is about the time. With trust-based attacks, they might keep you on the phone for a long time, but they still want to keep you engaged. If you can regain control during those initial stages of a social engineering attack, you can potentially prevent the attack from going any further. Combining social engineering or other human-based attack methods in the latter stages of an attack puts the defender, puts the victim, in a different situation. At that point, you're focused on how you can decrease the cost. You're in crisis-management mode. How do we respond? What's the damage? What's going to be the impact on our organization? You're in damage-control mode.
And at that point, you need to think about your resiliency. Not that you can't do something about it at the time, but that's what makes it a little more challenging to manage. So how do we put a dollar amount on human vulnerability? How much does it cost? This is a bit difficult to measure, because this is just one tool in the arsenal. But we can start looking at different trends. The good news is that as an industry, we're getting better as defenders. We're able to detect malware and other malicious behavior better than ever. IBM reported this past year that the cost of breaches actually fell 9% in
2025 because of faster breach identification and containment. We've got better recovery and resiliency. So that's the good news. The bad news is that threat actors are adapting. They're looking for the weakest link, and they've identified that humans could very well be that weakest link. CrowdStrike produces a spectacular report every year called the GTR, or Global Threat Report. One of the most staggering statistics in this report is that we saw vishing attacks up by 442%. That's not insignificant. Palo Alto also put out a report claiming that 36% of all incident response cases began with social engineering. That's over a third. There are other, broader estimates, and the cost will obviously depend on the victim, the sector, the company or
organizational size, the region. There are some reports that the human element was involved in upwards of 68% of attacks. Either way, it's a significant percentage. We need to close the gap on protecting the human factor in our organizations. I think we're just starting to see how humans are getting exploited at scale. So, we need to understand who's behind these attacks and why they're doing it to better protect our organizations from getting attacked. And a lot of that is teaching people how to respond. So, financial cost: here we've got some estimates. It's around $4.44 million as a global average. The US is higher. But it isn't just a financial cost.
There's also an operational cost, depending on your sector. Manufacturing, for example, could have a massive operational cost from a ransomware attack. Opportunity cost: when you've got your team focused on recovery, they're not working on something else. Reputation: this can have an impact on stock price. And of course, long-term recovery costs. How long does it take to fully recover? There are some costs involved there, too.
So understanding these players is critical. At CrowdStrike we've got a tagline: you don't have a malware problem, you have an adversary problem. We've always had a focus on the individuals behind these attacks. As an industry, I think we tend to focus on the what and the how of adversaries' attacks. But when it comes to social engineering, it's important to understand the why and the who. Traditionally, we track adversary motivation in three different categories. There's financial gain: give me the money; that's what they want; it's that simple. There's advancing the interests of nation-states. And then there are ideological or political objectives, the activists that are out there. But recently we've also seen a
gamification component come into play. It's also about winning. It's about getting that satisfaction, gaining control over another individual, and then they like to brag about it. And they've won a lot, though less so as we're catching up to them. Also remember that the financial motivation is rarely about the money itself. It's about what that money can get you. Does it increase your social status? Does it give you legitimacy? Do you just want attention? Are they trying to impress somebody?
Some of these threat actors, and a lot of them are operating in different underground communities and messaging applications, care about their personal brand. They're using these different environments to curate an image for themselves. They want to control the perception of who they are. Sometimes they speak to journalists. And somewhat ironically, the environments that they're operating in are based on trust. There are escrow services available. There's arbitration if something goes wrong. There are reputation scores in some of these forums. Reputation is important for a lot of these threat actors. This is a growing threat, and it's probably here to stay for a while. Anybody can go onto these different communities. It can be a forum,
it could be Telegram, and they're often quite open. You can see all of the different criminal services that are available at a relatively low cost. And some of them are helping create playbooks for social engineering attacks. With AI, the delta between opportunistic, low-sophistication actors and more sophisticated attacks is closing. Threat actors can use deepfake technology to better trick the victim. You can create much more believable phishing lures. Language translation is no longer a problem. You can seek out individuals that speak a certain language and have a certain accent from a specific region, because humans are more likely to trust people that sound like them. So more opportunistic actors are going
to have more opportunity to develop their own skills here and to apply these tactics with greater success. With every round of trial and error, they're learning, and they're developing their own tradecraft. There's also a sense of community with these threat actors. One of them is called the Com, and adversaries such as Scattered Spider, which I have featured here, have links to these types of communities. It's a lifestyle for many of them, and there's power in some of the notoriety that they get from being part of these communities, conducting these attacks, and then being fairly open about what they've done. However, this community is fragile.
There's infighting, immaturity, ego. They're also human, and so they're also flawed. And I would guess that for many of them, these communities can fill a social void. There are opposing needs in being part of these communities: for connection and for anonymity. Those of you that have been following along and tracking some of these types of threat actors understand how they fracture, unite, rebuild, and then turn on each other in very public ways. It is chaotic. We can't trust the claims that they make at face value. We can't do that alone. And that's why we need to triage and validate as best as we can.
Threat intel folks are natural skeptics of AI. There's a need for validation and corroboration, which is useful when you work for a company that has telemetry, where you can corroborate some of the claims made by some of these threat actors. But from a threat intel perspective, relying on AI alone, you can't risk hallucinations, and you can't risk providing erroneous information to our customers. You'll lose trust and you'll lose credibility. Looking at OSINT data alone does create a bit of a credibility paradox. We need the volume to train these AI models, but we also need to be able to trust the data that's going into them. They're unstructured and they're
chaotic. It's a world of posting, bots, trolls, conspiracy theories, misogyny, racism, and worse. It is not a pleasant place. The signal-to-noise ratio is low. So there is a risk when you base your analysis on OSINT data alone. Threat actors lie, and AI is not going to be able to easily identify personal, human lies. Relying on what's collected from individuals and what's posted in online communities alone is kind of like having only flour and sugar: a pretty good indication somebody might be planning to bake a cake. But if you're not careful and you don't validate with other information, with other data, you could just create a huge mess and waste a lot of time. I'm sure
everybody in this room has gone down some crazy rabbit holes on a lead. Nothing is more frustrating than when it leads literally nowhere. There's a huge amount of value in data from underground communities, though. It's pretty incredible what we've been able to do with some of the leads that we've gotten from these communities. We've used information from these sources to identify some very credible threats. Sometimes we've been able to identify credible threats before they even hit our customers' environments. That's hugely valuable. But not all information should be treated equally. It needs to be curated. Humans need to operate in these environments to understand the trends, the players, the emerging threats, and the evolving trends in these
communities. You need to track it over time, building an understanding of who they are and how credible they are on an individual basis, but also of some of the TTPs that they claim to be using. We need a feedback loop; that's how you continuously refine. It cycles into a validation process and then helps you focus on what you think or believe is going to be important and have an impact on your organization or your customers.
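That feedback loop can be made concrete. As a purely illustrative sketch (every name, weight, and threshold here is hypothetical, not any real tracking system), per-actor credibility could be updated each time a claim is checked against other data:

```python
from dataclasses import dataclass, field

@dataclass
class ActorProfile:
    """One underground persona tracked over time (hypothetical model)."""
    handle: str
    validated: int = 0   # claims later corroborated (e.g., against telemetry)
    refuted: int = 0     # claims shown to be false or exaggerated
    history: list = field(default_factory=list)

    def record(self, claim: str, corroborated: bool) -> None:
        """Feed each validation result back into the profile."""
        self.history.append((claim, corroborated))
        if corroborated:
            self.validated += 1
        else:
            self.refuted += 1

    @property
    def credibility(self) -> float:
        """Laplace-smoothed track record: starts at 0.5, shifts with evidence."""
        return (self.validated + 1) / (self.validated + self.refuted + 2)

def triage(actors, threshold=0.6):
    """Surface only the personas whose track record earns analyst attention."""
    ranked = sorted(actors, key=lambda a: a.credibility, reverse=True)
    return [a.handle for a in ranked if a.credibility >= threshold]

# Example: one persona with a corroborated history, one without.
broker = ActorProfile("broker_x")
broker.record("claims VPN access to a retailer", corroborated=True)
broker.record("claims a 10M-record database", corroborated=True)
hype = ActorProfile("hype_y")
hype.record("claims an unverified zero-day", corroborated=False)
print(triage([broker, hype]))  # ['broker_x']  (0.75 vs. roughly 0.33)
```

The smoothing choice means a brand-new handle starts at a neutral 0.5, and a score only moves once claims have actually been validated or refuted, which is the point: the humans doing corroboration drive the score, not the volume of posts.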
So, we have a challenge as defenders. And I realize I'm making an assumption that everybody in here is a defender in some capacity. Everyone in here definitely is a collector in some capacity, but AI is also collecting, and our adversaries are also collecting. Social engineering is going to get more sophisticated. The internet is an amazing resource for collecting all sorts of personal information on an individual: social media, leaked PII. You can craft a pretty believable lure targeting somebody. But humans need to stay in this loop. We still need context. We need intuition. But we also need to train people on how to respond to these types of threats.
The focus should be on how to stop the attack early on, how to identify these different tactics that threat actors are using. We should develop our own playbook. We can have a script. We can incorporate AI as a force multiplier.
You know, when I worked on extortion cases in Latin America, we used to tell people to have a keyword for validation. Of course, that was on a very individual basis. Turning on video to validate that the individual you're speaking to is actually them might get a little more complicated as deepfake technology improves, but mostly we need better training to help individuals identify what that cycle looks like and what tactics they're using. And most importantly, as long as our adversaries are human, we should be too. And I would even argue that even when our adversaries are not human, we should still be too.
So, thank you for listening. There's a great lineup ahead of you today. Thanks again to the entire BSides Charleston community for hosting me. I'm thrilled to be here, and I'm going to be here all day. I would love to speak with any of you who are interested in going into a little more depth on some of the topics I discussed today, and I look forward to attending some of the talks. I'm here to help, and I would love to hear from you and see how you can help me, too. So I hope everyone has a great con. Thanks again. [applause]