
PEBKAC Rebooted: A Hacker’s Guide to People‑Patching in 90 Days

BSides Las Vegas 2025 · 24:32 · Published 2025-12
About this talk
Examines how cognitive science and large-scale phishing data reveal four predictable human biases—optimism bias, Dunning-Kruger effect, technology trust, and system-one thinking—that attackers exploit. Demonstrates that neuroscience-based training using the SCARF model (status, certainty, autonomy, relatedness, fairness) can reduce phishing clicks by 8× and triple reporting rates, shifting security from awareness to behavioral motivation.
Original YouTube description
Identifier: ZCTLHZ

Description:
- "PEBKAC Rebooted: A Hacker's Guide to People‑Patching in 90 Days"
- Uses cognitive science and large datasets from phishing simulations and perceptual surveys.
- Identifies four predictable human biases exploited by attackers: optimism bias, Dunning-Kruger, and technology bias.
- Shows statistical impacts (e.g., tech bias users click 140% more often).
- Advocates continuous "people-patching" with neuroscience-based feedback loops.
- Demonstrates methods to reduce phishing clicks 8× and triple reporting rates.
- Provides ROI justification for shifting from awareness to motivation-based training.

Location & Metadata:
- Location: Ground Truth, Siena
- Date/Time: Monday, 10:00–10:20
- Speaker: David Shipley
Transcript [en]

Good morning and welcome to BSides Las Vegas Ground Truth. This talk is PEBKAC Rebooted, given by David Shipley. A few announcements before we begin. We would like to thank our sponsors, especially our diamond sponsors Adobe and Aikido, and our gold sponsors Formal and Drop Zone AI. It's their support, along with our other sponsors, donors, and volunteers, that makes this event possible. These talks are being streamed live, and as a courtesy to our speakers and audience, we ask that you check to make sure your cell phones are set to silent. Additionally, if you have a question, I will have an audience microphone so that the stream can hear you. Again, I will have that microphone, so you know where it is.

With that, let's get started. Please welcome David Shipley. >> Thank you so much. And let's get started. So hello, and thank you so much to BSides Las Vegas for this amazing opportunity. I'm excited today to share insights that I and my fantastic team at Beauceron have generated over the last eight years through our work on security awareness, behavior, and culture. So let's get started. The first thing I want to talk about is actually what the word cyber means, because it's important. It comes, of course, from the field of cybernetics, which is the study of control and communication in the animal and the machine. But the word itself comes from the Greek word kybernetes, which literally means the helmsman on an

ancient Greek ship, and it was chosen deliberately, with precision, as it illustrated three critical concepts: people, technology, and control. Now, the cyberneticians didn't have an order of operations for these complex relationships. They just wanted to study them, and in particular the feedback loops around governance and control. But for the past 30 years, cybersecurity has been dominated by approaches that use technology to control or mitigate human risks. And the reason fundamentally comes from a line that I think we've all probably heard way too much: people are the weakest link. But what if we flip that script? What if, instead of thinking about people as the weakest link, or PEBKAC, the problem exists between the keyboard and

the chair, which is a clever way of saying people are stupid, we thought of people as amazing, but in need of being better understood and empowered? The problem with thinking that everyone in an organization is stupid is that, if it were true, cyber would not be your organization's biggest problem; being full of stupid people would be. But they're not stupid. They're just human. And our human vulnerabilities, as well as our untapped defensive potential, are rooted in biology and in key lessons from neuroscience, psychology, and behavioral economics. It all comes down to how our brains were designed to do things like conserve energy, process our emotions, and make decisions quickly when it mattered

most. When defenders understand that landscape and actively design programs around it, users can evolve from this perception of liabilities into high-fidelity sensors and enabled last-line protectors. And while our humanity can be exploited, it can also be leveraged. What do I mean by that? I want to first describe where I'm getting the data on which I'm basing these observations. Most published, peer-reviewed academic studies on the human side of cybersecurity, on topics like security awareness, phishing simulations, and more, are typically based on small sample sizes, and usually within university environments. A few studies have been done at enterprise scale with tens of thousands of people, but typically they're done within the same

organization or same industry. The insights I'm going to share with you are drawn from more than 1,300 organizations across over 20 industries. They include people from around the world. As we are a Canadian organization, the organizations tend to skew Canadian, but we have organizations in the data set from the United States, Europe, and other parts of the world as well. And to my knowledge, this data set is one of the few that combine learning outcomes, phishing simulation results, real phish-reporting data, and qualitative data from how end-user surveys are answered, which provides insight into how attitudes, not just knowledge, shape behavior. So, one statement I'm going to make right off the bat that I didn't think

would be controversial, but it's interesting to debate sometimes: security awareness and anti-phishing simulation training can work if you do it well and if you understand the limitations. And there are lots of ways to do it poorly. To be clear, working on the human side of cybersecurity will not get you to zero clicks, ever. That's impossible. In fact, some of the data here is drawn from work with one of our partners at the University of Montreal, Michael Joyce, who is pursuing a PhD and drew upon some of our data to look at the probability someone would click, not the click rate, but the probability immediately after training. And it's one of the first

studies of its kind. And what Michael found was that, among all the people who will click, which is between 3 and 5% of users, there's a 3.5% probability it'll happen immediately, the same day they took training. Accidents happen. We are human. 90 days out, there's a 15% probability. 360 days out, it's a 95% probability, which is to say that studies finding annual training doesn't move the needle on phishing behavior are correct. But we can actually now see there's a risk appetite and maybe an optimal-frequency opportunity. And so what we've found is that 90-day interventions, and these don't have to be all computer-based training; in fact, if you can scale and do other

types of interventions, these can also help, but they increase and maintain vigilance. Frequency, consistency, adaptive difficulty, and making the phishing simulations something people can win at, not just lose, are all critical. And absolutely, people need to know that you do things like phishing simulations, that you do them regularly, and that you do them fairly. And how education is delivered when someone makes a mistake and clicks matters massively. Training delivered via a post-click landing page doesn't work. It's ineffective. Studies have shown that, and our data also supports it. But why doesn't it work? Well, consider the median time spent on a post-click phishing landing page, a deployment method many use but not the best one. People actually

only stay on there for 11 seconds. That's the median time. The mean is about 14 seconds. Only 10% of people stick around for 30 seconds or more. So the studies that say this method of anti-phishing education isn't effective, well, they're right, because people who don't read, watch, or engage don't learn. No kidding. But does it mean that all phishing simulation approaches and all post-click education delivery are the same? Not even close. It's interesting to note that even the presence of phishing simulations helps: programs that accidentally paused simulations for three months, and we saw this in our data set, actually saw their clicks double, evidence that vigilance is a perishable skill needing continuous reinforcement.
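The decay figures cited earlier (a 3.5% chance of a click the same day as training, 15% by 90 days, 95% by 360 days) can be turned into a rough vigilance-decay curve. This is an illustrative sketch only: the three data points come from the talk, but the linear interpolation between them is my assumption, not the underlying study's model.

```python
# Sketch: cumulative click probability as a function of days since
# last training, linearly interpolated between the data points cited
# in the talk. The interpolation is illustrative, not the study's fit.
from bisect import bisect_right

DAYS = [0, 90, 360]          # days since last training
CUM_P = [0.035, 0.15, 0.95]  # cumulative probability of a click

def click_probability(days_since_training: float) -> float:
    """Linearly interpolate the cumulative click probability."""
    if days_since_training <= DAYS[0]:
        return CUM_P[0]
    if days_since_training >= DAYS[-1]:
        return CUM_P[-1]
    i = bisect_right(DAYS, days_since_training)
    d0, d1 = DAYS[i - 1], DAYS[i]
    p0, p1 = CUM_P[i - 1], CUM_P[i]
    return p0 + (p1 - p0) * (days_since_training - d0) / (d1 - d0)

for d in (0, 45, 90, 180, 360):
    print(f"day {d:>3}: {click_probability(d):.1%}")
```

Even under this crude model, retraining every 90 days caps the per-user click probability near 15%, while an annual cycle lets it climb toward 95%, which is the talk's argument for a 90-day cadence.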

So what do I mean by doing it well? What we've seen when organizations do it well is that click rates can drop from 35% to less than 5% in 90 days, and further gains happen within the first 365 days. And reporting surges: reporting volumes can climb two and a half to three times in the first 90 days, and up to 285% in the first year. What's important to know about reporting is that the quality of the feedback people get when they report things, not just simulations, matters massively. You gain new insights when you do it well. For example, one key learning is that we've actually been able to see what's getting by email filters by looking at what people are

reporting. And we've actually seen email filters from a variety of different providers have a leakage rate ranging from 3.8% to 10%. So almost 1 in 10 phishes can get by. Great programs don't just raise awareness, they raise vigilance. Now, what our data, alongside the work Michael has done at the Human-Centric Cybersecurity Partnership, shows is that raising-awareness efforts do work. Security Awareness Month actually works. But too much information, too often, can actually cause security fatigue. So you can over-phish. When our data was sorted by the frequency of phishing, we found that monthly phishing simulations offered the lowest click rate, so, yay, and the highest report rate, at 25%. But overdo phishing, and

you see a drop on the positive side, the reporting rate. Now, the next data set is something I'm really excited to share with you. It's the first time we're actually unveiling this. We created a survey for people after they clicked on phishing emails. We deployed it in mid-December 2024, and this is the data between December 2024 and June 30, 2025. Now, this is 4,594 people from 211 different organizations. And it's of course people who clicked, which is a small percentage of an overall population, but it's still a really interesting result, and one of the largest of its kind. And one of the questions we asked in the survey was, "Do you remember why you interacted with

the email, clicked the link, opened the attachment, or scanned the QR code?", and to choose the answer that best described the why. Now, nearly half clicked because of something we call mimicry. Mimicry consists of two answers: those who said they clicked because it looked legitimate, and those who said they were expecting something similar. These were very, very close: 25.01% said they thought it was legitimate; 24.75% thought it was something they were expecting. And mimicry is a successful predator strategy in nature. So this is where understanding the role of biology comes in: making sure you don't look threatening, that you look trustworthy. Now, what I found fascinating was I had originally thought that fear

was a bigger player in the reasons people clicked. But only 5% of people answered the question with, "I was afraid I'd get in more trouble for not doing the thing than for doing the thing." When we dove into it a little deeper, though, while they're the smallest respondent group to this particular survey, their click rate over all time is over 12%. And their post-click report rate, which is really important because it's a measure of psychological safety (I screwed up; am I going to tell somebody about it?), was 5%. The average across all of our user base, and from other studies like the Verizon Data Breach Investigations

Report 2024, is 10%. So they were at half that, which is interesting. Those who clicked because they thought it was something they were expecting had a 15% post-click report rate. Now lastly, and I find this very interesting, 21% of people don't even remember doing it, which we think is evidence for System 1 thinking. And we'll talk about that a little more. Now, what about some of the insights we drew from our survey questions? We found two very interesting psychological biases that we want to talk about. The first is optimism bias: people are 37% more likely to fall for a phish if

they don't think they're a target. So optimism bias is the natural tendency for all humans to think something bad is more likely to happen to somebody else, not to me. And so that is fascinating. But we found something we think is even more powerful, and we call it technology trust. When people believe the security tools provided by their organization completely protect them, when they strongly agree with that statement on a five-point Likert scale, compared to those who strongly disagree, they have a 140% higher average click rate, which we think is fascinating. What's interesting is that the number of people answering agree or strongly agree is up 25% since 2021. And as of 2025, one in three people in an

organization believe that. So when we talk about what we cover in security awareness, balancing people's overfaith in technology, particularly the hype around AI, is critical. And that's not the only psychological element we believe we found evidence for. The next one is the Dunning-Kruger effect. For those familiar, Dunning-Kruger refers to the process whereby people with a little bit of knowledge tend to vastly overestimate their skill in a particular area. Ironically, experts are on the other side of that: they tend to underestimate their skill. What we actually found was that people with 35 to 45 minutes of training per year outperformed those who did more training, because we believe that you can make

people overconfident about security. And as evidenced by the earlier chart, with its 3.5% probability of a click right after training, people can sometimes feel invincible. I just did my training. I'm good. I'm immune. No, you're not. So, taken all together, these three can form a human vulnerability chain: from I'm not a target, to the tools have got me anyway, to I've been trained, so this isn't going to happen. And so we need to think about how our security awareness moves beyond knowledge mobilization and talking about phish cues, to helping people understand how their brains work, that they are susceptible, and that technology tools are not infallible. Now, I want to talk for a minute a little

bit about cognitive load, or cognitive overload. We're all busy juggling emails, calls, and deadlines, and scammers often choose their timing wisely to take advantage of that. You know, they might send a business email compromise email late on a Friday when finance staff are wrapping up for the week, or during a peak period like quarter end. The human brain can only focus on so much at once, and attackers aim to slip cons in during those distracted moments. And here's some interesting science. The human brain is only 2% of our total body mass, but consumes 20% of all our energy, even when it's just chilling. Start thinking intensely, and that calorie bill, well, it skyrockets like a misconfigured cloud environment.

We developed this amazing battery-saving, low-energy, automatic thinking mode to survive. It's awesome. We didn't always have food readily available, so this was important. And this is the biological basis for what is often referred to as System 1 thinking. System 1 is heuristic, fast, low-energy, sometimes emotional, and great for many things. System 2 is slow, deliberate, logical, and calorie-intensive. So when we think about why scams succeed, they don't succeed because people on the whole are stupid. Far from it. They succeed because the human brain is a marvel of evolution, but its survival-based improvements create vulnerabilities. And this is why some forms of anti-phishing education, we believe, don't

hit the mark. If four out of ten simulation clicks aren't just due to a lack of knowledge but to System 1 thinking, either because people weren't even engaging that part of the brain and don't remember it, or because in hindsight they go back and say, yes, indeed, I was rushing, then education needs to shift to help people understand how they work. Don't just talk about all the signs of a phish and what they missed spotting if their brain wasn't even engaged at that level. Now, I want to talk very briefly about something that was quite popular last year. It was a Google blog post criticizing phishing simulations. And I

call this the Helen Lovejoy argument. For Simpsons fans, there's a moment in a particular community meeting where Helen Lovejoy, the reverend's wife, cries, "Won't somebody please think of the children?" Right? Think of the people: these horrible phishing simulations, we can't possibly do this, we're going to hurt people's feelings, we're going to erode trust between IT and the organization. And there were five major claims made. The first: there's no evidence that it results in fewer incidents. Actually, there are lots of peer-reviewed academic studies and lots of industry data, and our data shows that doing simulations well does have an impact. Does it get to zero? No. There's a 3.5% probability immediately

after training that someone will click. But does it reduce exposure? Yes. Another criticism was, "Well, it bypasses email filters." You're not testing the filter; you're helping people understand their vulnerability, you're helping raise vigilance. There's another criticism that it causes an increased load on already burdened IR and SOC teams. Folks, if there was ever a case for automation and AI triage of responses, this is it. And yes, we've done it, and it worked really, really well. Next: employees are upset. This is my favorite, because a study dropped in December 2024 that I found while preparing for this talk, and I want to quote it because it's hilarious. Quote, "We could not find evidence that employees feel attacked by their organization as

previous studies suspected. On the contrary, we found a majority, 86.9%, have a positive or very positive attitude towards phishing simulations." And that's from the December 2024 research paper, "Employees' Attitudes Towards Phishing Simulations." It's like when a child reaches onto the hot hob, which I believe is British for stove. So it's just interesting on that front. Our data shows 70% of people say they learned from the phishing experience in that survey I mentioned earlier. But how do we actually use neuroscience and a model to help us take advantage of some of the amazing features built into the brain? Dr. David Rock's SCARF model is a framework that describes five key domains of human social experience

that drive behavior: status, certainty, autonomy, relatedness, and fairness. These domains are critical to understanding how people react in social situations, particularly in the workplace. And it's also a great model for creating conditions where employees can reduce their risk and thrive in a digital workplace. So, status is our relative importance to others. Humans are wired to care about this because it mattered to us: if you were out of the in-group back in the day, you were going to die. A perceived drop in status, like public criticism, triggers a threat response in the brain. Conversely, recognition and praise can activate reward circuits. So, in our work, here's how we leverage status. We actually gave people

a score they could see. It got better when they did good things. It closed a feedback loop. It became something people could celebrate. Certainty: we hate uncertainty. Our brains ramp up all the different survival scenarios we might run, which burns calories, which makes us cranky. Give people something simple, a metric they can understand, navigate, and orient to. It doesn't have to be perfect; what we did works the same way as a credit score: Where am I? How do I improve? Autonomy: our sense of control over events. You know, when people feel micromanaged, the prefrontal cortex is actually suppressed, and they make worse and worse decisions. So give people more choice in the education they

learn. If we know the optimal frequency is 90 days, give them a chance to learn something about home or work in alternation. Don't just force-feed all the material you want. Relatedness: people want to feel connected. So model positive behavior. Show, you know, where people successfully caught and reported phishes, or proactively identified a security problem, and show that to others so that they can feel connected to it. And fairness: our perception of fair exchanges. Perceived unfair treatment activates a strong threat response. Phishing simulations where people aren't told that they happen, where they can't win but can only lose (click) or draw (don't click), and which aren't balanced in terms of difficulty, are fundamentally

unfair. So, in the approach we've actually used, people get positive points for spotting and reporting simulations, even after they click, because that post-click report rate is so critical, as well as for real phishes. We need to move beyond walls of shame. I can't tell you the number of organizations where I've heard IT admins have pinned up photos of the always-clickers, or the people they believe are the problem. Get beyond that. Create walls of fame. Show people doing it right; show positive examples. People who believe they make a difference for the organization click 45% less. And by the way, close those feedback loops. People who are given

easy-to-use reporting tools and meaningful feedback when they report security concerns actually report 55% more phishes. So I want to end with this. I talked about PEBKAC rebooted. It's not the problem exists between the keyboard and the chair; it's the partner exists between the keyboard and the chair. Spot and quantify cognitive biases and attitudes, and build that into your messaging. Evolve from once-a-year awareness to continuous, reward-driven motivational loops. Go beyond click rate and report rate when proving ROI to executives: talk about post-click report rate, talk about reporter accuracy, and talk about filter bypass metrics. Use proven neuroscience-based engagement models to build a positive security culture. Cyber attackers exploit the brain. Defenders have to as well. And

what I hoped to do with this talk was show that when organizations align ideas from neuroscience, psychology, behavioral data, and more with thoughtful programs, people stop being the weakest link and become the strongest adaptive layer. Thank you so much. [applause]
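The ROI metrics the talk closes with (click rate, report rate, post-click report rate, filter bypass) can be sketched as simple ratios. The event counts below are hypothetical, and treating "real phishes reported by users / (reported + filter-blocked)" as a filter-bypass proxy is my assumption, not the speaker's formula.

```python
# Sketch: the program metrics the talk recommends reporting to
# executives, computed from hypothetical counts. The formulas are a
# plain reading of the metric names used in the talk.
from dataclasses import dataclass

@dataclass
class SimulationStats:
    delivered: int             # simulated phish emails delivered
    clicked: int               # recipients who clicked
    reported: int              # recipients who reported the phish
    reported_after_click: int  # clickers who still reported (psychological safety)

@dataclass
class FilterStats:
    real_phish_reported: int   # real phishes users reported (got past filters)
    real_phish_blocked: int    # real phishes the email filter caught

def click_rate(s: SimulationStats) -> float:
    return s.clicked / s.delivered

def report_rate(s: SimulationStats) -> float:
    return s.reported / s.delivered

def post_click_report_rate(s: SimulationStats) -> float:
    return s.reported_after_click / s.clicked

def filter_bypass_rate(f: FilterStats) -> float:
    # Assumes users report everything the filter misses; real programs
    # would also need to account for unreported misses.
    return f.real_phish_reported / (f.real_phish_reported + f.real_phish_blocked)

sim = SimulationStats(delivered=1000, clicked=40, reported=250, reported_after_click=4)
flt = FilterStats(real_phish_reported=50, real_phish_blocked=950)
print(f"click rate:             {click_rate(sim):.1%}")
print(f"report rate:            {report_rate(sim):.1%}")
print(f"post-click report rate: {post_click_report_rate(sim):.1%}")
print(f"filter bypass rate:     {filter_bypass_rate(flt):.1%}")
```

Tracking post-click report rate separately from plain report rate is the key point: it captures whether people who made a mistake feel safe enough to say so.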

Well, I think we have a question in the front.

Thank you. That was a great talk. You have a lot of stats in there. Do you have links to the references and such, so that we can go back and understand them more fully? >> Yeah, I'm happy to give out the entire slide deck to folks. Just come see me, drop me a note, or I can provide it to the conference as downloadable material. And please, go and challenge the results. Run your own experiments. >> You have links, you've got

Does the slide deck have the links to the papers and such? >> Yeah, I'll make sure that the links are all in there. >> Yeah. No, I understand. That's great. >> Oh, there's a question in the back. >> Oh, we're at time. Happy to chat afterwards. Thank you so much for coming. Thank you so much. [applause]