
Security and Behavioural Economics

BSides Toronto · 2014 · 26:44 · Published 2014-12
About this talk
The session objective is to present the basic concepts of behavioral economics and cognitive biases, and how they affect information security situations. "A cognitive bias is a pattern of deviation in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion" (Wikipedia). The plan is to quickly introduce the concept with two of the most common cognitive biases, anchoring and confirmation bias. The session then quickly moves to some of the cognitive biases that have a strong effect on information security situations. The first cases are situations frequently mentioned in risk-related discussions. The next piece covers not the security practitioner's point of view, but the point of view of the victims of phishing and other social engineering attacks. The 'experiment of the Xerox machine' is used as an example of how cognitive biases can affect their judgement. Finally, the session presents techniques and recommendations for reducing cognitive bias in information security activities. Ideas on how to avoid cognitive biases in the risk management process and how to improve security awareness training and user interfaces (security prompts, notifications) complete the presentation. Several industries have applied behavioral economics to understand and improve decision making. Information security includes several situations where the cognitive biases that are the focus of this talk play a strong role. Applying techniques from the field, such as 'nudging', can help security professionals improve risk assessments, security awareness and many other components of the security arsenal.
Transcript [en]

Hi, guys. It seems I have a few interesting challenges today. The first one is that you're all very happy after seeing someone do static analysis of firmware and find zero-days, and someone else talk about Elasticsearch, and now I'm going to talk about psychology. So please don't be too hard on me; that's the first objective. The next objective is being able to finish on time. Thanks, Charlie. I'm going to skip over who I am and so on; you can find all of that in the usual places. So, straight to behavioral economics. What is this thing? It's the psychology underlying economic decision making. Whenever you're making decisions, you're not always the most rational person 100% of the time. We are not Spock, right? There are a lot of background psychological processes running in our minds that make us, sometimes or most of the time, less than 100% rational. Behavioral economics is basically the study of those situations.

Where did it come from? A few psychologists started studying the situations where people behave in a non-rational way, and they documented and came to understand a lot of them. I added Daniel Kahneman here because he's the most famous one, and most of my material is based on his book. What they found is that our minds normally work with two different systems, which they named, very creatively, System 1 and System 2. Those two systems have different roles in our minds. System 1 is the background service that is always running in our brain. It's aware of the surroundings and checks for things like threats or subtle situations, things we need to keep track of but cannot pay attention to or stay concentrated on all the time. It's a background service keeping track of everything going on around us. People normally call it our unconscious.

Then there's System 2. System 2 is the smart guy. It's the one that takes a specific task, puts it in foreground mode, and keeps working on it in the most rational way, trying to find a solution to the problem. Most of the situations where we behave in a non-rational way happen because of some odd interaction between System 1 and System 2. So there are a few things we need to know. System 1 is always running, as I said before. System 2 is really resource-intensive, so sometimes, if you're tired or you don't have enough sugar in your blood, System 2 will not work properly.

System 2 is also very lazy. So instead of going straight ahead and trying to solve the problem you want to solve, it might just rely on clues coming from System 1, and those clues are not always 100% right.

Why are we talking about psychology? Because one day these psychologists, maybe because they were bored, started reading about economics, and they realized something very interesting about it. A lot of the basic work economists do rests on a very interesting assumption: that people will always behave in the most rational way and will always make decisions that maximize the outcome according to their expectations. Now, these psychologists had been working for years, for decades, finding situations where that's not true. So they asked: are those economists crazy? Why does their most basic work rest on assumptions we know are just not true? They even joked that the papers from the economists are actually referring to a different animal: they're not talking about humans, they're talking about 'econs'. And some of Kahneman's work, including the work he got his Nobel Prize for, is actually about improving economic theory using the knowledge we have about the non-rational behavior of people.

When that non-rational behavior presents itself as a pattern, we call it a cognitive bias. It's a pattern of deviation in judgment: a set of specific, consistent situations in which we will make a non-rational decision. That's hard to explain in the abstract, so let's talk about one of the most common ones, which I think everyone knows about: confirmation bias. This is the situation where you take new facts and new information and interpret them according to your previous beliefs and your accepted theory. It's the same thing that happens when you watch someone discussing the theory of evolution with a creationist, right? No matter how many facts you show the creationist, they're going to interpret those facts as confirming the creationist theory. And if you throw something at them that clearly and evidently disproves their theory, they're going to disregard it as a one-off or an exception. That's something we all do to some degree in a lot of areas, and it's called confirmation bias.

Okay, this is probably one of the easiest biases to grasp, and it's easy to accept, right? We all look at it and say, yes, I probably do that sometimes. Now let's look at another bias that's a little more surprising: anchoring. Anchoring occurs when you're working on an estimation problem. Basically, you need to estimate a number, a quantity, and of course you want to get the result in the most rational way. The thing is, any number you come into contact with while you're doing, or right before you do, an estimation exercise will influence the result of that exercise. What does that mean?

It means that if I approach you with a question like, do you think Gandhi was 114 years old when he died, most of you are probably going to say no. But if I then immediately ask how old you think he was when he died, you're probably going to give me a number that is substantially higher than if I had approached the problem differently. If I had instead come to you and asked, do you think Gandhi was 35 years old when he died, and then asked the same follow-up question, the answer you'd give me in that second situation would probably be a lower number than in the situation where I mentioned 114, right? Please keep in mind that every time I talk about these biases, it's coming from scientific research, from lab experiments that confirm these things actually happen and affect every one of us. There's also a really interesting bias here: we all believe that these biases affect others and not us.

Talking about biases, this is probably my only slide that's hard to read; I didn't have anywhere else to put all of this. There are years of research on cognitive biases, and researchers have found very different situations where we behave in a non-rational way. You're going to find the same concept under different names sometimes, and sometimes biases that are very specific cases of a more generic bias, so you're going to find a lot of information about this. There's a really interesting book with a huge list of biases, called The Art of Thinking Clearly, which I think is a very good source for those who want to understand more about cognitive biases.

A few of the biases here are worth mentioning.

Hindsight bias is that thing where, for anything we try to explain after it happened, it seems really clear why it happened, right? Everyone can clearly explain how the CIA was really dumb for not uncovering the Al-Qaeda plot behind the 9/11 attacks, or any other situation we look at after the fact: you can clearly understand why it happened, but you couldn't have seen it coming before the event. A few other biases, which we'll come back to later, are more directly related to our work in information security. The overconfidence effect: when you give an estimate, for example, of how long it will take you to finish a project or a task, you can be sure you're going to be overconfident. We are normally overconfident when giving estimates about work, effort, or anything related to our ability to do stuff.

What else is interesting here? Planning. The planning fallacy is related to that overconfidence effect. It's about when you're, again, giving estimates, for example for the schedule of a project, even within a group, trying to keep your feet on the ground and give a good estimate of how long the project will take. If you compare the estimates people give for their own project with the estimates they give for the same task or project executed by someone else, they will be completely different. I should also mention alternative blindness. Those are situations when you're deciding, for example, whether or not to apply a security control. It has some benefits, it has some drawbacks, and you're deciding whether to do it or not. Suddenly you have dozens of people deciding whether they want to do it or not, it becomes a huge battle between those two sides, and nobody stops for one minute to think whether there is a third alternative. So in a lot of situations where you're deciding whether you should do something or not, because of the alternative blindness bias you close yourself off to other alternatives and only look at that do-it-or-don't choice.

Let's get a little closer to the situations we deal with every day in security. If I had to choose one subject in our field that is affected by cognitive biases, it's risk assessment. Why is that? Because we need to make decisions about impact and decisions about likelihood, and those kinds of decisions are really subject to cognitive biases. I mentioned the anchoring bias a few minutes ago, right? So imagine you go into a situation where you are primed by a number, and immediately after, you have to estimate the impact of a specific threat. What's going to happen? That estimate is going to be affected by the anchoring bias, right?

And imagine reading a report with very questionable information, like those reports on how much you're going to lose per record, that kind of thing. You're reading the report, you're laughing, thinking their methodology is garbage and the content is garbage, and then you go into an impact assessment and you're going to be affected by it anyway, because you're not conscious of it. That's really hard, right?

Now, the other piece of the risk equation is where things really get tough, because when you're talking about likelihood, you're talking about probabilities, and when dealing with probabilities, we suck. There are a lot of cognitive biases involved in estimating probabilities, and they really mess with our minds. Just think about the work Daniel Kahneman got his Nobel Prize for, which is called prospect theory. It shows that when you're dealing with losses and gains, the way you behave and the way you decide, for options that are sometimes exactly the same in terms of numbers, ends in completely different decisions when you're considering losses compared to when you're considering gains. So we really struggle with estimating probabilities.
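
As an aside, and not part of the talk: a minimal sketch of the prospect-theory value function can make the loss/gain asymmetry concrete. The parameters below are the estimates commonly attributed to Tversky and Kahneman's later work and are used here purely for illustration.

```python
# Illustrative sketch (not from the talk): the prospect-theory value function,
# with the commonly cited parameter estimates (alpha = beta = 0.88, lambda = 2.25).
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   loss_aversion: float = 2.25) -> float:
    """Subjective value of a gain or loss of monetary size x."""
    if x >= 0:
        return x ** alpha                    # gains are valued concavely
    return -loss_aversion * (-x) ** beta     # losses are weighted ~2.25x more

# The same $1,000 swing feels very different depending on its sign:
print(prospect_value(1000))    # roughly 437 "value units" for a $1,000 gain
print(prospect_value(-1000))   # roughly -982 for a $1,000 loss
```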

Talking about estimating probabilities for risk, how many of you have heard of the availability bias? It's also one of the most famous ones. Availability bias is the one where things that happened more recently, and are therefore fresher in your mind, appear to have higher probabilities than other things. The perfect example: you're planning a trip. You can drive or you can fly, and the day before, there is a plane accident, all over the news. When you're deciding whether to take the car or fly, you're going to look at the probabilities and say, wow, it's really risky to fly, I'd rather drive those 800 kilometers because it's probably safer to drive than to fly the same distance. That's absurd, right? We know the number of things that can go wrong when driving is far higher than when flying. But one day after a plane accident, you can be sure you probably won't think that clearly.

There's another interesting cognitive bias that affects us when we're estimating the likelihood of events, especially if you're thinking about how vulnerable a specific platform is, for example. So, you really love your iPhone. The design is really sexy, it looks so beautiful, you love Apple, right? So they're probably not vulnerable, right? Because they're so nice. I really love Apple, so no, they can't be vulnerable. That's called the halo effect. It happens because System 2, the piece you need to use to estimate how vulnerable something is, or what the likelihood is of a specific vulnerability being exploited, is handling the problem by looking at clues from System 1. And System 1 is telling System 2: I love Apple, it's so cool. And System 2 is lazy.

So it just takes that opinion about Apple and substitutes it for the problem you're trying to solve: the question you're asking, is this vulnerable, basically gets replaced with, do I like it? And that's when shit happens. If you remember one of the earlier Bruce Schneier books, before he started to go a bit too far into stuff not related to security (I'm not sure if it's Secrets and Lies), he ends up listing five rules about how people perceive risk, and I like to mention them here. People exaggerate spectacular risks and downplay common risks; that's something we see all the time. The unknown is perceived to be riskier than the familiar. Personified risks are perceived to be greater than anonymous risks: when you look at an APT and say it's a specific unit of the Chinese army, oh, that sounds far bigger than something an unidentified actor is constantly doing. Whenever you can identify the source of a specific risk, it can be personified, and it will be perceived as greater than something coming from an unknown source. People overestimate involuntary risks, risks in situations they can't control, and they also overestimate risks they can't control but think they should be able to. Those are things we should keep in mind when working on risk assessments.

After risk assessments, the next thing that is really impacted by cognitive biases is phishing and social engineering. In fact, social engineering only works because people are subject to cognitive biases, right? After all, when someone is the target of a scam, what is the rational behavior? It's not to fall for it, right? But we all know that a lot of people do fall for them. Why is that? Because the attacker is normally exploiting cognitive biases in the victims.

Why do I have a copy machine here? There is a very famous experiment used to demonstrate something called the story bias. It shows that whenever you ask someone for something they're probably not willing to do, or not supposed to do, the chance of them accepting your request is much higher if you provide a reason. Pay attention: I said if you provide a reason; I didn't say a valid reason. And that's where this study gets really funny. What the researchers did was go to a public copy machine in a library, one that everyone could use, and one of the researchers would try to jump the queue. They would go to the person who was next in line and say something in order to use the machine before them, and they wrote a series of scripts to see in which situations people would be more willing to let the researcher jump the queue. There were versions like simply asking, can I make copies before you, without any explanation. Versions like, sorry, I'm in a hurry, can I make a few copies? Versions like, sorry, I only have five pages, would you mind if I go in front of you? And versions like, can I make these copies because I need to make these copies? The interesting thing is that in all cases where an explanation was provided, including that idiotic one, the rate of acceptance was higher than when asking without any justification. So you can see why things like Nigerian scams and all those other things work: you are providing an explanation for why you're asking that absurd thing, and people accept just because there is an explanation.

Another place where those biases shape how people behave is user interfaces and security prompts. This is the old Internet Explorer interface for certificate errors during the establishment of an SSL connection. People look at it and another bias comes into play, called complexity aversion. They look at this and think, oh crap, so much information, this is too complex, where do I click to get rid of it? Isn't that the general feeling? So there are other biases that normally affect how people behave when they're dealing with user interfaces and security prompts. There is an entire class of biases called framing biases, about how information is presented to people, that also comes into play in these situations.
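
The abstract mentions applying 'nudging' to security prompts and notifications. As a purely hypothetical illustration, not something shown in the talk, here is a minimal sketch of a warning prompt that keeps the message short (to counter complexity aversion) and frames the safe action as the default; all names in it are invented for the example.

```python
# Hypothetical sketch (not from the talk): a certificate-warning prompt that
# nudges users toward the safe choice by keeping the text short and making
# "go back" the default, while the risky path requires a deliberate phrase.
def certificate_warning(hostname: str) -> bool:
    """Return True only if the user explicitly accepts the risk."""
    print(f"The identity of '{hostname}' cannot be verified.")
    print("Someone may be intercepting your connection.")
    print()
    print("  [Enter]              Go back (recommended)")
    print("  type 'accept risk'   Continue anyway")
    answer = input("> ").strip().lower()
    return answer == "accept risk"   # anything else falls back to the safe default

if __name__ == "__main__":
    if certificate_warning("example.test"):
        print("Proceeding despite the warning...")
    else:
        print("Connection aborted.")
```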

Okay, so now we've seen how those biases affect people. Is there anything that can be done? Is there anything we can do in security, using all the research from behavioral economics, to actually improve the things we've been doing? Well, there's actually a lot. Take risk assessments; we talked before about the biases that affect them. One thing you can do in risk assessments is break down the risk equation into more factors, right? Has anyone heard about the FAIR methodology?
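
To make the idea of breaking the risk equation into more factors concrete, here is a rough sketch loosely following FAIR's decomposition of risk into loss event frequency (threat event frequency times vulnerability) and loss magnitude. The ranges and the simple Monte Carlo loop below are illustrative assumptions for the example, not part of the methodology or the talk.

```python
import random

# Rough sketch, loosely following FAIR's factor breakdown; every range below
# is a made-up input for illustration, not data from the talk or the standard.
def simulate_annualized_loss(trials: int = 10_000) -> float:
    total = 0.0
    for _ in range(trials):
        tef = random.uniform(5, 50)                        # threat events per year (assumed)
        vulnerability = random.uniform(0.01, 0.10)         # P(threat event becomes a loss event) (assumed)
        loss_magnitude = random.uniform(10_000, 250_000)   # dollars per loss event (assumed)
        loss_event_frequency = tef * vulnerability
        total += loss_event_frequency * loss_magnitude
    return total / trials

if __name__ == "__main__":
    print(f"Estimated annualized loss exposure: ${simulate_annualized_loss():,.0f}")
```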