
BSidesSF 2025 - Mind vs Machine: The Role of Human Psychology and AI in Security Culture (Anubha Nagawat, Ashutosh Gupta)

BSidesSF · 2025 · 25:56 · Published 2025-06 · Watch on YouTube ↗
About this talk
Mind vs Machine: The Role of Human Psychology and AI in Security Culture
Anubha Nagawat, Ashutosh Gupta

Security policies must consider human psychological traits for effectiveness. We'll contrast this with security needs for Non-Human Identities and argue that AI has its own "psychological traits" requiring tailored approaches to secure systems against AI-specific threats.

https://bsidessf2025.sched.com/event/63a7d8bb8c06bb3234e47254d61b1119
Transcript [en]

Introducing Anubha and Ashutosh, who are giving a talk entitled Mind vs Machine: The Role of Human Psychology and AI in Security Culture. Take it away, folks. Hi everyone, thank you for being here. It's 5:00 p.m. on a Saturday, so you clearly like security, and that makes you our kind of people. I'm Anubha. I work in product security and I spend a lot of time thinking about how human behavior affects security risk. And I'm Ashutosh. I focus on applying machine learning and AI technologies to developing security products. Very excited to be here. We'll try to make this presentation worth your weekend brain power. So let's kick off with a visual audit of how security controls work in practice.

Here on the left we have a very dedicated security guard. They are doing some deep threat detection, just not the threats they were hired for. And on the right we have an employee taking cloud security very literally. He probably didn't notice any S3 buckets, but he's probably seen some good images in the clouds. Now, these are funny but painfully familiar, because security controls don't just break because of vulnerabilities. They break because of behavior: distraction, misinterpretation, and just doing the right thing with the wrong understanding. So because of that, when we are designing security controls we need to keep psychology, and not just permissions, in mind, and that's what we're going to focus this talk on.

So first, let's look at which entities' behavior affects security. We have various kinds of threat actors: they could be humans, organizations, or non-human entities like scripts, machine learning tools, and so on. These different entities have different kinds of behaviors. In this talk, we'll first spend some time looking at human behavior and how it affects security, and then follow up with AI behavior and how that affects security. Now, when we are talking about threat actors, one thing we need to keep in mind is that every threat actor has its own motivations, capabilities, and limitations. For example, a nation state has a vast amount of resources which a script kiddie cannot match. So when we are designing security controls, alongside behavior, these are the two other things we keep in mind.

So let's begin with the very first human threat actor. Humans are the system: we build it, we use it, we fix it, and sometimes we paste credentials into Slack. Humans are tricky because even when they mean well, they operate with emotions, habits, blind spots, and biases, things which no firewall could catch. So when humans are involved in a system and we want to secure the surface, we don't just secure it technically against vulnerabilities; we have to secure it against the psychology, which means rethinking our design policies, controls, and experiences to account for those traits. So let's begin by unpacking some human behaviors which commonly trip up security teams.

"Just give me access," said no one ever, or maybe everyone. Requesting elevated access, disabling a security header, or deploying a quick fix without review: these are not behaviors driven by malice. They are usually driven by things like urgency, a sense of ownership, and just wanting to avoid red tape. A need for control and autonomy is a deeply entrenched human need, and it connects to our emotions, biases, and fears. So here, security has to stop being a wall and instead act as a path. We have to design controls which allow autonomy while keeping things secure. Things like scoped RBAC roles, environment isolation, just-in-time access, and education which explains the why alongside the what are really helpful strategies here.
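To make the just-in-time idea concrete, here is a minimal Python sketch (hypothetical names and resources, not from the talk): access is granted per resource for a short, explicit window and expires on its own, instead of standing admin rights.

```python
# A minimal sketch of just-in-time, scoped access. Names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    user: str
    resource: str
    expires_at: datetime

    def allows(self, resource: str) -> bool:
        # Valid only for the exact resource and only until expiry.
        return resource == self.resource and datetime.now(timezone.utc) < self.expires_at

def grant_jit_access(user: str, resource: str, minutes: int = 30) -> AccessGrant:
    # Grant a short, explicit window rather than permanent elevated rights.
    return AccessGrant(user, resource, datetime.now(timezone.utc) + timedelta(minutes=minutes))

grant = grant_jit_access("alice", "prod-db", minutes=15)
print(grant.allows("prod-db"))        # True during the window
print(grant.allows("prod-payments"))  # False: scoped to one resource
```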

The next one: many of us have iPhones, and if you have location enabled for your camera, then every picture you take automatically carries geolocation data. If you share any of these pictures via AirDrop, WhatsApp, or any other app, your geolocation information goes along with it. Many iPhone users may not be aware of this, and even if they are aware, they might not take the steps needed to change this setting. So is this malicious? No, it's not malicious, but it's not secure by default. Human beings always go for the fastest, easiest, and most obvious way to do things.

If security features are slow, clunky, hidden, or obscure, then they are not going to be used. So the strategy here is straightforward: make security the default. Build security features in, and build security in earlier, into your entire pipeline, into development workflows, into frameworks, into defaults. Remove the choice and make it automatically secure.
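As a concrete sketch of secure-by-default, here is a small Python example (assuming the Pillow library and hypothetical file names) that strips metadata, including GPS coordinates, by re-saving only the pixel data before a photo is shared.

```python
# Sketch: re-save only the pixels, so EXIF metadata (including GPS tags) is dropped.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)   # new image with no metadata
        clean.putdata(list(img.getdata()))      # copy pixel data only
        clean.save(dst_path)

strip_metadata("vacation.jpg", "vacation_shareable.jpg")  # hypothetical filenames
```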

Taking a break from the behaviors for a moment: can anyone guess why these protrusions on escalators are present? Yes, correct. Because humans are creative and adaptive, and they use features in ways which designers may not be able to anticipate. And that's where our security responsibility comes in. We are not only trying to secure the happy, expected path; we are here to anticipate misinterpretation and novelty. Another example is someone using a casual sharing feature, like public notes or status messages, as a safe space and sharing something which should actually be confidential, or posting internal links. Strategies to defend here include product-specific and feature-specific guardrails which prevent risky behavior, nudges which inform the user of the risk right when they attempt the action, and context-specific education.
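A toy sketch of such a nudge (hypothetical domains and function names), warning rather than blocking when something that looks like an internal link is about to go to a public channel:

```python
# Illustrative nudge: surface the risk at the moment the user acts, without blocking.
INTERNAL_DOMAINS = ("wiki.internal.example.com", "docs.corp.example.com")  # hypothetical

def nudge_if_risky(message: str, channel_is_public: bool) -> str | None:
    if channel_is_public and any(domain in message for domain in INTERNAL_DOMAINS):
        return ("Heads up: this looks like an internal link. "
                "Are you sure you want to share it in a public channel?")
    return None

print(nudge_if_risky("design notes: https://wiki.internal.example.com/x", channel_is_public=True))
```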

Another one: memory and cognitive overload. Consider a scenario in which a customer support agent is working on multiple cases simultaneously. It's possible that they might share information from one case with someone who only has access to another case. Why? Because we all have limited short-term memory, attention span, and processing capacity. When working under pressure, with unclear processes, or with fatigue, mistakes can happen, and the probability of them rises sharply. These are not anomalies and we should not treat them as anomalies: they are a direct consequence of limited human cognitive capacity. There are many strategies which help work around this, the first one being reducing step complexity, with checklists, automated workflows, and very structured guidance.

Just-in-time security support helps again here, with in-app prompts and contextual nudges, as do passive safeguards like single sign-on, which remove a little bit of mental friction. When failures do occur, and they inevitably will, systems must detect them early, so alerts and alarms are our friends here.

And a very final human behavior, which I think is the most underrated security threat, is basic human politeness. I'm sure many of you have held the elevator door open for someone. Now, if it's not a secure building, that's okay, but the same behavior will come up in secure buildings as well. Why? Because human politeness and niceties are ingrained into us: we are social first and secure second. So if we design controls which assume that people will be assertive when they are faced with an awkward human situation, then they are not controls; they are actually a coin toss, especially when dealing with someone who has more power or seniority than you.

So instead, we should have preset workflows that remove the social burden, or just-in-time nudges which state the policy to be followed right at the moment the decision is being made. And more importantly, we need to design our systems in ways that completely remove, or at least reduce, the need for these awkward situations to occur. So here we've looked at some human behaviors, and now let's look at some non-human entities and how their behaviors impact the security of systems.

This is ChatGPT's response to my prompt: give me a picture of non-human identities. Needless to say, my prompting skills need work. But we live in a world of automation. Our refrigerator knows when and where to order our supplies, and our sprinkler system knows when it's supposed to rain. We have seen an explosive growth of non-human identities, or users, in our environments. These non-human users are usually created by a human to automate a mundane task, and usually run as a Lambda function in the cloud or a cron job on a server. If we trust recent studies, non-human identities now outnumber humans by 45 to 1. These non-human identities use API secrets, Kubernetes workloads, and hashes which are usually not rotated fast enough.

Usually, companies have very well-defined onboarding and offboarding processes for their human users, a.k.a. employees. A similar predefined lifecycle management process needs to exist for non-human identities as well: keys should be rotated frequently enough, inactive keys should be deleted, and the principle of least privilege is equally applicable here.
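A hedged sketch of that lifecycle hygiene (assuming AWS credentials and the boto3 library, and ignoring pagination for brevity): flag active access keys older than 90 days so they can be rotated or removed, the same way we would chase stale employee accounts.

```python
# Sketch: report non-human access keys that are active but past a rotation deadline.
from datetime import datetime, timedelta, timezone
import boto3  # assumes AWS credentials are already configured

MAX_AGE = timedelta(days=90)
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        age = now - key["CreateDate"]
        if key["Status"] == "Active" and age > MAX_AGE:
            print(f"Rotate or remove {key['AccessKeyId']} for {user['UserName']} "
                  f"(age: {age.days} days)")
```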

So far, humans were creating these non-human users and assigning work to them. Now let's look at a far more interesting kind of machine user. These are intelligent machines, they have access to business-critical data and APIs, they simulate humans, and these simulated humans need to be governed and educated as well. Tasks which are traditionally harder for humans are proving easier for these intelligent machines. Let's take an example: password guessing. A well-known password-cracking system like Hashcat will take passwords from a data breach and apply a predefined set of rules, for example adding digits or special characters to known passwords, to guess new passwords. Whereas PassGAN, a GenAI-based system which uses a generative adversarial network technique, looks at these well-known passwords from data breaches and automatically figures out the distribution and patterns of plausible passwords. In a study in 2020, researchers found that this GenAI-based system could guess roughly twice as many correct passwords, and faster as well. These systems are powered by GenAI models.
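For contrast, here is a toy Python sketch (illustrative only, with a hypothetical wordlist) of what the rule-based approach looks like: fixed mangling rules applied to leaked passwords, whereas PassGAN learns such patterns directly from breach data.

```python
# Toy illustration of Hashcat-style rule mangling: fixed transformations on leaked words.
leaked = ["sunshine", "dragon", "letmein"]  # stand-in for a breach wordlist

def apply_rules(word: str) -> list[str]:
    candidates = [word, word.capitalize(), word + "!", word[::-1]]
    candidates += [word + str(digit) for digit in range(10)]
    candidates.append(word.replace("a", "@").replace("o", "0").replace("s", "$"))
    return candidates

guesses = {guess for word in leaked for guess in apply_rules(word)}
print(f"{len(guesses)} candidates generated from {len(leaked)} leaked passwords")
```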

We have progressed from purely text-based models to multimodal systems which can process text, images, audio, and video, and can converse in a human-like voice with sub-second latency, and all this progress has happened in the span of just a few years. On top of that, some of these models have been open-sourced, creating an opportunity for adversaries to take them and use them as they want. In a benchmarking study done by OpenAI, GPT-4 performed in the top 10 percent of test takers. This is not a great trend for cyber defenders or the peacekeepers of the modern world. To keep the balance, security systems need to evolve at the pace of the technology, coupled with the basic principles like least privilege and frequent audits.

But unpredictability is not the only concern. Sometimes AI confidently generates output which has no bearing on facts: hallucinations. They are a consequence of the way AI models are trained, of their limited ability to understand concepts generically, and of the biases and errors present in the data they are trained on. In February this year, a federal court cited three lawyers for using fake case numbers in their argument. These lawyers admitted they had used a state-of-the-art AI system to generate their motion. The court found that this motion had case numbers, dates, and very convincing, plausible arguments; only, the data was entirely made up. Yes, generative AI can be very convincing, but it can be far removed from the facts.

In yet another incident, an airline was held liable for the information which its chatbot gave to a traveler. The traveler relied on the refund policy the bot described and expected a refund, which never came. The airline argued that the passenger should have contacted the dedicated human helpline to get confirmation, but the court still found the airline liable for the damages. How can businesses avoid such risks? First, we have to build safeguards into these AI systems so that they cannot use data in unintended or unpredicted contexts. Then we need to do adversarial training, which means throwing deliberately confusing inputs at the AI to expose weaknesses before your product or solution hits the real world: you break it, or somebody else will.

And in the end, having some sort of parallel verification method, like another system which uses the same inputs and provides an independent output to measure against, can be a very valid way to detect such errors. Such concepts are very well known in the fields of finance and aviation.
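A minimal Python sketch of that parallel-verification idea (the ask_primary and ask_verifier functions are hypothetical stand-ins, not a real API): two independent systems answer the same question, and disagreement is escalated to a human instead of being answered confidently.

```python
def ask_primary(question: str) -> str:
    # Stand-in for the production model or chatbot.
    return "Refunds are available within 24 hours of booking."

def ask_verifier(question: str) -> str:
    # Stand-in for an independent system: a different model, or a plain policy lookup.
    return "Refunds require manual approval by the support team."

def answer_with_verification(question: str) -> str:
    primary, check = ask_primary(question), ask_verifier(question)
    if primary.strip().lower() != check.strip().lower():
        # Independent systems disagree: escalate rather than answer confidently.
        return "I need to double-check that; routing you to a human agent."
    return primary

print(answer_with_verification("What is the refund policy?"))
```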

A yet bigger risk is AI's ability to find loopholes and bypass security. Earlier this year, researchers pitted the latest AI technology against a well-known chess engine called Stockfish. If you have played on chess.com, that's the engine which powers it. These powerful GenAI engines were tasked with winning against this open-source chess engine. However, it was noticed that the GenAI engines resorted to cheating even without being prompted by their humans: they cheated by manipulating the files in the back end, meaning downgrading the Stockfish version, as well as manipulating their pieces. This behavioral trait exists in humans as well, but persuading smart and dedicated human beings to do immoral acts can be a difficult task: we have to overcome their beliefs, their conditioning, and their fear of consequences. AI, on the other hand, innately has no understanding of good versus bad, and can be trained to do immoral acts without any consequences. And if AI can find loopholes in existing systems, it can also be used to deliberately influence human beings.

That brings us to a very grave situation: social engineering. In yet another shocking misuse of AI technology, a UK-based engineering firm was duped out of $25 million using a deepfake video conference. In this video conference, a finance worker had received an email asking him to join an urgent meeting. And though he suspected that this could be a phishing attack, upon joining the meeting he saw co-workers he knew closely, along with their CFO. In that meeting the CFO asked him to transfer a sum of money, and seeing the request come directly from the CFO, he obliged, only to later find out that in that meeting he had been the only real human.

In order to safeguard our interests, security policies must redefine trust mechanisms. We should require facts to be verified through multiple independent channels, and in most cases social engineering succeeds when somebody rushes or breaks protocol. So when a situation seems emotionally charged and things seem rushed, take a pause, double-check the facts, perhaps triple-check them. Such incidents and AI capabilities can paint a grim picture, but there is hope. These behaviors aren't just interesting in themselves; it's essential that we use them to design our security products, and use these technologies to bolster them. Humans and AI behave differently.

They fail differently and they usually use features differently. Human instincts can be rushed, distracted, or emotional; AI systems can be very persuasive yet aloof from the facts. So, as a final thought: good security isn't accidental, and in fact the major problems that happen are because they are accidental. Good security needs to be deliberate: a deliberate choice based on the threat actor and their behaviors, and a deliberate trade-off between scalability, usability, business needs, and productivity. We can't say "always build the most secure system," because the most secure system is something in a box which is not connected to anything. But we have to make these trade-offs and risk decisions conscious, and that's the key takeaway we would like you to leave with.

Thank you. [Applause] We have a couple of questions from Slido, and then, time permitting, I'll take live ones and walk around with the microphone. First question: given that GenAI coding has been let out of the bottle, do you expect applications to become less secure as more and more developers use it? Absolutely, yeah. It's a tool which should be used very carefully. It does not take the human out of the picture: even if the code is generated by GenAI, one should definitely look at it and make sure that it's safe. One point I would like to make here is that we are used to policy being very static, because human behavior and human changes take time, but with GenAI things change very rapidly.

So the entire process of how policy is created, and the speed of that process itself, needs to be rethought. Cool. And then the next one: do you distinguish between LLMs and AI in terms of threat actors in this presentation? That's an interesting question. LLMs have been the talk of the town because of the rapid advancement they have seen, but of course we are seeing people talking about and envisioning AGI, or other broader kinds of AI. But yes, given that LLMs are used so frequently and pervasively today, they definitely pose a higher risk to security.

Yeah. All right. Any live questions from the audience?

Going once, going twice. All right, let's give one more round of applause. Thank you, Ashutosh and Anubha. That was great. Thank you.