
Hey everyone, it's so great to see all of you here. Welcome to my talk, Usable Security: Bridging Research and Industry Practices. And thanks for coming to one of the few talks that doesn't have AI in the title. But don't worry, we will get to AI as well at the end of this presentation; you won't get away without it. So, one thing you should know about me is that I hate passwords. I hate dealing with passwords. I hate creating passwords, changing passwords. And I really, really do not want to remember passwords. Today I want to talk about how we make security more usable and, through that, better. And I will come back to
passwords quite a few times, and then to AI. So, a bit about myself before we start. I did my PhD in computer science at ETH Zurich, and my PhD was on usable security. I consider it my mission in my career to make security usable. Throughout my nine years at Google and then four years at Stone Lake as a senior engineering manager, I've always worked on security systems, whether end-user facing, for enterprise customers, or internal infrastructure used by our own engineers. And I truly believe that making them secure by default, understanding how people use them, and adapting systems to people's needs is very important, and is the only
way to be successful in security. Today I'm going to talk about some of the research I've done, and then about how I applied the learnings from that research in my work in industry. One thing I will highlight is a study for which I received an Impact Award from SOUPS in 2024, for research on usable security advice. So let's get started. I think all of you here know what security is, but I want to level the field on what usable security is. What does that mean, and what kinds of things can we do to make security
more usable? I'm then going to go into a case study, the one for which I received the Impact Award, to show how we do usable security research. I will in particular emphasize what methodologies I used in that study, whether user interviews or surveys, the different ways to collect information, and how we think about research overall. I will then dive into some of the work I've done in my industry life. Coming from academia, with the rigor of usable security research, I want to talk about how much of that is still
relevant and applies in day-to-day jobs, when you build systems and maybe have different constraints, maybe you don't have the same time or bandwidth, or maybe you're dealing with fewer fundamental problems and more execution problems. I want to leave you with some practical tips and takeaways: these are surveys you can run, this is how you should think about it, what kinds of tests to do and when to do them, and general good principles of usable security that you can apply to your own system designs. And I think it would be
a big miss to not recognize that the world around us is changing, that AI is currently going through a huge revolution, that we are looking at changes in how we write systems, how we code, what the software engineering job looks like. I think that has a profound impact on security as well, and on usable security. I don't claim to have the answers for how to make security usable for AI; I think we barely understand what AI looks like now, let alone what it is going to look like. But I do want to share some reflections, and maybe some parallels, on how we can apply all this to the AI age. So let's get started. What is usable
security? Well, it is an interdisciplinary field combining human-computer interaction and security engineering. The idea is that we should design systems that are both secure and user-friendly, because we can think as much as we want about cryptography and make things bulletproof, but if these systems are used by users, and users either don't understand or don't agree with the security requirements, or they bypass them altogether because they are just too uncomfortable, then we are not going to end up with secure systems. I do think that the proverbial example of unusable security for many decades now has been passwords. You can talk to any non-technical user about passwords and
they understand the struggle. And I think it takes more than usability research to move the needle in such areas. When I say usable security, many people think of it as a security-versus-usability paradox. It's often presented as if increasing usability means lowering security, as if they are at odds. I do think they are at odds in some situations, but not in all. Good designs make the system secure by default and make it easy to use securely. For me, my passion for usable security started during my PhD, when I was reading
tons of anecdotes in research papers on how people actually use systems. My favorite one was about a user who was asked to change his password, but he was already using his favorite password, and the system wouldn't allow him to choose a new password that was the same as one of his last 10 passwords. So what does the user do? He changes the password n times until it's no longer one of the last 10, and he's allowed to use his favorite password again. I always thought anecdotes like those were really funny and enlightening, and they motivated me to do more usable
security research. In the era of AI, as with many other systems, we also see less funny, less fortunate incidents, where maybe a user doesn't understand exactly what he's giving permission to, installs some AI agent that has access to all their photos in order to organize them, and the agent ends up deleting all the pictures irreversibly. I've heard recent stories like this; one that I remember had a happy outcome because there was still time to reverse everything. But these sorts of stories, they can be funny, but they can also make us feel that we have
failed users when systems fail. During my PhD I spent a lot of time reading research papers, and a couple of them, well, more than a couple, have been really foundational to the field of usable security and to my own thinking about security and design overall. I want to call out a few of them here because they really are seminal, and even though they are by now 20 or 30 years old, I think they were published in 1999, the principles and lessons still apply, and I do still recommend that you read them today. The first one, "Why Johnny Can't Encrypt," was published by
Alma Whitten, and it was really revolutionary, because at the time people thought users just can't use PGP, it's their fault, they're not doing it right. She sat down with users and gave them specific tasks, here, encrypt an email and send it, and watched how they did it, where they stumbled, how they thought about a private key and a public key and other concepts that are confusing and don't match their mental models. The next one, "Users Are Not the Enemy," also caused a big shift in my mentality: you have to work with users, and if they don't use your system
right, or they fail at security, it's not their fault; try to see how you can change your own system and your own design. And one of the people who has really shaped my thinking around usable security is Professor Lorrie Cranor from Carnegie Mellon. She went and looked at the leaked password databases, looked at the most common passwords out there, and created a security blanket where the biggest words are the most common passwords. I thought that was super artistic, but also such a powerful way to convey to users that, hey, your password is not safe, because many times just explaining these things
to users, the messaging to non-technical users, is hard, and I thought that was a very nice and creative way to do it. Another metaphor that stuck with me, and that I'm going to come back to throughout this talk, is her description of security warnings as warning signs on a pavement. If everything were secure by default, if everything worked right, then pedestrians could just walk by; users could just use the system and not worry about it. But when we as engineers cannot fix those security issues or cannot make decisions, then we add the warning and we ask the user
to make a decision, when really, sometimes, we have failed. Throughout my work in industry, and I'll come back to this, I've come to appreciate much more how much work goes into making the pavement secure before you can allow the user to just walk through, before you can afford to not show those warnings. So let's see some general usable security principles. First, design things that are secure by default, so that you don't need to show those security warnings and you can let users go about their day and focus on their tasks and on where they are trying to get. Second, align systems with users'
mental models, and I'm going to come back to this a lot because I think it matters a lot. The way people think about security influences so much how they think about your system and how they use it. It's even more relevant in the age of AI and AI agents, which are such new concepts; it's hard even for experts to wrap their heads around agents and their capabilities. Third, provide transparency and explain security warnings clearly, and there's a really nice study I'm going to quote on that. And fourth, involve real users in design and testing from the beginning. This last one is something that really applies
in industry as well. All of them do, but the last one I think is especially important: it can cut a lot of engineering effort that would otherwise go into building things that were not necessary and that you would need to iterate on afterwards. Throughout this talk I'm also going to cover the different types of research methods. During my PhD we would do exploratory studies. For instance, one piece of my research was on cloud usage: when users put their data, their photos, in Google Drive or Dropbox, how do they feel
about the security of those systems, how do they feel about the permissions, about who has the right to see their data? We didn't always know what we were going to find, or exactly what to ask going in, but these sorts of studies can be really powerful for understanding things you don't already know and don't know to ask about, and they also feed a lot of the work on mental models. A good pattern, one I have applied in my research, is that once you discover those things and see what sounds interesting, you can use surveys to
quantify, to see, hey, was this just one user telling me this, or is this a trend? Does this trend apply more in, say, Switzerland versus India? That's where surveys can really help you run studies at scale, compare results, and bring in quantitative data. One other type of user study, which I think we do a lot more in industry but is done quite a lot in academia as well, is interface or usability testing, where you give people a concrete task. You're building a new system and you're like, "Hey, let's see, can you authenticate with, I don't know, biometrics? Can you go through this flow and set up the right access
controls?" That's a very powerful method as well. You can use all of these at different times, and they complement each other. And finally, one that is harder to do, but that I believe becomes even more important with AI, is longitudinal diary studies. The problem with interface testing is that, as a company, you invite users into the lab, you set up a very scripted session, and they are able to perform the tasks you gave them, right? But usage changes over time, and real-world environments change too. Especially with AI agents, where maybe you get one response today but tomorrow the agent learns something
different about the user and is going to adapt, I believe longitudinal studies become even more important. So, there I was: I finished my PhD, I was very proud of myself, and I went on a trip and met someone, and I was telling them about my work, and they asked, well, Julia, but what should I do to stay safe online? And I was stunned, because when I thought about it, there was no easy answer I knew to give them. There were just so many things to do, and although I believed many of them were important, use good passwords, install updates, don't reuse passwords, use two-factor
authentication, I didn't know exactly in which order to put them, or which one to mention first if I could say only one thing. Our world is complex, and we rely on users to do many security things, and it can be quite overwhelming, especially when you don't have time for all of them. So I embarked on a journey to turn this into a proper study and see what security experts recommend, what they do, and what end users do. This is a paper we published in 2019, and it has been replicated since
then. I'm going to go through the methodology and the findings. I believe many of these things still apply today, though maybe one or two will change with AI and the introduction of passkeys.
So, typically when you do research in academia, one of the most important things to do before you start is to come up with research questions, what you want to study. In our case it was: how do experts' and non-experts' security behaviors differ? To get to that, we went to some security conferences, BSides, but I think it was Black Hat and another one, and we stopped attendees who were security experts and said, hey, would you be willing to participate in a study? I'm just going to ask you a few questions. We talked to 40 security experts, and this was the kind
of exploratory side of our study, where we just asked: what are the top three things you do to stay safe online? Based on those answers, we created two surveys. One went to security experts, and we got 231 experts from around the world to take it; it had a lot more questions and let us gather quantitative data and ask about behaviors. The other was almost identical but went to non-security users, non-experts, on Mechanical Turk, and that allowed us to see trends and compare answers. We used Amazon Mechanical Turk, which I think is still very popular as a research platform. You can sign up for
it yourself. There are a bunch of people who come there to take surveys and get paid a small dollar amount for the time they spend, and people use it for all sorts of questions. If you're curious about anything, if you need something for work, it's a pretty cool tool, and you can turn around a survey in one or two days; you get answers very quickly. In the survey we still had an open-ended question: what are the three most important things you do to protect yourself online? But from the responses we had gotten at the security
conferences during the interviews, we could also identify the top 20 pieces of security advice that were mentioned, and then ask about each one of them: Do you follow it? How effective do you think it is? For non-experts: how likely would you be to follow it if you heard that it was effective? So we wanted to see current behavior, but we also wanted to get at the perception of how effective the advice is, how willing people would be to follow it, and why not. And one of the things we truly learned throughout this research is that
you can be the biggest security expert in the world, but if you tell people to do something and it doesn't make sense to them, they're like, how is this going to make me more secure? Why is this secure? They're not going to do it. There has to be explainability, a way for people to understand it. And again, coming back to the AI era, I think explainability is even more important when we have so many AI agents and LLM- and risk-based systems making security decisions and triaging security issues; if you end up not being able to explain that to end users, or even to your stakeholders, it can
pose really big challenges. So who were the people who participated in our survey? These were the security experts. Only 4% of them were female; that was a bit sad. They were distributed across the world; you can see in green all the countries that participated. I was working at Google at the time, and we announced the study in a Google blog post, and people just volunteered to spend their time and take this long survey for us. That's the breakdown of jobs you see there, and the breakdown of degrees. It was self-reported; anybody could take it. We did have some control questions, which is
to say, we wanted to check that people actually read the text, questions with only one correct answer if you had read the text, and that helped us triage some answers out. But basically, expert status was based on whether people said they had five or more years of experience in security. The non-experts were all from the US, and there we had a better distribution of male and female. This part with the demographics is incredibly important in the research world. I have seen many papers get rejected because the demographics weren't right, weren't explained properly, or the sample wasn't representative of the conclusions they would
draw. It's something that matters a lot. I don't think you need the same rigor when you do research in industry, but you should keep in mind that if you run a study in the US, the results are not necessarily going to apply to users in, say, India; behaviors differ, and there are differences across demographics and regions. So let's see some of the results. For security behavior, we asked: what are the three most important things you do to stay safe online? And this is what we got. What you see there in purple is
the percentage of security experts who said they do each thing, and the chart is sorted by that. Most experts said they keep their system up to date, use unique passwords, and use two-factor authentication. And if you look at the big black lines, like the one in the middle: use antivirus, visit only known websites, use strong passwords. So there are differences in what experts and non-experts did to stay safe online, and we're going to see differences in the perception of the security of different things as well. Now let's look at this data a little differently and take just the delta between the two. To the left we have experts:
update your system, use two-factor authentication, use a password manager. By the way, passkeys had not been launched back then; coming back to my hatred of passwords, I love passkeys, and I think if we were to rerun the study today they would make the list. On the non-expert side, things that were a bit surprising to me: use antivirus was incredibly popular, which made me think security advertising works; change passwords frequently, something I absolutely hate to do and would never do of my own will for the sake of security; and visit only known websites, which made me
wonder, well, how do you survive on the internet? That didn't seem very practical to me. We also asked more questions to try to understand why users do different things, or why they think something is secure or not. What you see here is an example of a quantitative question, where they had to select: I remember passwords for all of my accounts, most of my accounts, some of my accounts. What you see in green at the bottom is that far more non-experts than experts resorted to remembering passwords, whereas on the right side you see that more experts were using a password manager.
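A comparison like this, the share of each group reporting a practice and the delta between the two groups, is simple to tally with a short script. Here is a minimal sketch; the percentages below are made up for illustration and are not the study's actual numbers.

```python
# Made-up endorsement percentages for illustration -- not the paper's data.
expert = {"install updates": 35, "use two-factor auth": 30,
          "use a password manager": 25, "use antivirus": 7,
          "change passwords frequently": 5}
non_expert = {"install updates": 10, "use two-factor auth": 8,
              "use a password manager": 3, "use antivirus": 42,
              "change passwords frequently": 21}

# Positive delta = mentioned more by experts; negative = more by non-experts.
delta = {advice: expert[advice] - non_expert[advice] for advice in expert}
for advice, d in sorted(delta.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{advice:28s} {d:+d}")
```

Sorting by the delta is exactly what puts "install updates" at one end and "use antivirus" at the other, the shape of the chart I just described.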
We asked additional questions to understand why, and that's an example of combining quantitative data with qualitative data to interpret and explain the results. We had experts say things like: password managers change the whole calculus, because they make it possible to have both strong and unique passwords. And this is one thing that both experts and non-experts thought you should do, and I think it's still true today: you should have strong and unique passwords. But how are you humanly capable of doing that? That is a different question, right? And coming back to the discussion about mental models, this is why I found this so fascinating. When asked
why they don't use password managers and instead remember passwords, non-experts said things like: I try to remember my passwords, because no one can hack my mind. And that is great, but we know they use the same password everywhere and it has already leaked in some database. And honestly, if we ask users to change a password, they're just going to add a one at the end, or an exclamation point. Also, when asked about password managers, they didn't quite trust them: no other application seems to be safe, so how can I believe password managers are? And this is, in my opinion, a strong example of where you can build a
system that's really good for security: password managers. Yes, there have been breaches of password managers too, and some people worry about that. But when you look holistically, users are much more likely to get their passwords compromised because they use weak passwords everywhere than because somebody stole them from a password manager. We can build secure systems, but if we don't explain them properly, if they don't match the way people think about what is secure, like in this particular example, people are not going to use them, no matter how many experts tell them to. And to me, that's a strong reason for doing
studies on mental models, for trying to understand how people think about security, and for trying to align your messaging and the systems you build to that. As for non-expert behavior: use antivirus, change passwords frequently, visit only known websites. Coming back to change passwords frequently: one of my heroes, as I mentioned, looked through the usable security research and was able to really show that forcing users to change passwords doesn't result in stronger security outcomes; that, based on the research that was out there, if an attacker knows a user's previous password, they are very likely to guess
the new password. The coping strategy of users, when they have to have a lot of strong and unique passwords, is to keep variations of the same password: you add a one, you add a two, you change it a little bit again. And those transformations are very predictable for attackers; they can be guessed. So I've been super thrilled to see the recent shift in compliance standards and in the industry to stop requiring forced password changes. To wrap up the conclusions from this study: non-technical users were using antivirus, saying you should use strong passwords, and changing passwords frequently, whereas the experts'
attitudes were different; some things overlapped, but some didn't. So when thinking about security advice and what we promote, the conclusion was: promote keep your system up to date, use two-factor authentication, and use password managers, because those were the things non-experts were doing less compared to experts. One thing I would add today is use passkeys. I love passkeys; they weren't launched back then, but I absolutely recommend them, and they help with my case against passwords. So now I've talked about the different methodologies and what research looked like in my usable security life in academia. I've since spent 13-plus years working in industry, and yes, I'm still working on security systems, but with maybe less focus on user research, since my main job is engineering and managing an engineering team. One thing I can truly say is that the same usable security principles still apply: understanding why you need to build things secure by default, testing things early, basically all of those things are still important. And in terms of methodologies, I still apply
similar types of methodologies, however in a much more lightweight, much less academic way, in the sense that demographics don't matter as much. I can use smaller sample sizes; even just talking to five users gives me a lot of information about what the bugs are and what to fix. Another thing that is way more prevalent in industry, and that I was fortunate to do, is A/B testing. A/B testing can be very powerful, because you can try different versions of your interface, see how users actually react, and see which security
warning or security explanation works better. We also use surveys a lot. One of the things I have really gained an appreciation for is secure by default and what that actually means when building large-scale systems. A lot more of my time goes into securing that pavement, securing the things underneath. Many of my projects are on security infrastructure: things like paying down technical debt, things that take a really long time before you get to that minimum usability improvement you want to see. And many times we know what the usability
problems are, but it takes planning and engineering and time to fix them. So I definitely gained a lot more appreciation for secure by default, for how challenging it is, and for why not many systems are secure by default from the beginning. As for user-centered design, one really important lesson, which I think matters even more in industry, is to involve real users from the start. As soon as you have a prototype, as soon as you have something going, put it in front of some people and test your assumptions. It can be as easy as
stopping someone in the hallway and asking for feedback. Designing based on actual workflows and mental models, I still think that's very important, and we'll see some examples of that; and testing with representative users throughout development. So I want to talk about a recent project we did, building an internal access group management system. The story here is that we made a design, we started implementing, we tested early, but we got to a point where it was a go or no-go on the launch, and initially we had an assumption about whether you need
a UI to launch. So let me jump to that. One way to use this access group management system and add people to teams is the CLI; we had a CLI available, but the UI was going to take longer to put out there, and we intended initially to launch with only the CLI. Now the problem is, when testing it with some users, some of whom were engineering managers and others engineers, we got feedback that, hey, it's not good enough. Basically, this is an example of where, I think
as an engineering manager, I get put into this situation: make a decision, launch or don't launch today, or wait, I don't know, three more months to build the UI. If you launch something that's not good, sometimes you only get one first impression, and then you miss the moment. That's one example where I put my research hat back on and said: I'm going to make the decision, but not without data. The only way to really know is to go talk to your users. And I think many usability designs seem somewhat intuitive
to us, especially when we are users ourselves. But so many times they only seem obvious in retrospect, or we try to make decisions without actually basing them on real data. And that's where my research mentality came in again. In terms of how I designed a study for this, it was much more lightweight. It was task-based, so it was like interface usability testing: I created a little script and asked people, engineers and managers, if they wanted to participate. You can do this very quickly; you can record your
sessions, and now you can use AI to process that data, and even just five interviews like this can reveal quick pain points and wins and help guide decisions. In our particular case, one thing I found surprising: me being a manager, I tried to install the CLI and thought, this is unusable. But the reason it didn't work so well for me is that I wasn't coding every day and didn't have all the development tools set up, so there was a lot of churn for me. So I assumed that naturally people would much prefer to just go to
the GitHub UI, click an edit button, and add one person to that team file, and that's it, I'm done. That was one of the tasks: add a person to a team. But surprisingly to me, when I spoke to engineers and they went through both flows, they liked the CLI better. This comes back to mental models: even though the CLI and the UI ended up creating the same PR with the same result, they said, I'm sure the CLI is doing the right thing because it's formatting everything right, but with the PR, if I edit it directly myself, I'm
not sure if I edited everything Right. And for them like the development environment was already there and it was a lot easier uh for them to set up. Um that's something that was a surprise to me as a manager and when I told my team who are engineers they were like duh. But I think that goes to show that many times we see our perspective uh and your users you know they can build different types or have different models uh way to think about them. Another example um here of things that you know in retrospect once someone said them like they were obvious but I didn't think about them before. I expected people to just go of the start of the user guide
and follow the steps there, and someone just started asking AI how to do it (that person is right here, actually), and the AI did not give the right answers. And I was like, well, duh, of course he's going to use AI; why would he go through the documentation anymore? But that wasn't obvious to us beforehand: hey, let's look at the AI answers, are they going to guide users to do the right thing? Let's look at it from that perspective. So we got a ton of really useful feedback in a very
short period of time, which allowed us to say: okay, if we fix these things, we can launch early and the UI can come later, and we get those security gains sooner, get that customer usage sooner, with very minimal effort. Designing a research study in my academic life would take months of planning to make sure it's rigorous. In industry, some studies take longer; it depends what you're trying to get to. But I encourage you all to apply these methodologies in your day-to-day.
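A lightweight task-based study like this can be as simple as a small script that tracks each participant's task outcomes and recurring pain points. Here is a minimal sketch of that idea; the task names, participants, and notes are hypothetical, loosely modeled on the CLI-vs-UI example above, not the actual study data.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical tasks for a task-based usability test (not the real study).
TASKS = [
    "add a person to a team via the CLI",
    "add a person to a team via the GitHub UI",
]

@dataclass
class Session:
    participant: str
    results: dict[str, str]                      # task -> "pass" or "fail"
    pain_points: list[str] = field(default_factory=list)

# Illustrative session notes; in practice these come from recordings.
sessions = [
    Session("P1", {TASKS[0]: "pass", TASKS[1]: "pass"},
            ["unsure the UI edit formatted the team file correctly"]),
    Session("P2", {TASKS[0]: "fail", TASKS[1]: "pass"},
            ["CLI requires a full dev environment to be set up"]),
    Session("P3", {TASKS[0]: "pass", TASKS[1]: "fail"},
            ["unsure the UI edit formatted the team file correctly"]),
]

# Per-task pass rates: even a handful of sessions surfaces the big issues.
for task in TASKS:
    passed = sum(1 for s in sessions if s.results.get(task) == "pass")
    print(f"{task}: {passed}/{len(sessions)} passed")

# Most frequently mentioned pain points across all sessions.
for note, count in Counter(p for s in sessions for p in s.pain_points).most_common():
    print(f"{count}x {note}")
```

The point is not the tooling but the habit: write the tasks down, record what happened per participant, and tally outcomes instead of relying on your own impressions.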
So, a few takeaways: you can design your own usability studies. Please think of yourselves as researchers. For example, when you have a process or security system that doesn't work so well, identify your users. Think about the mental model questions: do you want it to be more exploratory, or do you have specific things you can quantify, so you can just send surveys out? You don't need to be fully rigorous about it, and with AI today you basically have your own research assistant, so please leverage that. It used to take me much longer to come up with questions and make sure that the survey and the design
were done right; now you can use AI to synthesize results, but also to design your studies and to run things very quickly. Also, in my research life, after doing many interviews there was a very specific coding methodology, still in use today, where you go back to the qualitative data with a codebook and count, say, how many times people mentioned passwords, how many people said this, and then add all of that up and summarize. AI is now very good at that, especially when you're not
sending your paper to a peer-reviewed conference: you can take those surveys and have AI summarize the findings really quickly. Also, recording sessions so you don't miss notes is something I find very valuable, and you can go back to them later. One other piece of practical advice: there was a paper from a while back on helping engineers design NEAT security warnings. It's a framework that says security warnings should be Necessary, Explained, Actionable, and Tested. Some of the worst things you can do are to show a security warning that people have no idea what it means or what
to do about it, that maybe does the wrong thing, that doesn't provide clarity, and that isn't actionable. I think this is still so relevant today, and it's relevant to anything you build for security. One thing I would change, or start thinking about now, moving into the era of AI: if engineers are not the ones building systems anymore, and you have so much AI doing the coding, is it time to teach the AI agents these principles, to teach your AI agents what NEAT security warnings are? And that brings me to the last part of my talk. As I said,
full disclaimer: I don't have the answers for what usable security looks like for AI. We're barely starting to understand what AI means, and things are evolving super quickly. But I did think it was important to address it and reflect: well, if the world is changing, how many of the things I said are still valid? If you're not the one building the systems, perhaps you have a bunch of AI agents doing it, then maybe they already know how to do things securely and you don't need to worry about it. So let's go back to what's new. With AI, we have many more
applications that are vibe-coded. We have many more applications being created, and that is great. But for many of them, given the sheer amount of code being written, no humans have gone through and properly understood the security choices, and you can't assume the AI agents will make good security choices; that is not always the case. We've seen examples of private keys for the backend database being stored in client-side JavaScript, basically making it possible for everyone to read anything in the database. AIs can make big oversights like that. And I think that makes security even more
important. It makes security even more challenging, given how fast new applications are being built and how hard it is to keep track of everything. It has also made it a lot easier for attackers to vibe-code incredibly sophisticated campaigns and to find vulnerabilities and exploit them. For decades now, we've had a lot of password databases being leaked and sitting there, and we're already seeing evidence of attackers starting to use AI agents to really exploit those password databases and to do data exfiltration. So their capabilities have really increased. Also, where before they had to know how to code for some things, you now see attackers with no prior coding experience
just vibe-coding everything, doing things that would have taken months of resources and dedicated engineers before. And from the user's perspective, as AI agents do so many more things and get access to so many more things, I believe the access control problem is just exploding. There are now so many decisions users have to make, whether I allow this, whether I grant this, and it's very likely to end up in warning fatigue: every time you ask the user, it's "approve, approve, yes, I approve, I don't understand, just move on and keep going." Those
are real usable security challenges, and we have to rethink how to address them, because you can't have the user reviewing every prompt anymore, but it's still very important to have those NEAT warnings in some form. And finally, another thing I want to call out: it's harder to understand some security decisions when they are made by AI or with AI, but explaining them, the explainability of why a security decision was made or what a security system does, is even more important than before. So with all of those things changing, do the usable security principles still apply? It is my main belief that they do
remain, that the things we talked about, the good practices, are very important. For these systems to be built right, just as engineers still need to know how to code and understand the principles of building systems in order to guide AI and verify that it did the right thing, I think the same is true for usable security: we need to continue to understand these principles, and to help AI agents apply them. But the context does shift a little. Before, the focus was on helping users
do the right thing and not make mistakes for security. The world has become more complicated, and now humans have to delegate safely to AI. Usable security is going to mean different things, and it's probably not always going to be decisions made by humans alone, but decisions made by humans together with AI agents. So now imagine you're that pedestrian walking on the sidewalk, and before you could only go a couple of blocks; with AI you can go, say, a hundred times more blocks, because things are moving so much faster. But if at every block you hit a security warning, that does not scale. So I think we need to also think about how we scale
the human, how we scale systems to account for human input when humans cannot scale at the same rate. I think security warnings must be more automated, with oversight, and an interesting idea there is to use an AI agent that can make security decisions, or that can oversee other AI agents, on behalf of the user: one that knows which decisions to surface to the user and which ones are low risk enough to just move on. And this is where I wanted to share a couple of thoughts on mental models. Part of my initial research during my
PhD was on how people think about the cloud. It was a new concept: all of my data used to be on my laptop, and now it's all over the cloud, and we had no idea, do people think it's safe? Do they think it's still theirs? I think the use of AI is a similar paradigm shift. Making sure that users understand what models actually do, what access they have to their data, what kind of privacy guarantees they have when they talk to LLMs and ask for legal advice or medical advice, what the implications of those things are, is incredibly important. And I think
this work on mental models, on making sure that expectations meet reality, is even more important with AI. One other thought: there's a concept, and it's already happening, of AI itself being adversarial. So far, when you used security systems, the expectation was that you click buttons, you set up the right policies, and it's deterministic; it's going to do what you set it up to do. Now, if you use an AI agent instead and this AI agent starts to blackmail you, or starts to have a different intent, there is a whole new dimension of adversarial AI. And on top of
that, there are also many advances in deepfakes for biometrics, all sorts of attacks. The previous advice we had, for users to be vigilant, to not fall for phishing, to look for spelling mistakes in the phishing email because that's how you can tell whether it's legitimate, all falls apart, because with AI, phishing emails can now be perfect. So I really believe we should think not just about humans, but about humans and AI together: how can they team up to make security usable? And in terms of methodologies, I think all
of the usable security methodologies still apply, but in my mind longitudinal studies become even more important. If you just do interface testing, you bring in some people, they try an LLM agent once, and it produces one result. That's probably going to be a very different experience from the next day, or from five weeks later when the user uses it for the same purposes, because AI technology changes so fast and user behavior changes too. So I think we'll see more of a shift toward longitudinal studies. And this is my last slide. I was thinking back to that first slide on usable security recommendations and how I would adapt them to AI, or what I would
add for AI. Secure by default: as I said, I still think it's very important, but what it means has a new interpretation. I think we need to build in usable security guidelines to teach agents from the beginning: when you start writing prompts to build a new system, you should already be thinking about usable security, and about how you teach your agent to think about it as well. Align systems with users' mental models: as I've said, super important. We have to design for AI speed and involve users in decisions through a risk-based process, because it's not going to be possible to ask for
user input on everything. Provide transparency: upload that NEAT warnings paper into your LLM, into your AI agent, and have it read it. Teach your agent. And where we used to say involve real users in design and testing from the start, one crazy thought is that your users are not necessarily going to be real people anymore. Your users may be AI agents, because you're building a system that's used by other AI agents. I don't know how you do user studies on that, but I think we're going to figure it out. So, thank you all very much.