
Okay. Wonderful.
Time for my closeup.
>> I try to achieve that status. I'm learning how to be more reliable.
>> It's a team effort.
>> And sometimes it's hard to volunteer and then be the team.
>> You teach a lot.
>> I do. I've got four classes I'm teaching, and I'm enjoying it a lot. So I have to learn how to enjoy letting my LLM chatbot do the coding.
>> And that could be a challenge. Is it better at it than you are?
>> I'm worse. Most of the advice I've been given is to try to prevent brain rot. Always treat the LLM as a partner. Never treat it as a proxy, and never treat it as just a tool, because it can do so much more than a dumb tool. There's a sweet spot in the middle: treat it as a partner. You have to be as engaged as it is.
>> Someone correct me if I'm wrong.
[Applause] [Music]
[Music] Please welcome the speaker we have with us, Dr. [inaudible]. She's the director of the nuclear safety regulator.
[Applause]
Thank you, everyone. Thank you, Lindsay, and the organizers. I love being part of this community, and I hope that I live up to the hype; I probably won't, but that's all great. A little bit about my background: I'm really glad my bio was read, but I have a PhD in psychology, and I work alongside a lot of highly technical people: engineers, risk analysts, computer scientists, and hackers. I have a personal connection to this community and spend a lot of time talking about security, about humans, about technology, and about energy as well. So my research spans a wide variety of spaces.
>> Okay, thanks. Yeah, I kept hearing the microphone as well. I don't think it's me or where I'm moving; I think it's the microphone. If you can't hear me, I can come a little bit closer, too. I can also raise my voice. Okay. My background:
>> We're gonna get you a hand mic.
>> I'm coming to this conversation from the perspective of someone who has spent their life studying human behavior. My education looked at different theoretical mechanisms for how the practice of things affects your memory for them. But we'll try this.
>> Testing. Yes. No reflection on microphones.
>> We tested the microphones pretty well before, but I didn't walk around. Okay. So, to start again: I'm coming at this conversation from the perspective of someone who has spent their life trying to make sense of human behavior, to really understand what makes humans special, what makes them behave the way that they do, and what makes them perform well. So obviously I have a bias towards humans rather than machines, and that'll come through in my talk. I'm going to talk about security, but most of what I'm going to talk about is AI, humans, and messy problems. And security is a messy problem.
Spoiler alert. I'm also going to talk a lot about what we know about human intelligence, what artificial intelligence is, and how we can apply those techniques to security. What I'm not going to talk about today, though I think it's something we should talk about as a community, is the threat of being able to very easily create things to do ill, and that is a very real thing that AI enables. Being able to create convincing videos, images, text, and voice, or to very quickly spam people with a lot of harmful things, is a really big threat, and it is a part of this conversation, but I'm not going to spend a lot of time on it today. We can talk about it in the Q&A if you want to, but just so you know, I'm going to skip that piece. It is a very real threat and a very real aspect of security and AI that we should be addressing and preparing for, but it's not the point of today's conversation. So why does hype matter? I'm going to make the claim today that there is a very large hype bubble around AI as a concept, and that it's harmful. The AI hype is harmful for all
contexts, but also specifically for security. So why does it matter? First of all, hype creates an environment in which we invest a lot in things that are not going to work. Huge, mind-boggling amounts are being invested in AI. Just recently I read about Mark Zuckerberg admitting that there's probably an AI bubble, a hype bubble. Sam Altman also made similar remarks. But the argument goes: it's okay, because while we risk wasting billions of dollars, there's a greater risk in missing out on superintelligence. And to me, that is the cornerstone of magical thinking, and hopefully I'll convince you of that by the end of today, if you're not already there. Because we're admitting that we're not getting the value out of the investments we're making right now, but on faith we're going to believe that someday we'll get to superintelligence and all these investments will be worth it. I don't think we have a historical precedent to say that that has happened or will happen. We have the internet and the dot-com bubble, which were bubbles whose investments created some value. But
this is an order of magnitude greater than that in terms of the investments we're making and the potential time and money we're wasting. The other aspect of harm is what happens when we don't understand the technologies we're building, how we're applying them, and how we are changing the landscape of human decision-making. And it is human decision-making: everything we do is for humans, or it should be. I don't know of any other reason why we would do anything than to improve our lives. So if we don't understand how we're changing the landscape of human decision-making and how we're taking shortcuts, we're going to implement things, and we are, that create real, tangible harm in all kinds of spaces: in law enforcement, in deciding who gets loans, who gets jobs, pretty much anything. We can make huge, consequential mistakes, and we already are, with AI. And this applies to security, because security is a messy social problem, which I will say over and over again. So we definitely need a much more sober understanding of what AI is, what it does, what it can do, and how we apply it to decision-making. So the first problem that I want to talk
about is AI itself. We all have an image in our head when we say AI, and in order to break through the hype, we need to start calling it what it is. Arvind Narayanan and Sayash Kapoor, two AI researchers, make a brilliant analogy in their excellent book AI Snake Oil: what if we had only the word "vehicle" to describe things as varied as a scooter, a rocket ship, a lawnmower, a steam engine, a car, a freight train? One word for vastly different things. If someone promises you a rocket ship and hands you a bicycle, you have a problem. And that is a very astute analogy for the current landscape of AI. We're using the term AI, artificial intelligence, to refer to vastly different things that do vastly different things, that use vastly different techniques, and that have vastly different capabilities. It's not serving us well when the term AI conjures in our minds everything from maybe spellcheck, if you're being not so generous with it, all the way to the Terminator, and anything in between and everything beyond. People think about artificial general intelligence and superintelligence, and there are lots of people who spend a lot of time describing what
these things are, what they mean, and why we are or aren't close to them. But when we say AI, we are talking about different things. From my perspective, AI right now is a set of techniques, and we're using the same word to describe a lot of different things. One of them is mundane automation, things that are simply automation. We're using it for data-driven automation, where we augment our automation techniques with real-time and historical data. We're using it for predictive analytics. We're using it for things like large language models, content generators, classification systems, and all sorts of things. From a security standpoint, we've been using these techniques very successfully for a long time: things like spam-filter AI, things like fraud-detection AI. We used to not call them AI. Now, if you look at any company's marketing information, any of their brochures, almost all of their products have AI built in, and they're describing a large number of techniques, usually using data to improve analysis. So, in order to make progress in understanding how to use these things, and in order to make sound decisions about whether we're using them appropriately, we need to start calling all of the different techniques that we're using what they are. And
I would suggest not using the term AI. A lot of us are forced to use it because it's a marketing term right now; if we want money from our sponsors or interest from our customers, we need the term AI to generate that excitement. However, it would be much better if we started calling it automation, or data-driven automation, or predictive analytics, or any number of other things, because then we would understand exactly what we're talking about, and we wouldn't have this image of some sort of superintelligence that's going to magically solve all of our problems without us putting thoughtful effort into how we're actually going to solve them. Do we have the right data? Are we validating things effectively? So: start calling AI what it is. Stop calling everything AI, because that's not helpful. So what is AI? As I said, there are a lot of things we're calling AI. None of it is intelligence, and the reason for that is simply that we don't know what intelligence is. You might be surprised to know that there is no consensus on what human intelligence is, or on whether some animals have intelligence. There are lots and lots of definitions. But
we don't know what intelligence is. We don't even really know how human memory works. We have some hypotheses, some information, some good ideas about how certain aspects of it work, but as psychologists, as neuroscientists, as the human race, we do not know how our brains actually work. And because of that, we can't say whether something is intelligent. We don't know if intelligence is something special, or if it's just really sophisticated classification, really sophisticated data-driven experiencing of the world. There are a lot of things that make humans special compared to machines. We have bodies; we experience the world through our bodies. There's a term, embodied cognition, for the idea that our thoughts, how we perceive and think about the world, and our behaviors are deeply rooted in the fact that we have bodies and experience the world through them. Maybe you can simulate that; we don't know whether anything makes it special, or whether we could create a sophisticated suite of sensors, inputs, and outputs that mimics a body in a way that captures all the important things and can lead to intelligence. So it's not intelligence, because we don't know what intelligence is. As for AI as a field, if you're not familiar with its long history, it's been around for a while. A lot of
my colleagues, when they talk about this recent surge of interest, feel a sort of exasperation: we've been studying this forever, we know all the limitations, and now everyone's an AI researcher, and nobody's listening when we say, hey, can we be a little more practical and a little less hyped? In the past, up until the 1980s, AI was dominated by one perspective, symbolic AI, which is building systems that do things the way humans do: building on human expertise to try to create intelligence, or to help make decisions the way humans make them. So things like medical diagnostics, all kinds of expert systems. And these had severe limitations. They were very brittle. They worked fairly well when you had good inputs, and they could even be better than humans, especially if you had humans help train them, but they weren't general. You couldn't generalize them; we couldn't create systems that did more than very specific tasks, and they could break in brittle ways. Then several things coalesced to create our current excitement around AI. In the 2010s, we had advances in machine learning techniques
with deep learning; we had huge amounts of data being generated and shared; and we had increasing computing power. All of those things made what we now think of as AI, data-driven automation and machine learning, come to the forefront and replace symbolic AI as the main area of interest in artificial intelligence. Many of these techniques started out as classification. Image classification was one of the main tasks and benchmarks we assessed our AI systems against: can I say what object is in an image? Can I do facial recognition? And that has advanced significantly over the past years. When I first had Google Photos and it tried to detect faces, it wasn't that great. It's pretty accurate now; it makes some mistakes, but not a ton. Recognizing and classifying someone, this is Katia, this is Ginger, this is Lindsay, that kind of stuff works. And those classification algorithms can be as simple as statistics, just really sophisticated statistics, or they can use machine learning, and machine learning is really where the AI story starts feeding this really big hype machine. If you read about AI, you know there have been periods of extreme hype and then periods where people get a little sober and pragmatic about it, the AI winters. I've been waiting for winter since ChatGPT came out. It hasn't come yet. I think it will, but it's not here yet, and it may not come, which may or may not be a good thing depending on what we do right now. But machine learning is based on an analogy, and this is one of the reasons I think we have this magical thinking. It's based on an analogy to the human brain: the perceptron essentially mimics a neuron, which is something we
know is in the human brain, and something we know is related to how we form memories and have thoughts. Machine learning is based on that analogy. A neuron is basically a unit, and in machine learning the node is the unit. We know that something about how neurons interact, the strength of the connections between them, is related to memory and thought: as we perceive things, stimuli activate neurons, and the more we're exposed to certain things, the more connections are made, and the strength of those connections has something to do with how we think. That is how machine learning works as well. So we have that analogy with the human brain. But we don't really understand the rest. Yes, we know how a neuron works; we know how signaling works; we know how different things affect the connections between neurons at the synapses. But we don't have a deep understanding of how we get from there to the complex behaviors and complex thought that humans produce. And so that analogy, while it's useful and we've done some amazing things with it, doesn't necessarily represent what human intelligence is. And it's interesting to me as a psychologist, because with the analogy between computers and psychology,
it goes back and forth, right? Behaviorism was the dominant way we thought about humans in psychology for a while, and the tenet of behaviorism is that we can't know what's going on in the brain; all we can do is observe behavior. So behaviorists don't speculate about what's happening in the brain at all, because we can't possibly see it; they only observe behavior. Well, that was somewhat limiting, because a lot of what we want to know, a lot of the big questions, can't really be studied from a behaviorist perspective. And so we had a shift to the information processing theory of human cognition, which uses a computer analogy: you have inputs, you have processing, and you have outputs, and that's how the human brain works. So there's this back and forth between human brains and computers in how we think about them. We borrowed from computers to try to understand how the human brain works, and we borrowed from the human brain to describe how we might make computers think or act
more like human brains. That kind of gives us an unjustified assumption that there's a deep connection between how computers work and how the human brain works. But I'm going to argue that they're very, very different, and that just because machine learning has advanced what we can do with computers doesn't mean it works anything like the brain, with the exception that the analogy of neurons is very helpful for doing things like machine learning. So machine learning, as I've said, is based on the analogy of neural connections: I have a node that represents something, and I'm going to adjust the strength of connections. And it turns out that when you create really complex networks of these things, with multiple layers, millions and millions of parameters, tons of input, and lots of training, you can do some pretty amazing things. You can do image classification to the point where you can recognize objects, and we've been able to do some pretty amazing things with speech translation and, obviously, with text. And that brings us to text and large language models, which I would argue are the thing everyone thinks of right now when they think of AI. When we talk about AI, people are not talking about those other techniques.
They are, to some degree: people developing predictive analytics are thinking about that kind of tool when they think about AI. But almost everyone is betting on large language models being able to solve all of our problems. So why is that? There are a lot of different techniques here, from basic statistics to really sophisticated statistics to machine learning, all the way to large language models, which use a mix of techniques. There's also reinforcement learning, which is learning from experience; robotics uses a lot of reinforcement learning. The simplest description is that it's the same way you train a dog to do something, except that you're not necessarily rewarding the behavior you know will lead to the desired outcome; you're rewarding based on how far you are from the desired outcome. So reinforcement learning is: I'm a robot, I'm trying to get over there, I move around the room, and you reward me with some function of how much closer I get. That works okay for robotics and other kinds of problems, and adding that kind of technique to other techniques can make them better. And so pretty much every modern AI system uses a mix of these techniques. But ultimately, all they are doing is taking data as an
input, looking at all the different patterns, it's pattern recognition, and then making a decision about an outcome. Maybe that's all humans do, right? Maybe we're just sophisticated classifiers of the information we receive. I think we could probably create a model describing a lot of human behavior that says that's exactly what we are. But I think we're nowhere near creating that sophistication, whatever it is that makes humans intelligent, at this moment, and I don't think the techniques we're using are going to get us there, especially large language models. So what are large language models? They use machine learning, they use text, and there are some sophisticated things layered on top, but really they are predictions of the next word. And it turns out that when you take all the text in the world and train a model with it, you're going to get very plausible and compelling outputs. Can you imagine what any one of you could do if you could read literally everything in the world, and what kinds of exceptional output you could produce? So large language models have a huge advantage over people in that sense: they have access to literally everything that's ever been
written in text and digitized. And that raises the question, right? The outputs of large language models are extremely compelling. You ask GPT to pull some research for you, or to write something in someone else's style, and it creates something really cool, really compelling. With the techniques we have now, if any one of us had access to all of the scientific research, and technically we do have access, but say we could actually process all of it, which is an advantage machines have over us, can you imagine what you could do? What kinds of discoveries you could make if you could literally process all of the scientific research that's been done, read all the scientific papers? I can't even read all the papers in my own field; I certainly can't read all the papers in the field of AI. In this sense, the machines have a huge advantage over us, and so we are actually seeing major discoveries made with AI. That's something to consider. In the same vein, think about autonomous cars: a Tesla, or any car with automation or autonomous capabilities planned or built in, has a ton of sensors that allow it to perceive in all directions at once, something a human can't do. Humans can't look behind us, in front of us, and beside us and process all that information at once. So why aren't those vehicles better at driving than humans? Because humans do more than machines do. So AI is not intelligence. AI is a set of techniques that take data, process it, and provide an output. And when you think about it that way, it takes some of the mystique out of it. And so we're
going to talk a little bit more about generative AI and why I think it creates magical thinking. So, who here uses ChatGPT every day? Okay. Or Claude, or whatever your favorite is, Gemini, Copilot? Can someone share what they use it for?
>> Researching topics.
>> Okay.
>> Writing emails.
>> Writing rote emails, I love that example. I like researching topics.
>> Okay. Coding. That's an interesting one. Okay, let's get a couple more. Let's go back there, Lance.
>> Correcting grammar.
>> Okay. Are you willing to share an example?
Yeah.
>> Okay. So, changing wording. We'll start with rote emails. I think generative AI is a great tool for doing things that are kind of a waste of our time. My question then is: why are we doing those things at all? For things that are a waste of time, things that are templates, maybe it is great. It saves us time. But should we be doing them? I also love the coding example, and this is where I can't judge effectively, because I don't code every day, but I've heard from people who write code for their jobs that it's absolutely exceptional for saving time on all kinds of things that are time consuming. And that makes sense to me, because if you're coding something you haven't coded before but somebody else has, what's the first thing you do? You search the internet, you pull something off of Stack Overflow, you change it up, and great, you're done, or you do some debugging. ChatGPT works here because, one, code is extremely structured, and two, there's a lot of it, with a lot of explanatory text and conversation around it on the internet and, basically, in all of our computers and networks. So it's actually a great tool for that kind of thing. There is a very interesting implication for security that I'm sure we've all considered: you're not writing the code, so do you know what's in your code? Are there threats and risks to having your code generated by something other than you? I don't think those risks are particularly unique to generative AI, but I think they can be exacerbated by it. As for your example of trying to communicate better, the reason that's compelling is that there are a lot of people who've
written a lot of really good stuff, shared it on the internet or in text, and it's been digitized and used in these models. People put a lot of thought into how you communicate hard things to other people; they've written books and blogs about it, and so we can draw on that experience very easily. You don't have to go read the book; you can just take the output. So there are a lot of really good uses, and they're compelling because we've found these really useful things. Whether writing a time-wasting email is a compelling use case, well, it kind of is. But it's really compelling when you can say: I can find a way to formulate my thoughts so that I'm more thoughtful and sensitive in how I interact with my family and my co-workers. I have used it to plan out hard conversations with people, and it works really well for that, but only because, just as with code, a lot of humans put a lot of thought and effort into making those things work, into making the material that trains the models. And without that human
effort, this is worthless. And so the bottom line here is that humans have driven it. It's not the AI, it's not the techniques; it's just quicker, more efficient access to human expertise, and there are limitations to how well that works. The other aspect of generative AI, and I think the reason we've put so much faith in AI since ChatGPT came out, is language. Because we're social animals, we are hardwired to view language as a sign of intelligence. We naturally interpret anything that uses language as intelligent, even text coming out of a machine, and this goes back historically to the very beginning of machines being able to simulate speech. When we interact with something that uses language, especially natural language, we naturally overestimate its capabilities. And that's what we're doing: we're overestimating capabilities, thinking about the potential, investing in things based on the potential that they're going to work, despite a lot of evidence that many of the things we're investing in are not going to work the way we expect. And that's where a lot of the news is right now: a lot of investments in AI
companies and enterprise-wide adoption are not finding the return people expected. So I'm going to make a very simple argument: AI can do things that humans already know how to do. Think about the really good examples, like AlphaFold. Protein folding is a very difficult problem: the way a protein folds matters for how it behaves, and it's really hard to predict. Feeding a system a huge number of protein molecules and how they behave lets it predict new ones: try a new fold, then see and predict what it's going to do. That only works because we have a huge amount of human-generated data where we already know the answer. We're not starting from scratch. The magical thinking comes in when humans have no idea what's going on. We don't know how to predict whether something is an adversary or not. We have some inputs, a bunch of logs: is this good or bad behavior? Humans know how to make these kinds of judgments; we do it all the time. But the magical thinking comes in when we think that, for any given situation, we're
going to be able to predict, based on data that changes, that is dynamic, and that humans don't understand, whether this is good or bad behavior, whether this is a friend or a foe. If we don't know how to solve a problem, we're not going to be able to automate it, and we're certainly not going to be able to automate it with AI. Machine learning can basically classify things: is X in group Y? Are X and Y similar? Have I seen X before? We can ask those questions with machine learning classification systems, and that covers a lot of what we do. Classification is: is this good or bad network traffic? Is this person creditworthy? Is this person likely to become a criminal? We can ask those kinds of questions, but only if humans already know how to answer them well. Do we know how to answer whether Lindsay is going to commit a crime? No, we don't; I'll tell you that. And security has very similar kinds of questions. Based on the data we have or can collect, we can't answer all the questions that matter in security, and so we can't use AI to solve those problems. If humans can't already say, with X, Y, and Z data or information, we would make this decision, then we're not going to be able to automate it with AI. I think what people tend to believe, because AI is based on data, is that data is magical, that data is an objective representation of the universe. If you research social problems, you know this is not true; I've spent my whole life trying to get data that answers very basic questions with very controlled experiments, and it is extremely hard. If you're a physical scientist, you may believe that data objectively represents the universe. But I've asked engineers, you
know, what does a thermalouple do? Does a thermalouple measure temperature directly? and they all say yes and I say no it doesn't. Um what you're you're actually measuring is a voltage difference and you're inferring temperature based on that. And what's different about that is that it's we kind of understand the underlying principles. Um we understand the uh it's it's not as variable as some of the other problems that we look at. And so when you're working in the physical sciences, you get this sense that data represents objectively the world, but it never does because data is a human construct. We create the data, we define what it means, and when it's reliable and does, we can predict it really well
because we have good models of what's happening that are not very variable, then it works great. Um and so in those kinds of situations where we have good data, we understand the underlying principles and we have good models, we can use AI to solve some problems. We can do some probably some amazing things. But unfortunately most problems those prerequisites aren't met. Um and so let's talk about security. Um I think it's really tempting as a security professional um to think of security as a technical problem right the techniques we use are very highly technical correct right so coding is very technical is very technical um analyzing logs network logs network traffic reverse engineering all the techniques we use
are extremely technical. But is security a technical problem?
It's a rhetorical question. No, security is not even remotely a technical problem. Security is a social construct. While we may use technical techniques to achieve security, to assess it, to monitor it, security is inherently a social, behavioral thing. Systems don't exist for their own sake; we have systems to do things. We have financial systems to enable trade. We have monitoring systems to protect the things we care about. We have energy systems to create energy so we can keep our lights on and stay warm. We have social networks and social media systems so that we can connect with each other and
you know, stroke our own egos. All of these systems we're trying to secure exist for a reason. And whether someone should be using a system, whether someone is using it in a way they should or shouldn't be, that is a social construct, and social constructs are extremely messy. We can define whether you're an authorized user for this moment, but we don't know how to predict whether you're an authorized user for all future contexts, or maybe even for tomorrow. Whether you should be accessing a certain system can change between now and tomorrow. Because of that, all of the limitations of AI I've mentioned are extremely relevant to the problems of security.
What I see is this thinking that we'll get there if we just get enough data. Anyone who's tried to analyze logs for a specific purpose will recognize that sometimes the logs don't exist, or the information or context that tells you whether traffic is good or bad, or whether someone is a good or bad user, doesn't actually exist in any data you can gather. Because of that, we can't get to the point of using AI the way a lot of people think we can without, first, making very thoughtful, deliberate efforts to ensure we have the data we need for the decisions we need to make. We need to define the decisions we're going to make very thoughtfully. We need to test and validate whether, in the context in which we're using these things, we actually can make those decisions. And we need to understand that everything changes. A system that works today can stop working, and we see this with things like spam filtering and fraud detection. Your spam filter works really well, and then all of a sudden you get a bunch of spam in your inbox because something changed about the way the people sending spam formulate it. Because what is spam? Spam is not a technical problem. It's not just how many people are recipients of the email, or the form of the email, or the text of the email. It is whether you want that email in your inbox or not.
That is essentially the bottom line of using AI, or any data-driven technique. Which, I will say: just stop using the word AI. That's what I'd like from everyone in this room. Start calling it what it is. Any data-driven technique applied to security needs to respect the limitations of data. What is the data we have? What decisions are we trying to make? How would a human make that decision with the information we have? With that, I think I'm close to running out of time, but I do want to be able to answer some questions.
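The classification questions from the talk ("is X in group Y?", "have I seen X before?") can be sketched as a toy example. The talk's point holds here too: the classifier can only answer because humans already supplied the labels. The feature values and labels below are invented for illustration:

```python
# Minimal 1-nearest-neighbor classifier: "is X in group Y?"
# It can only answer because humans already labeled the training data.
import math

# Invented toy features: (bytes_sent, distinct_ports) with human-supplied labels.
labeled_flows = [
    ((500, 2), "benign"),
    ((620, 3), "benign"),
    ((50_000, 40), "malicious"),
    ((48_000, 35), "malicious"),
]

def classify(flow):
    """Label a new flow by its closest human-labeled example."""
    _, label = min(
        (math.dist(flow, features), label) for features, label in labeled_flows
    )
    return label

print(classify((550, 2)))      # lands near the benign examples
print(classify((51_000, 38)))  # lands near the malicious examples
```

If the world changes, as with the spam example, the human-supplied labels go stale and the classifier quietly degrades; nothing in the code notices.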
We're getting another mic over to the checked shirt back there.
>> What do you think would be the outcome?
>> That's a heavy question. What is the outcome of this? Well, I think a lot of people are going to lose a lot of money, and a lot of people are going to make a lot of money, and already have. Honestly, I don't know. It's a great question, and it keeps me up at night. The thing I'm worried about is that there are a lot of really legitimate uses of data-driven techniques that might lose credibility. We might not be able to fund things; we'll probably just find another word for it. But I worry that some really legitimate, good uses of data-driven techniques will lose credibility. I also think a lot of decisions are already being relegated to systems that cannot make them, and I think we're going to continue to see that, unfortunately, and it's doing a lot of harm. Things like facial recognition: if it's your Google Photos tagging you, it doesn't matter, but if it's deciding whether you should be arrested, it does matter. And I think we're going to continue to do things like that. As for the hype bubble, I think we'll just see a decline in investment, and a lot of the things we're excited about right now are going to go away. Beyond that, I'm not sure, and it keeps me up at night.
>> You were talking about the idea of human behavior and AI. I started university before all of this, and I'm now finishing university while it's in full swing. I'm curious whether you have any ideas about AI's place in education. Maybe that's a research question.
>> Yeah, that's a great question, and it gives me an opportunity to address another aspect of the previous question, one of the things that keeps me up at night. When we ask this question about education: AI might be able to accelerate your ability to do research, but the whole point of education is to teach people how to think. We're not paper producers. Even in my job, my job is not to produce papers or reports; those are the deliverables I provide, but my job is to think and solve problems. That is a human activity that we cannot relegate, especially in education. To the extent that you can find information that you couldn't find easily before, great. But when people do a literature search and don't go read the primary sources, they are, one, in my opinion, committing academic misconduct, and two, not doing the thinking. And a big part of being human is thinking; well, not the whole point, but a lot of education is learning how to think, learning how to understand things. We're not knowledge consumers. We are thinkers, and the point of education is to learn how to do something valuable. So, to the extent that education is not a commodity, I think AI could have a small role. Fixing your grammar, making your emails a little better, I think that's fine; I'm not sure it's worth the cost, but it's okay. Maybe providing feedback, providing instant access to really useful tools, I think that's its role. But I think beyond that it's going to be
a detriment to education. Great question. I think Lance has the mic, and then Sud's hand is up.
>> So, one longstanding view of machine learning is that you're basically fitting a function to data, and our functions have certainly gotten more complicated as machine learning advances. The first question is: do you think that's still all machine learning is, fitting a function to data?
>> Yes, absolutely.
>> Okay, and the follow-up question: isn't that what your brain is? Isn't your brain just fitting a function to data?
>> Honestly: yes, absolutely, to the first question, and I have no idea on the second. I had an intern this summer, she's super smart, who was making that argument. I don't know whether our brains are just fitting functions to data, but we don't do it the same way computers do. Babies can learn new things from one or two examples; we don't need gazillions of data points. Maybe we have a billion parameters we don't see, but I can learn how to classify something from one example. I learned your face from one interaction, maybe five now. So maybe the answer to your question is that humans are just fitting functions to data, but we're not doing it the way machines do it. There's something different about the way we do it that makes us better at it, and worse in some cases. Ginger?
>> I'm really struck by one of your opening comments, that the entire purpose of what we do as humans should be to benefit humanity. Each of us in this room is a steward of this new AI technology, which, as we've talked about, has a good side and a dark side, and in some ways today is serving to benefit some of humanity at the expense of other humans or aspects of humanity. Can you offer any advice, as we step away, on how to be good stewards in our usage?
>> Be thoughtful, I guess. Question why you're doing it, and be really critical about whether it's actually helping. Going back to education: why am I doing the thing that I'm doing? What is the purpose? What exactly am I trying to accomplish with this? I think we should all be reading about the impacts in other countries, about the people who are training these systems and being exposed to extremely graphic content for things like content moderation, about the human cost of everything we do. That's independent of AI; there's a huge human cost regardless. But also be thoughtful about the costs. To me it's astounding that multiple gigawatts of energy are now being diverted to data centers. I'm in nuclear power, and in one sense that's great, because data centers and AI make nuclear extremely relevant, and there's some really good excitement about that. I have mixed feelings that this is what's creating a resurgence of interest in, and relevance for, nuclear power, which I do think is something we need to invest in and advance. But at the same time: what value is all this stuff bringing into my life? What am I trying to accomplish with this information, with this tool? And is it actually accomplishing that? So, yes: be thoughtful, read as much as you can, and understand as much of the impact of these systems as you can. And know that messy decisions cannot be automated, ever, and they should not be automated, because messy decisions require extremely thoughtful assessment of widely variable input parameters, and each situation is unique. Unfortunately, that means we have to work hard to solve those problems.
>> We'll take one last question, I think, right here from Mr. Wayne Austin, and then we'll give Katcha a round of applause before we leave.
>> You may have already covered this at some point in your keynote, but what are the one to three calls to arms that you have for this audience? What would you have us do?
>> Well, the first one is: call it what it is. When you're talking about AI, if you're developing AI, using AI, investing in AI, call it what it is. If it's automation, call it automation. If it's data-driven automation, call it data-driven automation. The second I've already said: be thoughtful. Anytime you are doing anything of consequence, if you are a decision maker or are influencing decision makers about the use of technology in general, but specifically AI, first try to understand the problem you're trying to solve, then be really critical of whatever technology you apply to solve that problem, and understand the consequences of what you're doing. And the third is to read more about AI, and also to use your brain. Our brains are special. I'm not sure why, Lance, I don't know; maybe we're just really sophisticated function fitters, pattern matchers, classifiers, and that's okay. I'm okay if that's the answer. But our brains are special, and humanity is special. So please remember that, and continue to value human cognition and human capabilities. Don't become enamored of technology at the cost of human existence and prosperity. That's it. Thanks, Wayne. [Applause]
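As an aside on the Q&A exchange framing machine learning as "fitting a function to data": the simplest version of that idea is an ordinary least-squares line fit. The data points below are invented for illustration:

```python
# Ordinary least-squares fit of y = a*x + b: the simplest instance of
# "fitting a function to data" from the Q&A.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]  # roughly y = 2x + 1, invented points

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope = covariance(x, y) / variance(x); intercept makes the line pass
# through the mean point.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x
print(a, b)  # slope ≈ 1.96, intercept ≈ 1.1
```

Modern models fit far more complicated functions with far more parameters, but the shape of the activity is the same: parameters chosen to minimize error against human-provided data.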
>> Thank you so much. That was excellent. Everybody, we are back here at 1:30 for our exciting lightning talks, right here on this stage. It is 1:30, correct? Yeah, 1:30. Please go get a quick lunch, and we'll see you back here for a full afternoon of scheduled talks.
>> Hey, hello everybody. Welcome back from lunch. Joseph, are we good to get started? Is that a yes? Sorry, yes, we're good. Okay, it looked like a number one; I was like, thank you. Perfect. I hope you all had a great lunch and got the opportunity to get off campus and get some sunshine. Thank you for coming back. We're going to do something really exciting that we've actually not done in this format before: we are bringing everybody back into the TAB auditorium for our back-to-back lightning talks. They're 15 minutes max, which gives our speakers a chance to get on stage and talk a little, and gives you a little introduction to their topic. Some may have time for questions at the end, some may not. If there's no time for questions, I encourage you to go find the speakers and talk to them afterwards. So please give all of our lightning talk speakers a very warm welcome. [Applause]
Our very first speaker will be talking about phishing yourself for profit. He's a security engineer, and he'll tell you a little about himself in his 15 minutes. I'll turn it over to him now.
>> Hi. As you heard, I'm a security engineer. I like doing CTF labs
and random ideas that pop into my head. One such idea was a driving couch: I didn't want to get off the couch to go get the TV remote, so I made the couch go to the TV remote. Fun stuff like that.
>> So, why am I here? We've been talking about phishing forever. You're probably super tired of hearing about phishing: yeah, solved problem, we've worked on it for a long time. But I want to plant a seed in your mind about where phishing could go and where it might go. We're seeing a pretty big jump in tools and capability that help expedite phishing and do it better, from an adversary's perspective. And I think it's very important to know what's out there about ourselves on the internet: not just how it might get used against us, but how it will be used against us in the future. Why do we care? You might think: hey, I'm not some multi-million dollar crypto trader raking in all the dough, I'm not the CEO of a Fortune 500 company, I'm not a target for one of these things. Now, all statistics are basically lies, so this is more to emphasize my experience, but phishing attempts are getting harder to catch and they're coming more
often. We're seeing a lot more come in, targeting a whole bunch more people, all the time. We're also seeing an increase in attack vectors: phishing coming in via LinkedIn, WhatsApp, all those kinds of platforms. The platforms are diversifying; it isn't just email anymore. And we're seeing attackers leverage new tools. We're seeing a lot more click-through when they leverage LLMs: an untrained spear phisher saw something like a 30 to 35 percent jump in people clicking through those phishing emails. And everyone's using it; there's a whole list of people. With this comes a decreased cost to run targeted spear phishing attacks: whereas before only experts could do this stuff, and it was pretty costly, we're driving the cost down, and that makes it more economical for attackers. Instead of targeting big enterprises, they can start moving down the chain toward small businesses, because now it's cost effective to target those small businesses. Also, maybe you're a bit curious about what's out there about you, and who has access to it.
So, what can we do? As I said, phishing is a tricky, hard problem to solve. There are multi-billion dollar companies out there trying to solve this very problem. But just because we can only do a little doesn't mean we should give up and let attackers do whatever. At the end of the day, you have people in HR who have to review applications; their job is to click on links. It's pretty much impossible to stop all of these, but we can do
our best. So, looking at the anatomy of a targeted spear phishing attack: first, research and reconnaissance, general stuff about who the target is, what they do, what they're interested in, what their circumstances are. Then targeting: personalize those details to them. Then construction, the actual nuts and bolts: the email itself, the perception tricks, like a link that reads like google.com but is a slightly-off look-alike domain, and the psychological hooks like the ones I'm describing right now. So where does your data escape? Where does the information used to target you come from? Some of the places: social media, professional profiles, public records, anything someone can just go look up about you.
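The look-alike-domain trick mentioned above (an address that reads like google.com but isn't) can be illustrated with a small check. The confusable-character table below is a tiny invented sample, not a complete homoglyph list:

```python
# Flag domains that turn into a known domain once common
# look-alike characters are substituted (tiny illustrative table).
CONFUSABLES = {"0": "o", "1": "l", "rn": "m", "vv": "w"}
KNOWN = {"google.com", "paypal.com"}

def normalize(domain):
    """Replace common stand-in characters with the letters they imitate."""
    d = domain.lower()
    for fake, real in CONFUSABLES.items():
        d = d.replace(fake, real)
    return d

def is_lookalike(domain):
    """True if the domain imitates a known domain without being it."""
    return domain.lower() not in KNOWN and normalize(domain) in KNOWN

print(is_lookalike("goog1e.com"))  # '1' stands in for 'l'
print(is_lookalike("google.com"))  # the real thing
```

Real homoglyph attacks also use Unicode characters and internationalized domain names, which need far richer confusable tables than this sketch.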
LinkedIn, Facebook, Instagram: stuff you just post about yourself can be leveraged. Then there are public records, and stuff attackers can get, or get someone to get for them. And then there are data brokers. You've probably never heard of most of them; they don't like being talked about. A lot of these companies collect public records, anything they can get their hands on, some of it surprisingly specific, like flood risk. They have information like: oh yeah, this person is more vulnerable to floods, so I'm going to send him an email about that. You might say there are data protection laws in place to stop them from just giving out that data, so there's no way attackers will get their hands on this stuff. But these companies collect data breaches like Pokemon cards: their data is constantly getting exposed in some way, and your data is probably eventually going to end up in someone's hands. Some tools you can use: you've probably heard of the big ones if you've ever watched YouTube, services you can pay to go remove that data. I'm a bit skeptical, because to remove that data they technically have to collect it in some cases, so I'm a little more skeptical of those. And then there are opt-out lists you can work through yourself, opting out of all these data collection schemes, and get a little less of it out there. So, some
tools: OSINT frameworks, which some of us are probably pretty familiar with. I find most of these tools are generally okay, some are better, but they're a good starting point for researching yourself and what's out there. You can also file data access requests with the big vendors; whether you hear back in a reasonable amount of time is up in the air. I filed some six months ago and still haven't heard back, so we'll see. LinkedIn is freaking fantastic for this: I find plenty of people just have their email directly on their profile, so you can go straight to their email. If you're curious how to turn that off, you can dial the visibility of your email address down; pretty easy. Hunter is a platform mainly for marketing people, for getting business contacts, and it's also freaking fantastic for working out how to target people's emails.
Now I'm going to walk through an example of how I would phish me. I expect you to do your homework and see if you can phish yourself by the end of the day. So, this is me: I went to a concert, and I posted about it. I figured out you can register for these events for free online, so you register for something free and get the email receipt. It's an order receipt for $0. But we can simply edit that email to say $3,000, put our own link in there, and boom, I have the perfect phishing email, because I've seen that exact email before. If I got that email with the changed amount and link, I'd go: oh frick, they charged me $3,000, I'd better click on that and see what the heck's going on. So that's a pretty good method; the exact-copy email is pretty effective. But we can do better. I leveraged an LLM to rework the email: hey, I am a student at a university, you are a marketing person, tailor this email to a student. And now it's: get a full in-person class for $5.98, 70% off. That's pretty attractive to a college student; saves a lot of money. That took 10 seconds instead of an hour and probably increased the effectiveness. This is how I generally feel about LLMs: they're only as smart as you are. Thank you. [Applause]
>> Yeah, so this one, as I said, essentially what it's saying is that while we are getting better at catching these, maybe it's not enough.
>> Exactly. This is something I didn't get to, but I have a system about 80% done that reads my entire inbox and then sends me a tailored phishing email at a random time during the week. It's not finished yet; I have about half of it complete.
If you have any other questions, please feel free to find him afterwards, and let's give him a round of applause.
>> Thanks, everybody, for your awesome questions. I'm excited about this track. Coming up next, our speaker is going to talk to us about SBOM tools: are they as reliable as they claim? As he's getting set up over here, I'm just going to give another plug for volunteering.
>> Yeah, that's what happened to Zack: he graciously agreed. So, thanks. Anyway, if you were in track one, track
Thank you.
Good afternoon, everyone. Hi, my name is MD Foster, and today I'm going to talk about SBOM generation tools: are they as reliable as they claim? A little bit about myself: I'm currently pursuing my PhD in computer science at Idaho State University, and I'm a member of the software engineering and cybersecurity research team there. Now, let's start with a simple thing. When we want to buy a product, for example a chocolate bar from a supermarket, what do we check after the price? The ingredients, right? To check whether any of the ingredients are bad for our health. Suppose from today this product were sold without the ingredients listed on the back. Would we still buy it confidently? Of course not. Then what about the software we use every day on our phones and computers? Do we know the ingredients inside it? No, we don't. That's where the SBOM comes in. A software bill of materials, SBOM for short, is a list of all the components, dependencies, and metadata within a piece of software. Most of today's software relies heavily on third-party components to speed up development, but cyber attackers don't just target the main software; they also target those third-party components. For example, the Log4Shell attack in 2021, where attackers exploited a simple logging library, injected malicious payloads, and affected millions of systems worldwide.
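To make the "ingredients list" idea concrete, here is a minimal CycloneDX-style SBOM fragment listing the vulnerable Log4j component from the example above; the snippet is abridged and the values are illustrative:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

With such a list in hand, an affected organization can answer "do we ship this ingredient?" by searching its SBOMs rather than its binaries.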
To address this kind of issue, the US government mandated SBOMs for federal software. But while the US government mandated it, SBOM adoption outside the US, and in non-mandated organizations, is still limited, and legacy software makes things even harder. So what's the solution? Generate them automatically, right? And there are tools out there that claim they can create SBOMs automatically and accurately, but their actual effectiveness is still under-studied. The two major formats are CycloneDX, from OWASP, and SPDX, from the Linux Foundation, so these tools can give output in either of those two formats; the lack of a single standard format makes things even harder, and there are few studies that actually validate the effectiveness of these SBOM generation tools.
So we addressed two questions. First: how effective are SBOM tools at identifying components and dependencies in Rust projects? Second: is there any significant difference between the comprehensiveness of SBOM tools that output CycloneDX versus SPDX for Rust projects? This was the first study to evaluate SBOM tools on Rust projects. We focused on the two formats: tools that give output in CycloneDX or in SPDX. Rust was chosen mainly because it is increasingly popular for CPU-intensive and high-performance software. We followed a five-step methodology: first select the SBOM tools, then select the Rust projects, then generate the ground truth, then run the SBOM tools on the Rust projects to generate SBOMs, and finally compare the generated SBOMs against the ground truth.
To walk through the steps: for tool selection, we chose tools based on Rust compatibility, tools that can be run from a command line interface and give output in either SPDX or CycloneDX. We found seven freely accessible tools that met our criteria: two gave output in CycloneDX, three in SPDX, and two in both formats. Then we selected 50 projects from GitHub that were primarily written in Rust and included Cargo.toml and Cargo.lock files; the 50 projects were selected by star count, and you can see their summary. Then we built our ground truth from cargo metadata, which gives output in JSON format; this JSON file contains the components, their versions, and the dependencies for each project. We extracted the component names and versions from both the ground truth and the generated SBOMs and compared them, with both exact and minimum version formats. Dependencies were broken down into single pairs, which we then compared against the ground truth. Finally, we evaluated the tools. This is the summary: in this table you can see which SBOM tool is compatible with which format, CycloneDX or SPDX, and in the dependency column you can see that three tools, CRC, it-depends, and SSG, actually have dependency information capability, and the others don't. And we ran all the tools on the same 50
projects, and you can see that some of the tools actually fail on several projects. Now let's look at the findings for component name identification. If you look at the last column, Syft gives the highest accuracy and recall, and the CRC tool gives the highest precision, but most tools have low accuracy, with a high number of false positives. Now let's look at a more granular view. These box plots give more insight: for example, only a couple of tools reach a median accuracy slightly higher than 60%, and you can see that two of them have a wider spread in their distributions, which means
those tools are inconsistent in their performance. Now let's look at component version detection. Again, Syft gives the highest accuracy and recall and CRC the highest precision, and if we look at the more granular view, none of the tools has a median accuracy or recall above 60%. Again, the ones with a larger spread are inconsistent in their performance. For component dependency detection, I mentioned before that only three tools have the dependency-detection capability, and of those three, CRC and SSG give moderate precision, but their accuracy and recall are very low, which of course is reflected in the granular view as well. So what's the takeaway? We found that all tools show moderate
performance in identifying component names and versions, that most of the tools lack dependency-detection capability, and that those that do give dependency information perform very poorly. We also found no statistically significant difference in accuracy, precision, or recall between generation tools that output the SPDX format and those that output the CycloneDX format. Then we did a similar study for JavaScript projects as well. As you know, JavaScript is the most popular language on GitHub, so we similarly selected 50 popular npm JavaScript projects. We found four tools that met our criteria; this time we only considered CycloneDX tools, and out of those four tools, only two have dependency-detection
capability. And yes, we found that CLM, a CycloneDX tool, gives the highest performance in component identification and version identification, and that a combination
of methods might be more helpful than depending on just one tool. So this is our website; you can find more details there, and if you have any questions or comments you can find me after the sessions. Thank you.
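As a side note on the metrics used throughout this talk: accuracy, precision, and recall for component identification might be computed along these lines, treating the components a tool reports as a set and comparing against ground truth. The package names below are made-up illustration data, not the study's.

```python
# Hedged sketch: scoring an SBOM tool's reported components against
# a known ground truth. Names here are hypothetical examples.

def score(reported: set, truth: set) -> dict:
    tp = len(reported & truth)   # components the tool got right
    fp = len(reported - truth)   # components the tool invented
    fn = len(truth - reported)   # components the tool missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # accuracy here in the Jaccard-style sense: correct over all distinct
    accuracy = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return {"precision": precision, "recall": recall, "accuracy": accuracy}

truth = {"lodash", "express", "react"}
reported = {"lodash", "express", "webpack"}
print(score(reported, truth))
```

A tool that invents one component and misses one, as in the example, scores 2/3 on both precision and recall but only 0.5 on this accuracy measure, which is why the talk can report high recall alongside lower accuracy.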
Hey, thank you everybody. All right, our next speaker is a PhD student at Idaho State University, and she'll be talking to us about phishing. It's great that we have two of these lightning talks related to phishing. I've actually learned a lot about how those phishing scams come together, and maybe we'll learn a little more here. Yeah.
Good afternoon everyone. Um, so I'm a PhD student in computer science at the university, and today I will talk about my research from our research lab; the talk title is "Phish: Can You...". All of us probably know that in 2024, US President Donald Trump's campaign was hacked through a single phishing email, and it's not just politicians: it can happen to anyone. That's why in this study we focus on phishing emails. Phishing emails are fake emails that look real but are designed to steal passwords, credit cards, or other sensitive data. For instance, this email may look like a normal Gmail message, but it's a phishing email with many red flags. Even though it claims to be sent from Amazon
customer service, it's not actually sent from Amazon's official domain. It also contains many other red flags: for example, it creates urgency with words like "account suspended" or "locked", and it asks for a password and immediate action. Even with the advancement of technology and all this AI, we are still getting phished. Phishing continues to evade even automated detection systems, and 91% of all hacking starts with a phishing email. In 2023, 690,000 adults lost more than 10 billion dollars to phishing. So this reminds me of what cybersecurity experts say: people are the weakest link in information security, and hackers are exploiting this weakest link using psychological manipulation and social engineering techniques. They fool even
cautious, educated, and experienced users. That's why there is a growing research interest in human-centered approaches. Recent studies have looked at how people react to suspicious emails and how their choices affect overall safety and security. However, these studies treated phishing emails as one big general category, without exploring the effects of the different manipulative tactics that hackers use or their influence on phishing-detection ability. Moreover, the impact of visual cues and the role of explanatory feedback are under-explored. So in this study we focus on categorizing phishing emails into five psychological categories and analyzing their influence on phishing detection. We also evaluate the impact of visual cues and explanatory feedback on phishing-email detection.
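The red flags described above (a sender domain that doesn't match the claimed brand, urgency wording, requests for credentials) could be sketched as a toy heuristic like this. The keyword lists, function name, and example email are illustrative assumptions, not the study's materials or a real detector.

```python
# Hedged sketch of the red-flag heuristics the talk describes.
# A toy illustration only -- real phishing detection is far harder.
import re

URGENCY = ("account suspended", "locked", "immediate action", "verify now")

def red_flags(claimed_brand: str, sender: str, body: str) -> list:
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    # Red flag 1: claimed brand doesn't appear in the sending domain
    if claimed_brand.lower() not in domain:
        flags.append("sender domain does not match claimed brand")
    lowered = body.lower()
    # Red flag 2: urgency language ("account suspended", "locked", ...)
    if any(phrase in lowered for phrase in URGENCY):
        flags.append("urgency language")
    # Red flag 3: asks for credentials or payment details
    if re.search(r"password|credit card", lowered):
        flags.append("asks for credentials")
    return flags

print(red_flags("Amazon", "support@amaz0n-help.xyz",
                "Your account suspended. Send your password now."))
```

On the Amazon example from the slide, all three heuristics fire; on a legitimate message from the brand's real domain, none do.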
As for our methodology, we adopted a three-phase approach: email categorization and selection, followed by a user study, and finally analysis of user performance. These are some of the papers from our literature review that identify the different manipulation techniques hackers usually use. We reviewed this literature and mapped those principles into five categories, so we ended up with five phishing-email categories: first, authority and social compliance; second, distraction and overload; third, liking, similarity, and deception; fourth, social proof and herd mentality; and finally, triggers. After defining these categories, we selected emails from two phishing datasets and a curated email dataset we had built previously. We selected emails based on their
quality, believability, and how well they fit a category, and if an email fit more than one category, we assigned it to the best match. Following this, we selected 20 emails, two phishing emails and two legitimate emails for each category, that we later used in our user study. As for the user study, participants had to perform three tasks. In each task, they had to identify whether each of ten presented emails was phishing or not. In task one, they received the emails without any hints; this was done to see their natural ability to detect phishing. Then in task two, they received the same emails, but in a different order and with visual cues highlighting suspicious
elements. We did this to see if visual cues improve detection. Afterward, we gave them the right answers to those emails with a brief explanation, to create a learning environment. Finally, in task three, they saw a set of new emails without any cues, and we did this to find out the impact of the feedback. After the user study, we analyzed the results. As for the findings: we recruited 64 participants, but nine did not complete all three tasks, so they were excluded from the study, which left us with 55 complete responses. Among them, 32 were male and 23 were female, the ages ranged from 15 to 39 years, and the majority of the participants were students or from academia.
Interestingly, over 40% of participants did not know or were not sure whether they had encountered phishing before, while about 11% reported that they had actually fallen for it. Now let's look at our phishing-detection performance analysis. In spotting phishing, participants performed 10% better in task two, where we provided visual cues, compared to task one. This implies that visual cues help people recognize phishing. However, performance dropped by 9% for non-phishing email detection, which suggests that visual cues may introduce some bias into decisions. Next, to find the impact of feedback, we compared user performance in task
three with the previous tasks. In task three, phishing detection improved by about 29% over task one and about 6% over task two. So participants did better in task three, and if you remember, before task three we provided the correct answers and explanations. This shows that the explanatory feedback actually helped participants learn. To analyze the results further, we ran statistical tests. The Shapiro-Wilk test showed that the data were not normally distributed, so we went with the Friedman test, which showed that there are significant differences between the tasks, but it does not tell us which specific pairs differ. That's why we then conducted pairwise Mann-Whitney tests, which
identify which pairs are significantly different. From these tests we saw significant improvement from task one to task three and from task two to task three. This again implies that participants did statistically significantly better in task three, and it confirms that explanatory feedback helps participants. Later, we conducted an analysis to find the impact of the email categories; for easier presentation, I show just the phishing-detection accuracy. Here we see that, except for the authority-and-social-compliance category, phishing-detection accuracy improved gradually from task one to task three. Participants found the social-proof-and-herd-mentality emails easier to identify, while they struggled
the most with liking, similarity, and deception. This implies that instead of just warning users, we should train them using visual cues, feedback, and of course real-world examples, and system designers should consider integrating these psychological manipulation patterns when they design detection models. So, to summarize: we conducted a user study with 55 participants to find the influence of visual cues, feedback, and email categories on phishing-email detection. To do this, we mapped psychological manipulation strategies into five distinct types. We found that performance varied significantly across these categories: participants found the social-proof type easiest, while they struggled the most with liking and similarity. We also found that visual cues improve phishing recognition; however,
they are subject to bias. And finally, we found that explanatory feedback significantly enhances performance. These results are validated by significance analysis. So that's all from me.
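The test sequence just described (Shapiro-Wilk for normality, Friedman across the three repeated tasks, then pairwise follow-ups) might look like this with scipy. The score arrays are synthetic illustration data, not the study's results, and I use the Wilcoxon signed-rank test as the paired follow-up where the talk mentions a pairwise Mann-Whitney-style test.

```python
# Hedged sketch of the statistical pipeline: normality check, omnibus
# test over repeated measures, then pairwise follow-ups. Data is fake.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
task1 = rng.uniform(0.4, 0.7, size=55)            # per-participant accuracy, task 1
task2 = task1 + rng.uniform(0.0, 0.15, size=55)   # task 2: slightly better
task3 = task1 + rng.uniform(0.1, 0.3, size=55)    # task 3: clearly better

# 1. Shapiro-Wilk: are the per-task score distributions normal?
for name, t in (("task1", task1), ("task2", task2), ("task3", task3)):
    print(name, "Shapiro p =", stats.shapiro(t).pvalue)

# 2. Friedman: any difference among the three repeated measures?
print("Friedman p =", stats.friedmanchisquare(task1, task2, task3).pvalue)

# 3. Pairwise follow-ups: which specific pairs of tasks differ?
for a, b, label in ((task1, task3, "1 vs 3"), (task2, task3, "2 vs 3")):
    print(label, "Wilcoxon p =", stats.wilcoxon(a, b).pvalue)
```

The Friedman test only says *some* tasks differ; the pairwise tests afterward localize the difference, which mirrors the task-1-to-task-3 and task-2-to-task-3 improvements reported in the talk.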
>> We have time for one question. >> Yes. Um, can you share an example of an email using liking, deception, and familiarity that participants in your study had trouble with? [Music] >> Uh, actually, I have not put that in the slide. >> It is in the slide? Okay. >> It would take some time. >> That's fine. >> Yeah, I can show it to you later. >> Thank you. >> So if you have any questions or any comments about my work or about the work in our lab... [Applause]
All right. Thank you very much. Um, last talk: we have Era Garrett. Did I pronounce your name right? >> Thank you. She's coming up. Just a reminder: they put four 15-minute lightning talks together, not realizing that doesn't leave us 10 minutes for a break. So, as soon as we're done here, we're going to move over and probably start just a few minutes late on the tracks. For those planning to attend the tracks, there will be a small pause while we get everything set up here.
Perfect. [Music]
Okay. So good morning everyone. I'm going to be talking about determining CPU architecture based on how it does math. So, first question: who am I? That's me. I'm currently a master's student.
But I did graduate; I got my degree in computer science. In fact, I went to this campus way back, so it's a lot of fun to be back. That's my technical education.
My first love was judo, and recently [inaudible]. Not quite the same, but my sensei says it has to be good enough. I'm also a jack of all trades and a dungeon master for D&D. And in case anybody loves scanning QR codes from strange people at a security conference, that's my LinkedIn, so if anybody wants to look at it, feel free. So, first off: CPU fingerprinting. What is it, and why do we care?
The point of this research is to figure out what kind of CPU is actually running a system, for when you want to know what it is and you can't exactly ask someone.
Let me introduce IEEE 754: it's the international standard for floating-point operations. It says how accurate the results of basic operations, like addition and multiplication, must be. However, it doesn't specify everything: how vendors actually do the hard part is still up to them, which means they are able to use different implementations, and we can use those differences to
figure out what hardware we're on. That brings us to transcendental functions. What is a transcendental function? Think of a normal function versus one whose values
can't be written as finite decimals: transcendental functions can only be computed as best-guess approximations, typically as series,
which means that with fewer iterations we get some accuracy, and if we do 100 iterations, we have pretty good accuracy, but we're still not exactly there. The only way to get an exact value is an infinite number of terms, and that's
exactly what we take advantage of. So we have a handful of different functions and input values.
For example, the sine of pi: it's exactly zero. So you have all these different
series for it, and somewhere between six and 20 iterations we run into a wall: we can't get any more accurate than that, and that's exactly where the hardware differences become more visible, because we know that the correct answer is zero, and the computed answers only get close, down around 10^-15.
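A minimal sketch of the idea, under the assumption that we use the Maclaurin series for sin evaluated at x = pi (where the exact answer is 0): the partial sums stop improving once truncation error falls below double-precision rounding error, and the leftover low-order bits can be hashed into a fingerprint. The function choices and iteration counts here are illustrative assumptions, not the speaker's actual script.

```python
# Hedged sketch: series truncation hits a "wall" near machine epsilon,
# and the residual bit patterns can be hashed into a fingerprint.
import hashlib
import math
import struct

def sin_series(x: float, terms: int) -> float:
    """Partial sum of the Maclaurin series for sin(x)."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # next odd-power term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

# Residual |sin_series(pi, n) - 0| shrinks with n, then stalls around
# 1e-15 instead of reaching exactly zero -- the wall from the talk.
for n in (3, 6, 10, 20):
    print(n, abs(sin_series(math.pi, n)))

def fingerprint() -> str:
    # Hash the exact bit patterns of a few computed values; machines
    # whose libm/FPU differ only in the last few ulps hash differently.
    h = hashlib.sha256()
    for val in (sin_series(math.pi, 20), math.tan(0.5), math.exp(1.0)):
        h.update(struct.pack("<d", val))
    return h.hexdigest()

print(fingerprint())
```

`struct.pack("<d", ...)` captures every bit of the double, so even a one-ulp discrepancy between two implementations produces a completely different hash, which is what makes tiny last-digit differences usable as an identifier.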
That's not very much, but it's not zero, and those small finite values, the differences between roughly 10^-15 and zero, are what we can compare. So I ran a fun script I put together myself; I'm pretty proud of it, and if you want to look at the repository, everything
is there, so go ahead, I'd appreciate it. So what does the script do? It generates a unique ID for each system, and it computes a hash value for every script run, just to make sure nothing was tampered with. It has automatic detection and is fully automated: it runs these six functions for up to 10,000 iterations, until there is no more accuracy left to gain.
So what kind of results did I get from this? I tested a couple of computers; one of them is right up there, nothing fancy. Each of these graphs shows something important. Some of them aren't very interesting, but that's because there's no difference, no changes between the results on those processors. So let's
look at a couple of the others. After enough
iterations the series starts to lose accuracy, but for the first iterations you can see that there are discrepancies between these processors. Let's look at two of these:
if we look at the first 18
digits, there's a difference. It's not just "close enough": that's a legitimate value difference between these two processors, down around the twelfth decimal place.
The differences are small, it's only a few digits,
but from this data it shows that using just floating-point operations, just doing math, I can distinguish processors, and I think that's pretty cool. So, in conclusion...
>> Questions? >> Yes. [inaudible question] >> [inaudible answer] >> Any other questions? Oh my gosh. >> Yes. [inaudible] >> Thank you so much.
Okay, so I got the approval from Sale, aka Tiffany: the talk tracks will begin 10 minutes late, so 10 minutes from now. So no need to rush over there unless you're the AV guy like me. Um, and we'll see you over there. Thank you very much for the lightning talks.
>> Oh, she's running off. >> No, she's... >> Oh my goodness.