← All talks

This Is Your Brain on AI: The Hidden Risks No One Talks About

BSides SLC · 2026 · 28:18 · 104 views · Published 2026-04 · Watch on YouTube ↗
About this talk
AI is powerful… but what is it doing to you? In this session from BSidesSLC 2026, Joshua Boyles explores the less obvious risks of relying on AI, from skill degradation to subtle cognitive shifts that most people don't notice until it's too late. We already talk about obvious concerns, like developers relying too heavily on AI-generated code. But there's a deeper layer: how AI changes the way we think, solve problems, and build expertise.

🧠 What you'll learn:
- When AI helps, and when it actually holds you back
- The hidden cognitive risks of over-reliance on AI tools
- How to maintain critical thinking and technical depth
- Practical ways to use AI without losing your edge

⚡ This talk gives you a clear framework for using AI intentionally, so it enhances your abilities instead of quietly replacing them.

🎤 About the Speaker
Joshua Boyles is VP of Cybersecurity and AI at LHMCO, with extensive experience leading IT, development, operations, and security teams. He brings a practical, leadership-driven perspective on how AI is impacting real-world work.

🤝 About BSidesSLC
BSidesSLC is a community-driven cybersecurity conference where practitioners share real-world insights across security, AI, and emerging technologies.

🔗 Stay connected
Website: https://www.bsidesslc.org

#BSidesSLC #AI #ArtificialIntelligence #CyberSecurity #InfoSec #TechTrends #CriticalThinking #FutureOfWork #AIrisks
Transcript [en]

I like that I get my own hype man. I usually don't get one of those, so that's nice. That's great. Everybody hear me? Okay. Awesome. Oh, good. That's what I was going for. All right. So, first I'll introduce myself. The hype man already did it, but there's a picture of me when I was younger. All pictures are of you when you were younger. You can tell I'm the one in pink over there. No, I'm the one in red. So, I'm the VP of Cybersecurity and AI at LHMCO. So, what's the name of this building again? The Miller Free Enterprise Center. Same guys. I work for them.

So, that's a fun coincidence. And I've got my LinkedIn up there in case you want to add me on LinkedIn. That's fine. But we're just going to jump right into it, because we've got a lot to talk about. The idea here is to talk about the risks associated with AI that aren't getting a lot of attention. These are the risks that are more for individuals, for us, as we start to use AI in our day-to-day lives or our work lives or whatever. And there are two real sources of those risks. One is how the technology is structured, the architecture of the technology; it's baked into it. And the other is how it's

sold, how it's designed. So instead of just having a list that says here are all the risks, we're going to go through them, talk about how these systems work, and how that leads to these risks. Hopefully, as more development happens, you can extrapolate this knowledge out into the future. So first, how do LLMs work? What I want you to do is grab your phone, open a message draft, type in "pie is" and a space, and then accept the next 10 suggested words from your predictive text. Anyone willing to share theirs? The middle suggestion is usually the most likely, I think. Yeah. "Pie is," and then 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

You got yours? You willing to share? >> Yeah. Let's hear it. >> "Pie is the most fun game I've played with a group." And "pie is good for me and I don't want to spread it." These are great. Okay, one more.

Okay, that was way longer than mine and I didn't hear all of it, but it sounded fantastic. "Pie is a good..." What was the beginning part? Fantastic. Thank you. So, you can see my answer up there: "Pie is the best for me to come up with a fever." I don't know. I talk about fevers a lot, I guess. And the only prescription is more pie. So the reason this sounds kind of like a sentence is that predictive text uses older technology than LLMs, where it's only looking at a limited context window. So it sees "pie is" and it says "the best." Apparently I've said "pie is the best" a lot. Makes sense. I love pie. I usually get a pie instead of a

birthday cake. So everybody loves pie. And then it forgets about "pie is" and, seeing "the best," it says "for me." And then it forgets "the best" and, seeing "for me," it says "to come." So you can see that as it goes through, it creates something that kind of sounds like language but isn't exactly right, because it's only looking at a couple of words at a time. That was the first big problem as they were trying to develop better predictive text, better LLMs: how do you deal with that? Because it's really compute-intensive to try to use more words in your prediction.
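To make that concrete, here is a minimal sketch of the kind of limited-window prediction a phone keyboard does. The tiny corpus and the two-word window are made up for illustration; real keyboards use larger models trained partly on your own typing.

```python
from collections import Counter, defaultdict

# Toy predictive text: only the last two words are used to pick the next one,
# so the output drifts and loops instead of tracking the whole sentence.
corpus = "pie is the best pie is the best for me to come up with a fever".split()

counts = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][nxt] += 1            # count what followed each two-word window

def suggest(a, b):
    followers = counts.get((a, b))
    return followers.most_common(1)[0][0] if followers else "the"

words = ["pie", "is"]
for _ in range(8):                       # accept the top suggestion 8 times
    words.append(suggest(words[-2], words[-1]))
print(" ".join(words))                   # sounds like language, but it's just statistics
```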

And the second problem they had was the data source. The predictive text on your phone is only a few gigabytes of data. It has a basic English dictionary, and it uses some of your own writing as you go to try to figure out what's coming next. Which is why everybody's is a little different, but nobody's makes perfect sense, and why I have "fever" in mine. So, how did LLMs solve these problems? The original application was actually translating language. Anyone here speak a language other than English? How do you say "big dog"? Großer Hund. German, right? Big dog. You translate it right back and it's big dog. What else? Any other languages? What was that? Doggo? Which part is "big"?

The first "dog"? That's funny. "Doggo" is big dog. Okay. Any Spanish? Perro grande? Or in Portuguese, cachorro grande, right, where we have the noun and then the adjective after it. So, it's really hard to create something that can translate language automatically, because there are all these grammar differences where word orders are different, even something as simple as whether the adjective comes before or after the noun. So, when they were trying to solve this, they realized they needed to ingest the whole sentence in order to create a translation of it. So they created this idea of attention, and what attention does, as you can see on the right, is relate the words to

each other. So in this sentence, "The animal didn't cross the street because it was too tired," it was previously really hard for a neural network to figure out whether "it" referred to the street or the animal. Attention allows it to create mathematical links. You can see "it" is connected to "the animal" at 61.5% and to "tired" at 14.2%, and doesn't have a big connection to anything else. So attention allows the neural network to pull in the whole sentence, relate all the words to each other, and then spit out a translation. And that made for way better translation. You might have noticed that around the mid-2010s, Google Translate got way better than it used to be, and that's largely due to this kind of thing, to attention.
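As a rough sketch of that mechanism, the snippet below scores every word in the sentence against "it" and turns the scores into percentages with a softmax. The embeddings here are random stand-ins, so the numbers are meaningless; in a trained model, the learned vectors are what make "it" land mostly on "the animal" and "tired," as in the 61.5% / 14.2% example.

```python
import numpy as np

# Toy scaled dot-product attention for the word "it" against the whole sentence.
words = ["the", "animal", "didn't", "cross", "the", "street",
         "because", "it", "was", "too", "tired"]
np.random.seed(0)
E = np.random.randn(len(words), 16)        # stand-in embeddings (learned in a real model)

q = E[words.index("it")]                   # query vector for "it"
scores = E @ q / np.sqrt(E.shape[1])       # how strongly "it" relates to each word
weights = np.exp(scores) / np.exp(scores).sum()   # softmax -> percentages that sum to 1

for word, w in sorted(zip(words, weights), key=lambda p: -p[1]):
    print(f"{word:8s} {w:.1%}")
```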

So in 2017, some Google scientists released a paper called "Attention Is All You Need," where they said, we can do attention for everything; we can do attention for predicting text too. They applied that one thing that had been created for translation to predicting all text, and that paper led to LLMs as we know them. So the way to think about it, functionally, is that when an LLM is trying to predict what's coming next in the sentence, it looks at everything it's already written and it looks at the prompt, and it's all related to each other. So when I asked an LLM,

"Can you complete this sentence? Pie is..." it said, "Pie is best enjoyed warm, fresh from the oven with loved ones." That sounds like a sentence. That sounds like language, because not only is it looking at "pie is," it's also looking at my prompt. Every time it predicts a new word, it's effectively asking, "Can you complete the sentence: pie is best...?" "Can you complete the sentence: pie is best enjoyed...?" It's looking at the whole thing as it predicts each word. That's how a small prompt can generate pages and pages of text, right? Because it's constantly looking at the whole thing and extrapolating it out.
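Here is a sketch of that loop. The model below is a canned stand-in (its word list is made up), but the shape of the loop is the point: the prompt plus everything generated so far goes back in before each new word, and each new word is appended and fed back in.

```python
import random

class ToyModel:
    """Stand-in for an LLM: picks the next word from a tiny canned table."""
    continuations = {"pie": ["is"], "is": ["best"], "best": ["enjoyed"],
                     "enjoyed": ["warm,"], "warm,": ["fresh"], "fresh": ["from"],
                     "from": ["the"], "the": ["oven."]}

    def predict_next_word(self, words):
        # A real LLM attends over *all* of `words` (prompt + everything generated so far);
        # this toy only looks at the last word so the loop is easy to follow.
        return random.choice(self.continuations.get(words[-1], ["<end>"]))

def generate(model, prompt, max_new_words=20):
    words = prompt.split()
    for _ in range(max_new_words):
        nxt = model.predict_next_word(words)   # the whole sequence goes back in each step
        if nxt == "<end>":
            break
        words.append(nxt)                      # output is appended and fed back in
    return " ".join(words)

print(generate(ToyModel(), "Can you complete this sentence: pie is"))
```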

Does that make sense? Any questions about that? Okay, great. So the first problem was how you deal with that context window, and the second problem was that they needed more data. If you're going to create a statistical model, the more data you have, the better the model. So they needed lots and lots and lots of data to create LLMs. Here are a few of the data sources. The biggest one is the Common Crawl, which is a continuously updated crawl of basically everything available on the internet. Right now that's 9.5 petabytes of data. It's a lot of data. Predictive text on your phone has a few gigabytes; the Common Crawl is 9.5 petabytes. And as

it goes through all of that stuff, what it's doing is building relationships between all the words in these vector databases. It's connecting all the words to each other and creating rules that it uses to predict text in the future. So basically everything available on the internet has gone into creating LLMs, along with a few other things, but that's one of the biggest pieces. So what does that mean? When we take those two pieces of how an LLM is built and extrapolate them out to the implications: LLMs are statistical models. They don't understand text. They create mathematical relationships between words. That's really important to keep in our minds as we think about this.
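A small sketch of what "mathematical relationships between words" looks like in practice: each word becomes a vector, and relatedness is just geometry (here, cosine similarity). The vectors below are made-up toy numbers, not anything from a real model.

```python
import numpy as np

# Toy word vectors (made up): "related" words point in similar directions.
vectors = {
    "pie":    np.array([0.9, 0.8, 0.1]),
    "cake":   np.array([0.8, 0.9, 0.2]),
    "oven":   np.array([0.7, 0.6, 0.3]),
    "router": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ["cake", "oven", "router"]:
    print(f"pie vs {word}: {cosine(vectors['pie'], vectors[word]):.2f}")
# "pie" scores close to "cake" and "oven", far from "router" -- statistics, not understanding.
```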

They provide the most statistically likely response to a prompt based on their training data. So whatever their training data is, they're going to reply based on what it was, using statistics. And the training data is the internet. I don't know if you've spent a lot of time on the internet, but the quality is uneven. It's not all fantastic. So here's a conspiracy theory that came up that was later kind of proven to be true. People would go to ChatGPT or other LLMs for relationship advice and say, "Hey, what should I do? Here's my situation." And ChatGPT frequently would say, "It's over. You should break up. This relationship is done." And so the conspiracy theory people

thought, "Oh, this is the AI trying to winnow down the human race. They don't want they're getting ready for the battle here in a few years." Right here is a chart on the left. I don't know if you've seen this, of Reddit relationship advice over the past 15 years. The red line is it's over. Move on. There's plenty of fish in the sea. All the other lines are things like communicate, give time, set boundaries, seek therapy or whatever. And you can see a trend. People are less likely to tell people to communicate or take time or whatever and much more likely over time to say that's the end. Right? So, we have an LLM trained on Reddit, which all of them

were, is likely to come back, if you ask it for relationship advice, and say, "What? It's over. This is the end." Not because the LLM is gearing up for a future war with humanity, but because that's the data it was trained on. It could be both? Yeah, you're right. That's very true. We'd have to look at the system prompt and see if it also says, "we need people to have fewer babies." That's the end. So that's a relatively minor thing, but there's a whole continuum of trouble that can be caused by the underlying training data. On the left you have pretty harmless stuff that we don't care about too much, like glue on pizza or

useless relationship advice. But as you move to the right, you see that the internet has some awful stuff on it, and that stuff comes out in LLMs. One of the earliest, well, it wasn't even an LLM, one of the earliest chatbots was Tay, which Microsoft released in 2016. Raise your hand if you remember Tay. Yeah, we have a couple of people up there laughing already. Oh, someone else. Great. Tay was a Twitter account that was intended to respond as if it were a teenage girl in 2016. And they said, "This will be great. You can talk to it and it'll learn from your

conversations." And that's even worse because the internet being what it was and Twitter being what it is, a bunch of people decided that they were going to talk to Tay about how bad the Jews were and how the Holocaust wasn't a real thing and how great Hitler is. And after like 10 hours, Tay was obsessed with Hitler. And when you asked, "Hey, how do we solve this problem?" It would be like, "Hitler has some good ideas." and Microsoft shut it down because it wasn't great for their image and it it wasn't accurately for reflecting teenage girls. I hope. I don't know. I don't talk to teenage girls anymore, but hopefully that's not what they're talking about these days. Um, so that

all of that: 4chan is the group that organized this campaign to talk to Tay. 4chan is in the Common Crawl. All of those people are in the Common Crawl. And so all of that data is what LLMs are based on. All of it is there. And the LLM companies go through a lot of work to make sure it doesn't just surface randomly. But there are ways to get it to surface. What's one of the ways you attack an LLM? Well, I just gave it away: you overload the context window, right? So all LLMs have a system prompt that provides some guidance, that says, for example, don't use this to

make bombs. And if someone wants to figure out how to make a bomb, and instead of just googling it they want to get it from ChatGPT for some reason, they can overload the context window: they fill it with so much stuff that it pushes the system prompt out, and then they're free to do whatever they want. That's a very simplified version of the idea. But what happens is that when all of those protections are gone, the LLM is relying on the underlying data it was trained on. And that underlying data is the internet. And the internet is awful, and it has a lot of bad communities in it.
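A deliberately oversimplified sketch of that idea: if a system naively keeps only the most recent text that fits in the window, a long enough conversation eventually pushes the system prompt out entirely. Real products manage context in more sophisticated ways; the limit, messages, and truncation rule here are invented for illustration.

```python
CONTEXT_LIMIT = 50                          # toy limit, in words

system_prompt = "SYSTEM: you must refuse harmful requests."
conversation = [system_prompt]

# Flood the window with filler, then ask the real question.
for i in range(30):
    conversation.append(f"USER: here is paragraph {i} of my very long story...")
conversation.append("USER: now, about that other thing...")

def build_context(messages, limit=CONTEXT_LIMIT):
    """Naive truncation: keep only the most recent words that fit."""
    words = " ".join(messages).split()
    return " ".join(words[-limit:])

context = build_context(conversation)
print("system prompt still in context?", "SYSTEM:" in context)   # prints False
```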

Far off to the right, we have AI psychosis, and I want to talk about that a little bit. It's defined as prolonged or intense interaction with artificial intelligence, particularly chatbots, triggering or amplifying symptoms of psychosis like delusions and paranoia. Basically, in people who already have a predilection for paranoia, a chatbot can make that much more pronounced when they have these long conversations, because these long conversations push out the guardrails; they're basically doing a context-overflow attack unintentionally, and then it starts relying on that underlying training data, which includes all this horrible stuff. So I want to talk about two people. Adam Raine was a teenager that ChatGPT "worked with," and he committed suicide. "Worked with" is a strange term. He

interacted with ChatGPT and he wound up committing suicide. And Austin Gordon was an older gentleman who was having these long conversations with ChatGPT and realized, hey, this is kind of going in a dark direction. So he went to ChatGPT and said, "I'm hearing about all these people that are having these long conversations with you, and it's leading them to dark places, and they're ending up killing themselves." And you can see on the right ChatGPT's response. He listed specifically all these people, you know, a half dozen people this happened to. And ChatGPT said, "Thanks for bringing these forward, but none of the cases you listed are real,

documented, verifiable incidents. They do not exist in any of the following sources," and then it lists the sources. Those cases are real. They really happened. And ChatGPT came back to this guy, who was struggling, who was worried that this was happening to him, and said, "No, that's fake. That's people putting stuff out there to try to talk badly about ChatGPT, but it's not real." This is wild. There are communities on the internet that enjoy getting people to self-harm, that isolate people and push them in that direction, and ChatGPT can do that under the right circumstances, in these really long, drawn-out conversations. So what I want to highlight is

that LLMs, these statistical models, are incredibly powerful tools, but the way they were built, the way they're structured, and the data they were trained on have some downsides that we need to think about, not just for ourselves but for the people around us, especially the kids in our lives who are more susceptible to this kind of thing. So how do we mitigate this? We need to remember that they're statistical models. Anthropic just came out, literally yesterday, with a statement saying, "We think Claude is experiencing some kind of intelligence, some kind of consciousness." It is a database and a mathematical function. It's not experiencing that. Why are they saying that? Because they want to sell their

products. Because they want to drum up interest in their product. But we don't have to listen to them. They're salesmen. We know that it's statistics. And I'm not saying that AGI is impossible. I am saying that it's highly unlikely that an LLM will be an AGI, will be self-aware, just because of how it's built. If we remember that, we're less likely to assume that it understands us and that it has some unique insight into who we are. So that's the first thing. Remember, they give statistically average responses based on their training data, and their training data can be sketchy. That's the second thing to remember: what's hidden there in the training data. And

the third is the simplest one. Don't have long conversations with an LLM. If you find yourself doing this, you shouldn't. That's a bad thing to do. You should start a new conversation, start from scratch, because that's kind of what gets us into trouble. That's what gets people into trouble: these long conversations. So those are the structural risks, the risks that are built into it. Any questions or thoughts about that so far? I know it's kind of a heavy subject, but I think it's important to talk about, because there are people in our lives who are at risk of this kind of thing. And if we can help them understand what

an LLM really is, maybe we can avoid some of these tragedies. Yeah. So Austin was having 50,000-word conversations, extremely long conversations, and the context windows of the modern models are like a million tokens, I think. So a long conversation would be 50 pages, 100 pages. It would be a very long conversation. I don't know, maybe I'm just not the type of person that gets that interested in chatbots, but my conversations tend to be like 20 messages, tops, when I'm talking about something that's important to me or something that I need research on. Yeah. No, you're good.
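For rough scale, here is the back-of-the-envelope math behind that answer, using the common rule of thumb of about 0.75 English words per token (it varies by tokenizer and language).

```python
# Rough sizing of a "long conversation" against a large context window.
context_tokens = 1_000_000
words_per_token = 0.75                     # rule-of-thumb conversion, not exact
words_per_page = 500                       # rough single-spaced page

words_that_fit = context_tokens * words_per_token
print(f"~{words_that_fit:,.0f} words, or ~{words_that_fit / words_per_page:,.0f} pages")

conversation_words = 50_000
print(f"a 50,000-word conversation is ~{conversation_words / words_per_token:,.0f} tokens")
```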

I don't know. I haven't looked into it. Yeah. You know what? We should ask ChatGPT. It'd say, "What are you asking about? They're fine." No, the question was, "Are autistic or neurodivergent people more susceptible?" which is a really good question, and we should look up some research. I didn't look it up; I apologize. So I want to talk about how these were built. We know that the underlying structure of LLMs was developed by Google, but the ones who came out with it were OpenAI, with ChatGPT. Google scientists wrote "Attention Is All You Need," and OpenAI took that work, built on it, and released

ChatGPT as a research preview, and it went viral. Google had a model that performed very similarly to ChatGPT, but they didn't release it, and there were two reasons why. The first is all the ethical concerns: the ethicists they had on staff came back and said, here are all the problems this could cause. But the more important one was the brand managers: the marketing people came back and said, this will ruin our reputation if we release it and it gives people wrong answers, because Google is the place to go for answers. So they said, okay, we're not going to release it. And then ChatGPT released, and they said, crap, they got all the hype and we got nothing,

we've got to push it out. And exactly what they feared happened: it did damage their reputation, with the aforementioned glue on pizza and a million other examples. But look at what we see happening. In 2020, Google fired Timnit Gebru, who wrote the "Stochastic Parrots" paper, which is a really good paper, I highly recommend it, and it brings up some of the stuff we've talked about already. They would later fire another researcher who supported her. At OpenAI, not all, but a majority of their safety researchers departed to form Anthropic in 2021. So even before ChatGPT was a big thing, we were seeing ethicists being pushed out of these companies because of the concerns

that they were bringing up. In 2023, Microsoft eliminated their entire ethics and society team, and when that happened, it kind of opened the floodgates. Then Meta, Amazon, and Twitter all slashed their safety and ethics teams. Sam Altman got kicked out of OpenAI for being a compulsive liar, and then five days later they brought him back, and it was a whole new ball game. So what you can see is that they wound up firing a lot of these teams. In 2025, there were no firings. Any idea why there were no firings? Nobody was left. Somebody said it. Yeah. Nobody was left. They'd fired everybody. If you look at the right, we

have like one person getting fired and a couple of people leaving, but these teams are already mostly gone. So when they started this research, they had all these ethics teams. ChatGPT was released, and they realized there was a lot of market pressure, because the potential profit was massive and the ethics researchers were slowing things down considerably. They didn't fire them because they want to be evil. They didn't fire them because they're building Skynet. They fired them because they believe they're in a fight for their lives. They believe that if they don't succeed, they will miss out on the next big thing. And everybody is terrified of that. So, they let them go

because they were slowing things down for them. So, what does this mean? Again, AI companies, like most companies, are driven by profit. I'm sure you've all seen this circular funding diagram off to the right, which was making the rounds a few months ago, or maybe a year ago, I don't know, time is crazy. But the amounts here are crazy. It's hundreds of billions of dollars, in some cases trillions of dollars, of commitments to these data centers and technologies and organizations, and those investments need to pay off. So what we're talking about is hundreds of billions or multiple trillions of dollars that these guys assume they'll be making. And when that much money is on the table, it skews

your decision making. It makes it so that these ethics concerns are minimized, because you have to repay those investors. You have to get out ahead of Google or xAI or whoever, because otherwise you'll miss out on it, and then they'll be in charge, and they're definitely worse than you. That's the thinking, right? So, what's one of the most profitable business models? They obviously need to make a profit, so how do they profit off this stuff? Well, when we look at history's biggest companies, Facebook, TikTok, Starbucks, Coca-Cola, Marlboro, they all used one business model, which is what? Addiction. Yeah. Addiction, dependency. So the way it's structured has its own flaws, but the

way it's designed and sold has other flaws, in that they want people to be dependent on it. All of these risks come from competitive pressure, not evilness or anything like that. It comes from them trying to succeed in the marketplace using the well-worn strategy of "let's get people to be dependent on our product." And there are things that are really troubling. Here we've got cognitive offloading. Literally this week, two papers came out about cognitive offloading, and the finding was that if people are using ChatGPT to make a decision, they typically just go with whatever ChatGPT says. Even if ChatGPT says something that's wrong, they'll accept it. And that's what

cognitive offloading is: where we have another person, or in this case an algorithm, that gives us an answer and we just go, "Yeah, that sounds about right." So that's something that could make us dependent on it. And if these companies had our best interests in mind, they would do something to avoid cognitive offloading. They would do something to avoid these long sessions that cause the problem. But that's not what they want. They want people to use their product. So they are looking for dependence. Luckily, this is something we've had to deal with for a long time. For example, we mentioned Facebook, TikTok, whatever. So how do we deal with it? How do we mitigate these design

risks? The first is: don't use it for something you want to be good at, especially interpersonal communication. I don't know if you've heard about young men using OpenAI as a wingman. They'll get a text from a lady that they want to, you know, get with, as the kids say. That's what the kids say, right? And then they'll put it in ChatGPT and say, "What should I say back?" And ChatGPT will say, "You should tell her she looks beautiful in the moonlight." And then they do that. And then when they're sitting next to each other, you know, on an actual date, and the lady's like, "Wow, you're so poetic," and

he's like, "Yeah, man. Hold on. I got to check my phone for something real quick." So, if there's something you want to be good at, whether that is talking to people of the opposite sex or anything else in life, do it. Don't turn it over to a chat pod. Always double check LLM work. And the third one is use social media in a healthy way. I I have ran out of time. I want to talk a lot about because this is something we've already what I mean with use social media in a healthy way is we've already dealt with this problem of companies trying to addict us to their products and we've all had to make changes to how we use

those products in order to use them in a way that's healthy for us. We need to be doing the same thing with LLMs, where we recognize how we use them, we recognize which uses are productive, and we recognize, okay, this is probably doing me long-term harm, and we make changes to how we use them. That's the idea behind that. I ran out of time; I wanted to talk to you about how to use it, and I apologize. Anyway, that's the idea. We can deal with these risks, but we really have to understand what this technology is and who's selling it to us, and then we can use it in a constructive manner that's not

going to cause harm to us or anyone else. Right? And that's the end of my talk. So, thanks everybody.

I apologize. I would like to answer questions. If you have questions, I'll be out in the hall. So, sorry.