
Thank you very much. Good morning, everybody. This is a talk about AI. You've probably seen that and gone, "Oh my god, I don't want to listen to another talk about AI, I'm so bored of AI." Don't worry, so am I. I had the pleasure of reviewing for DEF CON's AppSec Village this week, and there were a lot of AI talks, but I'm hoping this will be a slightly different AI-based talk, and hopefully kind of interesting. It's called "Bad Vibes, But Maybe Good Job Security, But Probably Not Good API Security". So, to introduce myself: my name is Katie. I am an occasional hacker, a YouTuber, a lecturer, a principal security researcher. I do a lot of speaking stuff.
I'm very, very busy for 90% of the year. So here is me sitting lovely in front of a computer, looking very hackery. But notably, I have a PhD, and you know what my PhD is in? AI. And you know when it was? Before LLMs. So I'm like an OG AI person. So while my infosec training has been, we'll say, chaotic, I actually do have a formal education in AI. But I know what you're probably thinking: we've had enough of AI experts. So let's talk about Gartner. Anybody who works for a vendor will be very familiar with what this is: it's a hype cycle, and it basically describes where we are on the hype. So to look at this, this part here is the thing taking off. At the top, that's probably where we're at now with a lot of AI. Down here is when we realize it's terrible. And then up here is where it actually becomes useful. Gartner really likes these hype cycles, and they put a lot of things on them. Here's one. You will note that API security down here is, like, way down there. It's fine. I work for an API security company. I'm sure that's not going to be a problem anytime soon. And with API security, we're kind of stuck in this cycle of "it's so over,"
"we're so back". The mobile devices we all have and the mobile app revolution really took API security to "we're so back". Then with recessions and the pandemic it was "it's so over". GraphQL, that was "we're so back". And now we've got AI, and we're so over. We are very much in the trenches here, and hopefully we'll get out eventually. So, it's so over, right? If you look at what a lot of AI experts say, they kind of agree that it's all over. A lot of the "AI is taking our jobs" talk has, weirdly enough, kind of slowed down when it comes to artists and come up massively when it comes to software engineers. The irony of coding yourself out of your own job is not lost on anyone, I don't think. But if you look at what these people are saying, they're saying 25% of the new code at Google is produced by AI. They're saying that 90% of all the code produced is going to be AI. The potentially more sensible people are saying 20 to 30%. That's still a lot of jobs. And if you have a look, literally just a few months ago Microsoft laid off a bunch of people. And
this was somebody who was a director of AI who lost their job. So we may have girlbossed too close to the sun here, folks. And if you look at things like this, this is layoffs.fyi, you kind of end up asking one question, which is: can we as security people survive the coming AI apocalypse? And I don't think that's a bad question to ask. I say this as, suddenly, my mom's an artist. So obviously for her this has been very, very salient, having AI kind of make her industry dead. And there is a certain amount of pushback against this within the art community. A lot of people say, well, AI can't produce art, it has to have intention. Does code have to have intention? Probably not. Code kind of doesn't need to be written by a human. Oh. Oh no. We have coded ourselves out of a job. So here is the apocalypse: the four horsemen of the AI apocalypse. The first horseman: vibe coding. The second horseman: agentic AI. The third horseman: model context protocol, or MCP. And the final one: AI tools of some description, as in the "look, you bought a thing and we put AI in it, so we're upcharging you 50% on your subscription fee" kind. So let's talk about death. I gave death to vibe
coding, because it's kind of the death of the idea of a professional software engineer. Previously, when developers would write code, they'd think of it, they'd write maybe some requirements, and then they'd kind of just write the code. That's not what happens now. Now you go to Gemini and you go, "Hey Gemini, write me some requirements please for this." So this is an example that I produced. Here it's like, I want to make an API to teach people about API security, and I want it to be intentionally vulnerable. And it came back with a full API specification. Gemini can produce loads of these documents very, very quickly, and it's fairly good, like your typical API specification, a set of API routes, et cetera. And then you throw it into Cursor or Windsurf, you give it the MD file that Gemini has given you, and it just creates it. Here's an app that I vibe coded. You can't download this yet, but you will be able to download it soon. So it's a capybara app, because "capybara" kind of has the word API in it. It's very, very smart. You can see I've created this app that allows you to kind of explore abandoned capybaras. There's an idea of rescues. There's a login system. I didn't write any of this, by the way. Cursor wrote all
of it. And if you haven't had a go at vibe coding yourself, I really recommend it, because it is scary how good it is. The problem is, this is what happens next, when you do finally deploy your vibe-coded application. This guy's called Leo. He had a time on Twitter. He built an application in Cursor and he decided to start selling it. You know, the good old "build something with AI, throw it online, get paid" cycle. The problem is, he didn't know anything about programming or security. So, for example, he was seeing maxed-out usage on the API keys that he'd been paying a third party for, people bypassing the subscription fee for his app, people messing with his database. And he fully admits on Twitter: I am not technical, I don't know what's happening. A lot of people were then telling him on Twitter what was wrong, and he was getting quite angry at them as well. He was like, why are you hacking my website, you criminals? The problem is that you can't just get Cursor to do this. You can't type into Cursor, "please find and fix all the security errors for me." It doesn't work. If you think about your regular errors, your regular kind of software errors that anyone who's programming will be intimately familiar with, those errors are really obvious,
right? They're big red letters, stuff doesn't work, you have some feedback around them. These functional errors are obvious. It's super obvious. You can just copy and paste a stack trace into Cursor and be like, "please fix this bug," and it will. The problem is, functional errors may be easy to fix; security errors are not. Security errors you can't see. If you don't know about security, you cannot possibly predict that these errors are even there in the first place. Here's another example. This person's 53. And I have to say, it's very inspiring to see so many folks who are picking up programming because of AI and being able to build something. I think a lot of people, anyone in an organization, have ideas for apps that might make their job easier, or maybe they're a specialist in a certain field, and they're like, "Oh my god, I've never coded before, and I've built something. I've created something. I've been able to see something come to life." And as somebody who knits, crochets, does electronics, and 3D prints, I know it's very, very satisfying to see something you've created. So I get people being so excited about it. The problem is that if you don't know there are security issues, you can't just get Cursor to fix them. And fundamentally, you're just not
going to know those issues are even there in the first place. You don't know the right way of doing things. And you might be thinking, okay, this is very sad for the wannabe AI entrepreneurs; it doesn't look good for their upcoming podcast where they have to talk about how it all failed. But actually, this could be anyone in your organization. So, let's say you work in marketing. I work in a marketing department, and one of the first things I did was build an application that would let me search all of our marketing collateral, based on video transcripts, PDFs, websites, because I was getting really bored of sales asking, "Do we have anything on GDPR?" Yes, of course we have stuff on GDPR. Of course we do. Here are the links to the right collateral. I made that, but I used to be a software engineer. I knew what the risk of that was: okay, this is going to be an internal-only tool. It is going to show some things like transcripts; they're mostly fine, but some of them may not be public. I understand the risk I'm taking. There is no reason why, say, somebody who works in support, who might have the same problem of people constantly asking them for stuff, might not create the same application and throw in a few internal-only technical documents. Oh no. What you've got there is new shadow IT forming that you didn't even know about. These are apps that people are creating from their phones. You've got no clue. Sensitive data could be leaking constantly, like a tap. Next up: famine, agentic AI. This is based on the dead internet theory, the idea that there are no more humans left; we're in a famine of humans. So what are AI agents? Agentic AI is very quickly becoming the way we do AI. If you listen to any of the AI people talk, they're very excited about AI agents. Everyone's building AI agents. A bunch of
security vendors have now built AI agents as well. There are a lot of AI agents. The idea is that these are a way of building software, more of a methodology than a sheer technology. Instead of having something like ChatGPT, where you just ask it questions about the universe, code, your social life, and it also pretends to be your girlfriend... that doesn't really work very well in some situations, because the more it leans towards social skills, the less it leans towards code, and the more it leans towards code, the less it leans towards social skills. We've all seen Stack Overflow, right? So over time it kind of gets worse in every direction; it's really hard to keep balancing that. So instead, you use agents. Agents are small, specialized AIs. So when you speak to ChatGPT, what it's going to do is orchestrate a bunch of AI bots. Maybe you're speaking to it and saying, "Hey, I want to book flights to New York from Manchester next week, and I want to go in the morning." There's maybe a flight-hunting agent which goes and looks at all the flight times. There's maybe a booking agent that will actually use the airline's APIs to book the flights, or maybe it's built on top of browser automation, so for an airline that doesn't have an API it pretends to be a person and does it itself. And maybe there's a suggestion agent: oh, you're going to New York, here are some things you can do, I've hooked into Google Maps and brought you the best restaurants. So instead of having just one AI, you've now got this kind of explosion of AIs. Unfortunately, everyone seems to have forgotten that these AI agents are really just APIs. It's fundamentally just APIs. So, we're so back, right? API security is back, people. You look under
AI, it's APIs. And this is becoming a really big problem in the bot community. Is anyone here a sneakerhead? Is anyone really into their sneakers? I know Glenn is not in here. If you've not seen it, people are obsessed with shoes. There are these limited-edition shoes that go on sale, and people try to scalp them, right? They'll go and buy all of the shoes and then resell them at a ridiculous markup. That's all done with bots; they have specialist bots built to buy sneakers. However, in a universe where you've got agentic AI and you're using one of these orchestration things, where you have a sneaker-buying bot, or maybe an Amazon bot, or a bot that buys things for you ("buy me a green pair of sneakers"), suddenly you've deployed a bunch of bots into a world where bots have always been defended against. It used to be that all bots were bad. Now bots could be bad, could be good. It creates a really interesting problem for a lot of security teams, because it's now hard to know: hey, is that actually malicious, or is it just AI agents? So, we're on to war now: model context protocol. Put your hand up if you've heard of MCP. Put your hand up if you'd heard of MCP before the start of June. Okay, a few people. It's definitely
becoming a topic that loads of people are talking about. You would not believe how many MCP talks we got at the AppSec Village this year. But these are not APIs. If you speak to people, they're like, "These are not APIs. These are a new thing. I'm the expert in it." There's a guy on LinkedIn who works at a company called Speakeasy, and I don't know what he's paying LinkedIn for his ads, but every single time I open LinkedIn, it's him talking about how MCPs are not APIs, and don't call them that, and how they're actually something new and special. They're really not. They're APIs. So, how does MCP work? MCP has three different parts to it. You have the host; that's your general agent, the one you actually speak to. You've then got a server; this is what is exposed. You've then got the individual tools. So, say the file system in Windows has an MCP server: it has a bunch of tools for reading and writing data to different files, or creating files. All of these are called separate tools. If you want to think about it one way, they're kind of your individual terminal commands: every unique command gets a unique tool. And then you've got data sources, like APIs. The way MCP is pitched is like this: ah, it's a USB-C cable, this kind of dongle thing that can connect a bunch of USB and other random cables together. It's not. It's more like APIs, but they do not like being called APIs. This is what they look like. They talk with JSON, classic API. They can do it over standard input/output, so okay, it's not HTTP, but a lot of it is over HTTP, also classic API. And they have requests. I'm not going to say it's an API, but it sure as hell looks like one.
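To make that concrete, here's a minimal sketch of what calling one MCP tool looks like on the wire. MCP messages are JSON-RPC 2.0, and `tools/call` with a tool name and arguments is how a host invokes a tool; the `read_file` tool and its `path` argument here are made up for illustration, standing in for something like a filesystem server's read tool.

```python
import json

# One MCP tool invocation as a JSON-RPC 2.0 message. The tool name and
# arguments below are hypothetical examples, not from a real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",                 # one tool == one command
        "arguments": {"path": "notes.txt"},  # the tool's parameters
    },
}

# Over stdio the client writes this as a line of JSON to the server's
# standard input; over HTTP it goes in a POST body. Either way: a JSON
# request to a named endpoint with parameters.
wire_message = json.dumps(request)
print(wire_message)
```

Swap the transport and rename a few fields and you would struggle to tell it apart from any other JSON API call.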
Okay, so pestilence: AI tools. I picked pestilence and disease for this one particularly because, as somebody who calls myself an AI skeptic, I'm not a big AI user. I think it's crazy that people use AI every single day. I have no idea what you're using AI for if you're using it every single day; like, what are you actually doing? But nowadays AI is everywhere. You cannot avoid it. Everything has just had AI stuck in it, and now you've got subscription price increases because they have to pay for their API calls to GPT or Gemini or whatever. And what we've really seen from this is the explosion of AI slop. If anybody has elderly relatives on Facebook, you've probably seen quite a few of these pictures of Jesus. Honestly, AI should probably be less accessible, because people actually believe this stuff. As someone who knits as well: the number of genuine people who pick an AI pattern, and it looks very obviously AI, it does not look knit, and they're like, "Does anyone have a pattern for this?" No, it's AI, Janet. And then you have AI being added to random things. This is an IPS monitor I was looking at, a touchscreen for some IoT projects, and it has AI in it. Why? It's a monitor. What could you possibly be using AI for? And if you look at the stats, people are using ChatGPT constantly. Also worryingly, US employees are using ChatGPT to analyze data and information, the thing that hallucinates random facts. Oh no. And what happens is ChatGPT gets banned, because it randomly hallucinates facts, doesn't necessarily help, and leads you to false conclusions. It's a simple firewall rule. And then employees go on their phone, ask ChatGPT there, and just type out whatever ChatGPT has generated. So we're in a situation where this is a dinosaur, and so is this. This is AI, and so is this. Everything is AI. And if you look at the paid LLM models, people are buying AI. They're really buying AI. My god, they're using it constantly. And it's not possible now to get away from this, because most of these purchases of AI are not by individual people. Most of it is companies buying the APIs to use in their applications, to add AI to something you didn't really want anyway. So, it's all actually APIs.
It always has been. So, in conclusion: we're so back, right? API security is AI security. This is not stuff that is new to us. Developers writing bad code? Secure coding rules. AI existing in tools and potentially being used for something like prompt injection? That's just APIs. You look at MCP, it's literally just JSON being sent back and forth. It's just an API. Everything about AI is no longer AI; now it is just APIs. And in conclusion, we're so back. Thank you very much.