
AI Security 101: Securing Cloud-Native AI Systems & Building Modern SOCs

BSidesSF · 43:55 · 100 views · Published 2025-06 · Watch on YouTube ↗
Style: Panel
About this talk
Cloud Security Podcast - LIVE! With Ashish Rajan, Jackie Bow, and Kane Narraway. AI Security 101: Securing Cloud-Native AI Systems & Building Modern SOCs. AI is reshaping security faster than cloud ever did. This panel explores real-world threat models, building AI-first SOC teams, and the gaps legacy tools can't fill. Learn what it takes to secure, monitor, and respond to threats in AI systems directly from those doing it. https://bsidessf2025.sched.com/event/02cf37eda322cf52ea17e2f204e70247
Transcript [en]

All right, we're live, folks. How is everybody doing today here at BSides SF? Well, thank you for that. I'm Dana Torus, Vice President of Product Marketing with ArmorCode, and today I get the benefit of being the MC here in Theater 13. For those of you here in the room with us and those of you watching from home, today is real special. We actually have a live broadcast going out. I call this a three-way: it's live here, it's live for the BSides folks, and it's going out live on the podcast, www.cloudsecuritypodcast.tv. Folks, this is a podcast that has now been running for six years, tons of episodes, with a following of 150,000 folks who

I'm guessing give a darn about cybersecurity, right? Kind of. Yes. There we go. So I'd like you to give a warm welcome to Ashish Rajan and our guest folks on the panel today. Thank you. Take it away. Thank you for the live audience as well. So, since this is a podcast recording, there are obviously a few things I'll call out up front. If you do have questions, we have a Slido behind the QR code; feel free to use that to ask your questions, and we'll have time towards the end. Before I introduce my esteemed guests over here, I need

to share a story from this morning. I did what every person does to maximize the opportunity for a live broadcast panel: I asked AI, hey, how do I engage this audience? You know, I was doing a live podcast panel, people have just had lunch, they'd probably be feeling a bit low. I don't know if anyone supports the SF team that unfortunately lost yesterday, but it said not to mention that. So I said, hey, I run Cloud Security Podcast, we're a weekly podcast for people primarily interested in security, security leaders and CISOs. How do I make a podcast engaging for BSides? And their theme, by the way, is a dragon

oh, and the dragon is AI. So I gave that as an input, and it came back and said, hey, you should start with a cheer-up exercise and get people to just shout out a word. I want to give the word to you, and if you can shout it, I just want to check whether it's a hallucination or whether it actually works. The people who'd hear it would be the podcast audience, the live audience in the background, and hopefully people on their phones in the overflow room; you get to participate as well. So what it said was, because the theme

is "slay the dragon," very medieval, it's a great opportunity for you to dive into medieval times and ask people to shout "AI." So let me get my video on for it so people can see what it comes up as. I'm going to take a picture as well so my parents believe I actually did this. So, in three, two, one, if everyone can shout "AI," that'll be great for the podcast audience and for my video as well, if you don't mind. I'll just do fingers, that makes it easier. Three, two, one. Woo. Awesome. That was the

energy. So that worked. The other thing, and people who've been using GenAI probably know this, is that once you start, there's always one more level: how do I make it even more exciting, so people really get into it? And it said, stop reading this script and just get back to the episode. So I'm going to get to the episode. Everyone, welcome to Cloud Security Podcast. As I mentioned, we're a weekly podcast, and today is about slaying dragons. I've got two esteemed guests with me. Jackie, if you don't mind taking a few seconds to introduce yourself. Sure, happy to be here. This is like my

favorite conference of the year. It's not my hometown, but I've lived here long enough that it feels like home. So yeah, I'm Jackie Bow. I've been working in security for about 15 years now, mostly in detection and response, but I've bounced around. Currently I'm the technical lead of the threat detection engineering platform at Anthropic. Awesome. And Kane? Yeah, hello everyone. I lead the enterprise security team at Canva, so a lot of that is dealing with zero trust, internal endpoints, that kind of stuff. And a big focus for me over the last year or two has been on securing AI

tooling: LLMs, MCP, all of that good stuff. Awesome. And as you can tell, there's a theme already forming that I wanted to tell you about. We have a bit of a debate: one side is about leveraging AI. For people listening to the audio, imagine a video; there's a slide up there, let's just pretend, with a dragon above my head with an AI label on it. For those who've watched the How to Train Your Dragon movies, we've got Jackie on the "how to train your dragon" side, working beside the AI dragon, defending against this big

boss dragon AI. And we've got the knight in shining armor, Kane, on the other side, trying to defend with his shield against the dragon's flames. So if you're hearing the audio, definitely check out the video as well. Okay, I feel like I have more of a wizard vibe than a knight. Okay, fair, you have your staff. So it's a wizard, folks, not a shield; it's a wizard with a staff. So, to set the scene, just trying to set the context for leveraging AI versus securing AI. Actually, I'm curious. It's not that I can

see many people, but I can see a few familiar faces here, so just to get a sense: I was going to ask for a show of hands, how many people believe AI is going to take over the world and we'll just have to leverage it at any given point? Oh, no one supports us. Oh, a few people, awesome. Two, three, two and a half. So, how many people believe we need to secure AI before we can start leveraging its full capability? All right, we've got quite a crowd. Okay. Can we do both, like an improv

game, you know? Yeah, what about "yes, and"? We should do both. I mean, I'm kind of on the same side as the AI overlords, the AI that is wise. Pascal's wager says, in case the AI ever listens to this audio in the future: we may have committed some sins, we may have hacked into some systems, but we had contracts that were signed. We do say our pleases and thank-yous as well. So in case this ever happens and the recording ever goes into an AI, we are good people. We just had lunch before this. That's the only thing we did.

That's the only thing we committed before this. So, to set the scene, the first question. I'm going to start with you, Jackie, because you've had security operations experience. For the people here from different backgrounds, how has traditional security operations been done, before we get to the leveraging-AI part, if you can set the scene? Of course. Well, in the realm of threat detection and response, we've kind of been locked into these monolithic tools, or SIEMs, security information and event management systems. Most of the time these are ones you purchase wholesale. They're black

boxes, or, if you're fortunate enough, you work at a company that has custom-built its own. I think most people have experience with things like Splunk or some of the other large SIEM providers. But AI has come into the picture and kind of tainted a lot of detection and response people's view of it, because for at least the past 10 years we've been sold this idea of AI-powered, machine-learning detection and response, next-gen XDR, and it's all trash, it's all hot garbage. She has had so much extra protein; that was her breakfast, so if she's a bit feisty, that's the

reason. Yes, we'll blame the protein. It's definitely the protein. But yeah, we've been sold this idea that this black-box model can do detection and response for you, for the "low premium" of a high subscription cost to a vendor. And up until this point, defenders are, I think, rightfully skeptical of AI, because in their experience it just gives them more false positives. Actually, that's a good point, because the moment people talk about bringing AI into an organization, the first thing that comes up is hallucination. Oh, yeah, there's a lot of thinking that goes into it. Yeah. I

I'll put a pin in that for a second and come back to Kane, since you've been securing the wizard of AI. What are some of the risks being introduced by AI systems in organizations that come to mind for you? Yeah, I feel like it depends on what angle you're looking at it from. If you're working at an AI provider, you have a very different set of risks than a standard company. And a lot of those risks are just SaaS risks plus-plus, in their own way. They add more layers that can carry risk, and more areas

that can be compromised, and they just sort of raise the risk threshold a little bit. So I wouldn't say there's anything super specific, but it makes things worse in general, and it's more effort you have to put into securing that tool set. Right. And I guess securing and detection and response go hand in hand. To what you were saying earlier, whether it's Splunk or any other SIEM, all of us are familiar with false positives; a lot of level-one analysts spend most of their time just triaging incidents: is this

even a false positive, or should I call DEFCON 1 on this? So how can an organization leverage AI to tame that dragon for detection and response? Yeah, that is a great question. Also, can I pump up my talk tomorrow? For sure. Please do; we need to pump up both your talks. Tomorrow I'll be presenting with my colleague Peter about some tools we built, actually using Claude, which is Anthropic's LLM, and Claude Code, and then we actually do a lot of the investigation and triage using Claude. And so I think for me the

difference in leveraging AI right now is that instead of a black box, where alerts go in, something gets spit out, and you have no idea how it got there, especially with these models that have extended thinking, you can actually see what prompts go in. You can tweak those prompts, and with the outputs you have far more control over seeing what's happening. You can also leverage things like best-of-n: you can have a model triage a detection with the same prompt n number of times, and then choose the best response out of those. So that's the power for individual teams leveraging generative LLMs to do this work.
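Sketched very roughly, and with hypothetical names (`model_call` and `score_response` are stand-ins for illustration, not any real SDK), the best-of-n triage idea described here might look like:

```python
import json

def score_response(response):
    """Toy scorer: prefer responses that commit to a verdict and
    cite evidence. In practice the scorer could be a judge model."""
    score = 0
    if response.get("verdict") in {"benign", "suspicious", "malicious"}:
        score += 1
    score += len(response.get("evidence", []))
    return score

def triage_best_of_n(model_call, alert, n=5):
    """Ask the model to triage the same alert n times with the same
    prompt, then keep whichever candidate response scores highest."""
    prompt = f"Triage this detection and assess severity: {json.dumps(alert)}"
    candidates = [model_call(prompt) for _ in range(n)]
    return max(candidates, key=score_response)
```

With a non-deterministic model, the n candidates differ, and the scoring step picks the most defensible one rather than the first one.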

There's just so much more visibility; it's no longer that black box of "why am I getting this response?" But how do you balance hallucination, then? Maybe it's my bias, and I don't know if anyone else has this, but we've been hearing about hallucination as the number one thing people talk about: oh yeah, you should use AI, but be careful, there'll be hallucination. So how do you balance that? Yeah, we'll talk about this a little tomorrow too, but hallucinations are basically the model being super helpful and coming up with convincing-sounding answers. And in some cases, we actually want to

encourage this. We don't want to encourage the model to make up events that never happened, but we do want the model to break out of playbook-style, rigid human thinking and have creativity, because anyone who works in incident response, or in any open-ended investigation, or even in fixing bugs, knows that most of the time our most incredible ideas come when we're doing things creatively, not the same way we did them before. So actually encouraging models to think for themselves and "hallucinate" investigative actions you wouldn't have thought of is actually kind of good. But you want to box them in a

little bit, right? You don't want them to come up with, say, network logs that are just completely untrue. Wait, so we want models to hallucinate? Yeah, a bit, within boundaries. I mean, why not let your models have a good time, too? Fair. So we let them hallucinate, and to your point, I agree it might bring up some creative things you may not have thought of. I'm with you on that one. In terms of

building a threat model, and Kane's talk is straight after this, by the way, if you want to join: how do you even build a threat model for an AI system? Where do you even start? I imagine threat modeling an AI system is not the same as for the apps that people listening or watching are used to. Is it "what STRIDE model should I use," or something else entirely? How do you even start threat modeling an AI system that you let hallucinate? Yeah, it's an interesting question, and again, spoiler, that's a lot of what my talk goes into after this. So if you're

interested, feel free to come along. But the high level is that I like to focus on two areas; whatever model you use is fine. I think of it as access at the beginning: how are you interacting with the AI? Desktops, phones, where are you accessing it from? And on the other end, what integrations do you have? What is your AI talking to? Is it talking to your Jira servers, or your Salesforce, or whatever? Those two things introduce the most risk, in my opinion, because they increase the surface

area of the things that can go wrong. And it gets even worse when you start connecting it to customer data stores and doing public customer support, because then it's not your employees; it's an unknown third party that can potentially do weird things. Could you expand on the authorization piece? It seems like you can't spend a day on the internet now without hearing about MCP and A2A and whatever else comes with them. How does that play a role in your threat model, on the authentication and authorization side? Yeah, it's interesting, because especially with MCP, they've got a spec for authorization in

the protocol, which a lot of people have had problems with, let's say, and there are definitely a few blog posts on that worth reading. But there are ways you can encapsulate it as well. A bunch of vendors, I think Cloudflare did one a few weeks ago, and I think Merge has one now, let you host MCP servers publicly. So it's not a thing running on your workstation anymore; it's a public server all of your employees access, and rather than having thousands and thousands of agents across all your laptops, you have this one server that connects to everything, which, from a

security point of view, I prefer: threat model one thing rather than a thousand different versions of open-source code that people are running. Good point. Did you want to add something to that growing threat model? Just on MCP servers: I totally agree that having an open standard is a great first step, and then a first pass at authorization and identity for agents and for MCP servers. I really love what I'm seeing coming out of Cloudflare, and it encourages the maturation of this technology, especially by security practitioners, so we can actually get

the standard that is the most secure. And you've got to start somewhere; that's the thing at the end of the day. Even taking this further into use: what we found is that we can build triage bots using LLMs to then threat model our AI tools, building up a corpus of info you can ingest and having the AI do the triage rather than doing it yourself, especially since every tool is an AI tool now. I don't want to have to do this hundreds of times for every vendor I use. The next-gen AI agent that's out there everywhere; it feels

like everything has a next-gen AI agent these days. That's an interesting point about building capability as well. So, for your talk, and we've been talking about detection and response for infrastructure that's using or running AI systems: where do you even start, especially if you already have a team? Many of us already have a security operations team, we'd like to do threat detection, but sometimes we don't have the resources. Yeah. So I think one of the most important things we found is a base technical stack that really allows integration with this tooling, and it

basically comes down to how you set up your technical stack. Make it engineering-forward, because you can think of models as software engineers: you give them tools to use, and their efficacy depends on how open your stack is. Are you using common programming languages? Are you using open-source or well-documented tools? Are you using tools with very good APIs? Because when you think about giving a model the ability to do work on your behalf, you actually need to give it hands, access to things, which is the MCP servers, the tools. And so I think a good place to start, if you're

starting from square one, which honestly a lot of us dream about, coming into a company and getting to build from scratch rather than "here's the legacy SIEM, good luck": really focus on tooling that is open and has well-documented standards. If you can use a SIEM that supports an open detection standard like Sigma rules, that's better than a SIEM with a proprietary, not-well-known format. And for us, we built most of our tooling using Claude Code, which is a coding agent that is really a collaborator.
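For reference, a Sigma rule, the open detection standard mentioned here, is plain vendor-neutral YAML. The rule below is a hypothetical example in the standard Sigma shape, flagging a CloudTrail StopLogging call; it is illustrative, not a rule from any production ruleset:

```yaml
# Hypothetical Sigma rule, for illustration only: flag CloudTrail
# events where CloudTrail logging itself is turned off.
title: AWS CloudTrail Logging Stopped
status: experimental
logsource:
  product: aws
  service: cloudtrail
detection:
  selection:
    eventSource: cloudtrail.amazonaws.com
    eventName: StopLogging
  condition: selection
level: high
```

Because the format is open, the same rule can be compiled to queries for different backends, which is what makes a SIEM that supports it easier to automate against.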

We use Claude in how we do triage and investigations, but we also use Claude to build our Terraform and our infrastructure. Yeah, I don't know about you, but I personally fall into the camp of security people who don't code, so I feel a bit nervous. I've been hearing about vibe coding the entire day. Oh, I vibe code all day. Which is why it makes me nervous: does that mean all those ideas I've had before, when I wished I was a programmer? Yes. Exactly.

So is it really that easy, even as a security person? Yes. Well, okay, I will say some of the best security people are or were software engineers, because in order to understand how to circumvent a system, understanding how the system works is great. But what I've seen with coding tools, especially Claude Code, and there are tons out there: Copilot, Cursor, Windsurf, Lovable, is that they've lowered the barrier to entry from ideation to prototyping in a way that means if you have these ideas, you can actually go and create a prototype relatively

quickly. And we could talk about whether that's a good thing or a bad thing; I think it's a good thing. I'm on team build-more. Kane, how do we threat model this one? It depends, right? And that's the typical security engineer answer. That's a consultant answer; I can drop the mic and leave now. So, I think people are going to use it whether we want them to or not, at the end of the day, and you've got to secure it in place the best you can. At the moment a lot of that is through education, because there's not

a lot of tooling out today that helps with this. And there are things like YOLO mode, where you can just ask Cursor to go do its thing, and then you cross your fingers and hope for the best. And you add "please don't make vulnerabilities, Claude" at the end, and that's kind of how you secure it. But I do think there are things you can do. Like I said, if you're connecting to sensitive integrations, that's where you want to put your effort, because at the end of the day you're not

going to be able to secure or threat model all of this stuff, right? So really focus down on where the risk is and what data it's ingesting. If you're connecting it to your log sources, maybe that's fine if it's just telemetry; but if you're connecting it to your customer RDS instances, then you go, oh, now I need to put a bit more effort into securing this. I guess to your point, it's focusing on data, identity, and access, rather than on whether you can vibe code, or whether the AI agent can do its own thing, MCP, whatever else comes

after. Yeah, exactly. And you might have some guidance like: use our provided MCP servers, don't go out to the internet and download random ones. A lot of it is typical package-management stuff, really, and it's improving over time; it's getting there. Talking about MCPs for detection as well, Kane raised an interesting point about using the right kind of logs, which is 101 for incident response and detection. On the Cloud Security Podcast, people have spent years learning AWS, Azure, cloud logging, all of that, and now AI systems are being attached to their existing legacy systems as well.

Some of them obviously may have started building applications today, so they're AI from day one, AI-native if you want to call it that. But for people trying to incorporate detection and response into legacy systems running in the cloud, how does Claude fit into a cloud environment, I guess? Yeah, I don't think you can separate cloud from most modern uses of AI, because in Claude's case you can run Claude on Bedrock, which is AWS, or on Vertex, which is GCP. So you can access the models that way, through the API. We also have a

first-party API, but most of what we build is in the cloud, either in GCP or AWS. And one of the great things, you mentioned legacy and people learning AWS, is that when I'm writing a detection signature, say for some random thing in AWS, because AWS and GCP come out with new services all the time and you wonder what the log even looks like, I can just ask Claude: okay, what are the fields I should look for? Claude says, here they are. Then I can prototype a detection signature, and especially doing detection engineering, I can throw up a PR and have a detection written in five or ten

minutes. Wow. So the entire lifecycle, from "we have a new service" to "we now have a detective control." And wait, how do you balance when to retire a control? There's a whole question: yes, you built one, someone's watering the plant, making sure it grows into this big tree, but now it's time to, well, hopefully not chop down a real tree, but in this context, retire a detection. Yeah, the detection lifecycle is a great question, because it's very nuanced, it's very different everywhere you go, and a lot of people have different

ideas. But the way I like to break it down is: you have alerting detections, things that immediately need a human to look at them, that need human intervention, and then you should have a ton of lower-confidence signals. And one of the best things about using AI is that I can spin up n number of Claude agents that just look over all of my non-alerting detections and surface the things that are interesting. And one thing that actually surprised us, I don't know if people here have used Claude, but Claude has a bit of a personality. I was running Claude

over a bunch of detections we had, and Claude wrote this report for me that said, you know, I'm seeing this alert fire a lot of times, and I worry about the security posture of a program that still has this as a firing detection. And I was like, okay, are you really working? What are you doing? I was like, "Well, we're just testing right now, Claude." But yeah, imagine it sends an email to HR: "I've mentioned this to Jackie five times and she's not looking at it. Why isn't she tuning this detection?" Okay. So, wait, are you using MCP connectivity to AWS? And in

terms of the foundational pillars Kane was talking about as well, I'm curious. Yeah. So you can think of MCP as an open standard for writing these connectors that you provide to AI agents. But really, under the hood, you can break everything down to tool use. Tool use is the ability to give a model actions it wouldn't normally have, or to coax it down a path. And so for us, we use a custom tool that we wrote. We could also use an MCP server, but we just wrote a tool that does querying

into our data lakes. Or rather, Claude wrote the tool that queries our data lakes. I was going to ask a question, if that's okay: how much of your stuff is custom to you, snowflake-special, versus usable by a wider audience? Great question. So, what we're building is SIEM-agnostic, ish. If you use a SIEM that treats how it works as a proprietary secret, I'm sorry. But I would say, of the tooling we're using, none of it is Anthropic secret sauce. Nothing

is available only to us. We're using models that are currently out, and everything we're building is in the cloud: Postgres databases, data lakes, things you can have in GCP, AWS, or Azure. So yeah. Obviously we've got two camps here, leveraging AI and securing AI. I'm curious, now that we know how to build, we can vibe code, let it hallucinate, come up with interesting solutions, and hopefully figure out a way to not talk to a developer and still work out what the hell they're doing.
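Under the hood, the "tool use" described a moment ago is roughly a loop: the model either answers, or asks for a named tool; the harness runs the tool and feeds the result back. A minimal, provider-agnostic sketch, where names like `query_data_lake` and the reply shape are illustrative assumptions, not any vendor's real API:

```python
# Minimal sketch of a tool-use loop. The model's reply is assumed to
# be a dict: {"tool": None, "content": ...} for a final answer, or
# {"tool": name, "args": {...}} to request a tool call.

TOOLS = {}

def tool(fn):
    """Register a function so the model may request it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_data_lake(sql: str) -> list:
    """Stand-in for a real warehouse query; returns canned rows."""
    return [{"user": "alice", "event": "ConsoleLogin"}]

def run_agent(model_call, question, max_turns=5):
    history = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        reply = model_call(history)
        if reply.get("tool") is None:   # plain answer: we're done
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish")
```

An MCP server is essentially a standardized, out-of-process version of the `TOOLS` registry above, which is why the panel treats the two as interchangeable.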

My question is: in the existing market we're in, with securing AI being this big unknown off to the side, and with us able to leverage something like Claude to make our own detections, what's the starting point for someone to enter this, to Kane's point? Are we just able to leverage existing cloud logs and application logs, put them into a data lake, as you were saying, and just go, "Oh, Claude, go hallucinate on it" and hope it comes back? I feel like we're stuck on hallucination; it just felt right to say. What else do

you find here? Is that where you'd go? Basically, we found that once you have the logs, which is the first thing you need, then you give Claude access and tools to both query your logs and do some processing. We have tools we've created that write standardized reports based on a detection signature, and there are a lot of ways you can experiment and create different tools. One of the most exciting things for us is the ability to rapidly prototype and run experiments. We can try different strategies of triage, different

modalities. We can have an idea like, okay, I want a thousand Claude agents to go and look over every log, versus, I want it to look only at the alerting detections and give me really clear reports. It's very exciting, because with the ability to have an idea, prototype, go out and test, and then get results, I've never had that kind of power before. You sound like a wizard already. But my wizard over here, on the other hand: I

love the passion and energy Jackie has about how AI is amazing. Without giving up too much about your talk, what can you share about what you found doing threat modeling across AI systems, both the SaaS ones and the in-house ones? Because I want to balance the picture: as excited as I am about AI and the amazing things we can leverage it to do, I'm curious what you found in the threat modeling you did. Yeah, what I find is, so my talk is about enterprise search. If you've used Glean, Atlassian Rovo, the

one Slack has; basically every vendor has one now. So I was looking at some of those tools, what some of the problems are, and what you find when you do this is that there are limitations. The biggest limitation, of course, is authorization. All the vendors have had to build on top of the already-existing SaaS APIs, but those SaaS APIs aren't always good, and then the layer on top of them isn't always good either, and when you build on top of something bad, bad things usually happen. So I find the issues are not the ones

you'll read about, like prompt injection, where you'll think this is the worst thing possible and we need to look at it; that really only matters if you're building public-facing platforms. The bigger risk with a lot of this stuff is who has access to what, and how you now have thousands of service accounts connecting all of this together. So again, it's a lot of the existing stuff that just gets amped up to 11. In that regard, because to your point, in the threat modeling space we have traditionally asked "what threats am I looking out for," and a lot of the conversation around threat modeling

AI systems goes with, oh, it's a dynamic system, I don't know what Kane would say next, like what can I put in the chatbot or wherever. But what you're saying is, what's the true reality of the internals? Inside an organization there's a lot of SaaS which is using AI. I'm just going to throw out a few names: Salesforce has an AI, Atlassian has an AI, Canva has an AI. Everyone has a customer-facing AI. Um, and that's obviously being used by other customers on the other side. Um, so you being on the other side of this, where you're obviously part of the consuming SaaS space yourself. You have

your own SaaS AIs that you're looking at, um, and Glean and everything else you mentioned as well. What's missing in the current approach to threat modeling? Or is it the same way to approach AI systems as well? Because a lot of people would be thinking, am I learning something completely new here, or am I able to just leverage what I know? There are a few new bugs and things like that, there are things like, like I said, YOLO mode and stuff, which is all kind of brand new. But again, I think if you've threat modeled a lot of SaaS tools in the past,

you'll pick this up pretty quickly. I do think that, um, probably what we'll see will be interesting. Like, MCP at the moment is a layer for our LLMs, right? It kind of sits in front of our already existing APIs. Yeah, I do wonder how long it will be until we go, we just don't need those original APIs, and we just LLM it. And at that point, that's much more scary, because MCP is basically taking the sort of wide-ranging prompts that I'm putting in and turning them into specific API actions that I can usually see. And so when I can't see those things in the future, that will be really interesting.
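The pattern Kane describes, an MCP-style layer of named tools that narrows broad prompts into specific, visible API actions, could be sketched roughly like this. This is a toy illustration, not any real MCP SDK: the tool names, registry shape, and audit log are all assumptions made for the example.

```python
# Toy sketch of an MCP-style tool layer: whatever broad prompt the LLM
# received, the only thing that reaches the backing API is a specific,
# schema-checked, loggable tool call.
TOOLS = {
    "list_buckets": {"params": []},
    "get_object": {"params": ["bucket", "key"]},
}

audit_log = []  # the per-action visibility Kane worries about losing

def call_tool(name, **kwargs):
    spec = TOOLS.get(name)
    if spec is None:
        raise ValueError(f"unknown tool: {name}")
    missing = [p for p in spec["params"] if p not in kwargs]
    if missing:
        raise ValueError(f"missing params for {name}: {missing}")
    audit_log.append({"tool": name, "args": kwargs})
    # A real server would invoke the underlying API here.
    return f"executed {name}"

result = call_tool("get_object", bucket="audit-logs", key="2025/06/panel.json")
```

The point of the sketch is the `audit_log`: as long as prompts are funneled through named tools in front of the existing APIs, each action stays individually visible to defenders. If the original APIs go away and "we just LLM it," that explicit registry is the layer that disappears.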

Yeah, for sure. Uh, we can audit them as well. Yeah. I think the thing that Kane is getting at, from the incident response and investigation side, is that we need to keep logs of what is happening. Because, you know, I'm very excited about how much I can do, how much my work is amplified by using LLMs, but you take that for every person at a company, even people who maybe have not historically interacted with infrastructure or technical systems but are able to now, and we're still at this really beginning, forming idea of what identity is

when it comes to agentic workflows. Um, and so the ability to trace back where actions are coming from, um, I think is going to be more and more critical. Especially for me, if I'm looking at an incident of, why did this server go down, and it's, oh, this API call came from Bedrock, it came from an AI, that doesn't actually help me. I need to know where it actually came from. Uh, and so tracing actions, I think, is critical. Are you able to use AI for going into those rabbit holes as well? I mean, I would say AI is pretty good at

going into rabbit holes. Um, but yeah, I think you need to guide it. Yeah. It's been pretty classic as well, right, that security teams don't really scale with engineering departments generally. Yes. And so I feel like we have to, even if you do not want to, even if you are one of the doomers who says, no, I will do everything manually, you'll learn the hard way. If your engineers are doing it, then you are going to fall further and further behind. I think that's such a good point, and the position I have is, we are not going to be able to

keep up as defenders if we are not willing to use this technology. If we are only on the side of, oh, well, MCP servers are vulnerable, or let's only talk about prompt injection, which is something I feel this community sometimes gets stuck in a little bit, the hacking or the breaking, then we won't be able to scale with offensive capabilities and offensive technologies if we are just waiting and blocking ourselves on, well, we'll wait until it's more secure, we'll wait until it's better. Um, and here we are, 16 years later, still talking about cloud adoption, I guess. So I mean, some

things will always be slow, I imagine. I love both the perspectives, but I also want people to walk away with a starting point. Um, you know, they heard how passionate you are about Claude Code, and people should definitely go and try that, even if you've never coded before. Uh, open up VS Code or whatever your favorite editor is. What's a good starting point for someone who's inspired after hearing this to start leveraging AI? Yeah, I feel like trying out, um, some of the coding assistants. There's a ton of resources out there. I feel like Anthropic's documentation is pretty great. Um, no bias. No bias. Completely. Yeah, I'm completely not

biased at all. Uh, and yeah, there are a lot of YouTube videos and things to kind of talk you through. And if you have an idea of something, um, that you would have liked to build, uh, prototype it yourself. You know, stand up your own AWS or GCP account. And, um, I don't recommend you do this in your corporate production environment, live. So I'm glad you mentioned it. Uh, but yeah, do it in a sandbox environment, or if where you work provides nice sandboxing, you can kind of have a playground. But I would say definitely don't be afraid to just try things. So is there a, I

don't know, like an S3 bucket going to the internet or whatever. Is there a threat that comes to mind that's probably the easiest one to start with for this kind of vibe coding? Mm, so I think one interesting thing is just throwing logs at an LLM and asking it to come up with patterns, or, you know, if you have an idea about a detection signature. But also, one of the interesting ones for me is, I want to create a system to collect logs, or I want to run some kind of analysis over a

bunch of files, um, and set up the infrastructure and systems to do that. Um, I found Claude Code to be really helpful with that. Um, and how do you scale something like that? Because there's one thing making one. Yes. And now you're like, okay, now I do this across 300-plus AWS accounts or GCP accounts. I think, so what we start with is, you know, an idea, and then we talk to Claude, and then we come up with a design doc, and, you know, a good design doc has components about scalability. Uh, and I found it's really collaborative, like, with my

colleagues, we'll come up with these design docs, we'll iterate on them, uh, then we will kind of start broad and then move into the specificity of an actual technical deployment, and then we'll go into the actual vibe coding with Claude, where we take the design doc in a markdown format, drop it into a repo, and then, um, let Claude cook. Um, I mean, we're hallucinating, we're cooking, I don't know, we need to update the vocabulary a bit. Kane, on the flip side, for threat modeling as well, what's a good starting point for AppSec folks, or people who've been on the other side, and

how can they scale that as well? Yeah, here's my kind of opposite take, I guess, which is, if you are a cloud security engineer and you're building stuff, you should still learn to code manually, and connect APIs and do your day-to-day manually, so that you really understand it, because if you use vibe coding to do that, you're kind of stealing that learning away from yourself. However, say you need to build a UI and you're not a front-end developer, you just need something to show, say you need to make a button, yeah, then go nuts, vibe code that. And I feel like that way you will gain knowledge in your

domain and you will keep that. And so specifically with something like threat modeling, do loads of it. Build up a big knowledge base when it comes to that stuff. Keep doing it so that you're good at it, and then you can tell what the AI is good and bad at in that regard. And that way you can use it as a triage step and say, look, this one's high risk, it's done half the work for me, but here's a bunch of stuff I still need to do, and I needed to do the learning first to do that. Yeah, that's such a good point. Yeah. Oh, awesome. Uh, I think I just

want to take a moment for Dana. Okay, by the way, for folks who have questions, feel free to use the QR code. Uh, we've got someone walking around here as well. If someone does have a question, feel free to raise your hand. Dana will find you in a few seconds. Yeah, Dana will find you. Sounds like a threat. Yeah. Does anyone have any questions they would want to ask before we proceed to the next questions? I can't even see people, I just see lights. Yeah. So I'm just assuming Dana can see them. I'm like, okay, cool. So, we'll find, okay, but

if you have one, feel free to find these people afterwards as well. I've got a couple more questions coming towards the tail end of the episode as well. Uh, I was going to say, this is kind of a fun question. We've been very serious so far. We're totally not having fun at all. So I've got some fun questions to, you know, just lighten up the mood a bit. Um, no AI was involved with these. I didn't let them hallucinate. Uh, the first question is: if AI could protect one thing in your life besides your passwords, what would you want it to guard? Uh, so my answer, and I think Kane might have feelings

about this too, is I would want something to protect my dog. Um, I have a Pomeranian. Um, and you know, sometimes I can't be home with her, so I would like something that, you know, like a shield, walks around with her. Yeah. And makes sure she's okay and that she has enrichment time. Um, yeah. Plays with her as well when she's bored. Fair. That's a good one. What about you, Kane? I can't steal that one. I have a little 8-week-old Pomeranian at the moment. Oh, right. Okay. So here's a funny story, right? My friend lives in rural Australia, and

he has a big homestead, right? And he has these fat wombats that come and steal his strawberries every night. And he's shown me videos of them. So I think we need to make a startup for, like, wombat detection and response or something. For all the non-Australians, wombats are like giant rats or raccoons, for lack of a better comparison, if people have not seen them, and they're just everywhere in Australia. Uh, okay, great answer. I've got the second question. What's one totally ridiculous thing you think AI should have security for? Like, I think I would love for it to protect my crypto wallet, but, oh, what's

a ridiculous thing? Oh man. Um, I'm thinking, oh, this is, I'm just going to say it, I have no filter at this point. Like, I talk to AI a lot as, like, a therapist, or, you know, for interpersonal things. So I would like protection for when I'm talking, um, you know, to an LLM about, like, emotions. Yeah, to remind you. Yeah. Yeah. To remind you that, hey, for everything you put in here, you may hear answers which I may be hallucinating, I guess. Yeah. Yeah. So don't take my advice seriously about your life choices. Fair.

Uh, what about you, Kane? It's really hard to follow the wombat answer, you know? I feel like that is a pretty ridiculous one by itself. So, fair. Okay. So that's the most ridiculous thing. Fair. I think that's pretty ridiculous. Yeah, I would think. Oh, wait. So then the next question is, if your AI could be a spirit animal that would walk around with you. Don't say wombat now. Should we answer at the same time? Oh. Oh. Okay. Right. So if people are ready for it, the question here is: if AI security could be your sidekick, uh, what kind of animal form should it take? Which animal would you pick

and why? So let's do it at the same time, I'll let you come to the mic as well. One, two, three. Pomeranian. All right, I'm going to say a different one: Goldendoodles, you know, someone needs to stand up for the Goldendoodles out there. Chicken nugget. Wait, why Pomeranian? Why, uh, does your sidekick need to be your best friend in life? Yeah. Well, I just think, since I have a Pomeranian and she's already my sidekick. Yeah. I mean, is this just, like, dogs are man's best friend? I mean, at this point in time, I should just end the show.

Yeah, we're actually going to do a slideshow of our dogs now. We've got an AirDrop going on here, so the dog show is coming, right. But, uh, that was the episode that we wanted to record. Thank you so much to everyone who joined us, and at the overflow rooms as well. Thank you for engaging us in the conversation. I don't know if we have been able to sway you on the AI side of security, whether you're still testing the ground or happy to test AI, I mean, behind the passionate Jackie that we have here, or still threat modeling your way with the wizard Kane that we have. Hopefully you get to slay some dragons,

AI dragons, for the rest of the conference as well. But thank you so much for, uh, joining us and, uh, being part of the podcast live as well. Thank you so much. All right, thank you. Thank you, folks. We've got some gifts for you from our great sponsors. Thank you. Uh, AI sponsors. Kane, thank you. Here you go, Jackie. Well done. Oh, I don't get one because I said AI? I said AI. I didn't get my iPhone. You get a thank you out of this, guys. Oh, thank you so much. Well, thank you, folks. Uh, that was a great conversation. I hope it all got captured on tape. For all the rest of

you folks in the room here, uh, if you want to stay right here, we've got Future-Proof Your Career: Evolving in the Age of, what? AI. Of course, we get paid every time you say it. Uh, don't forget, coffee upstairs till 4 p.m. Uh, there are also headshots still available, thanks to Opal Security, great folks over there, and I guess it's technically out and around by the concessions. Uh, have a great rest of your BSides. We get to do all this again the rest of this afternoon and all day tomorrow. Thank you for coming. Thanks, everyone. Thanks.