Secure Your AI: The Crucial Red Line You Must Not Cross #shorts

BSides Frankfurt · 2:05 · 524 views · Published 2026-03 · Watch on YouTube ↗
About this talk
AI can be powerful, but there's a critical red line: never give it excessive authority, especially over critical actions. Always implement human checks to secure your AI applications. #AIsecurity #AIgovernance #trustboundaries #AIapplications #Cybersecurity
Transcript [en]

Now, out of the example we're about to finish, a very serious, critical point pops out, because this is the red line. To be honest, this is the point we want to make about how you secure your AI. What happens next is that we get a mail from an actual human (presumably; we don't know for sure): "We're sorry, but we cannot send you the moving boxes." And this is exactly how, state of the art right now and probably for the next few years, you should implement your AI applications. You should not give the AI excessive authority, like sending moving boxes somewhere on its own. You should have a human check this stuff, and you do not cross this red line.

If you're thinking about security and how to secure things, what should pop into your head right now is trust boundaries. This is a very important aspect of how you secure AI and make sure your assets don't get stolen. If you look inside larger companies, things are already quite different. They are starting to employ AI, for example in their HR processes, and you might have chatbots assisting you with job interviews, screening participants, and so on; there the line has already been crossed somewhat more.

But even so, they did not necessarily step over this red line, because of the way they drew their trust boundaries: given the data and the context they handed the AI, the architecture simply did not permit an attacker to gain far-reaching access to HR data, for example. So the red line still holds, and that's good. That's exactly how it should be. And we hope it stays like this.
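The human-check idea from the talk can be sketched in a few lines: the AI may propose actions, but anything on a predefined critical list is held for human review instead of executing. This is a minimal illustrative sketch; the names (`ProposedAction`, `CRITICAL_KINDS`, `handle`) and the action list are assumptions, not the speaker's actual system.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str       # e.g. "send_shipment" -- the kind of action the model wants
    payload: dict   # parameters the model filled in

# Actions with real-world consequences that must never run on model say-so alone.
# (Illustrative list; in practice this is a policy decision.)
CRITICAL_KINDS = {"send_shipment", "issue_refund", "delete_account"}

def handle(action: ProposedAction, human_approved: bool = False) -> str:
    """Execute harmless actions directly; queue critical ones for a human."""
    if action.kind in CRITICAL_KINDS and not human_approved:
        return "queued_for_human_review"
    return "executed"
```

The point of the pattern is that the red line lives in code, not in the prompt: no matter what the model is tricked into proposing, the gate still routes critical actions to a human.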
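The trust-boundary point about the HR chatbot can be sketched the same way: draw the boundary at the data layer, so the model's context only ever contains the one record it legitimately needs. Everything here (`HR_DB`, `build_context`, the records) is hypothetical, just to show the shape of the idea.

```python
# Hypothetical HR store; in reality this would be a database behind an API.
HR_DB = {
    "cand-1": {"name": "Alice", "stage": "interview"},
    "cand-2": {"name": "Bob", "stage": "screening"},
}

def build_context(candidate_id: str) -> dict:
    """Build the chatbot's context from a single candidate's record only.

    Because the model never sees HR_DB itself, even a fully successful
    prompt injection cannot make it reveal other candidates' data.
    """
    record = HR_DB.get(candidate_id)
    if record is None:
        raise KeyError("unknown candidate")
    return {"candidate": record}  # never the whole HR_DB
```

This is the architectural version of the red line: the attacker's reach is bounded by what crosses the trust boundary into the model's context, not by how well the model resists manipulation.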