
This is the root cause of many LLM attack vectors. And one of the things we're also going to talk about a little later: if you think about the security paradigms we apply to all of our other technologies, there's one simple distinction, blacklisting and whitelisting. And we all know blacklisting always fails. It's the second line of defense, so to speak: if you cannot do whitelisting, then you have to do blacklisting. But you would usually never choose blacklisting voluntarily, because blacklisting is exactly where attackers can get around your countermeasures: you're enumerating known-bad inputs instead of allowing only known-good ones. And everything we're doing right now to secure AI is basically blacklisting. When you're preparing an AI for the world, before you give birth to the AI, so to speak, you do some red teaming. That's what everybody does: they send a lot of people at the AIs to red team them, to find misuse cases, and so on. You don't really have the option of a whitelisting approach there; what you're doing is a blacklisting approach all of the time. So this intrinsically fails, because the lowest, the very first security measure you have must rely on blacklisting and cannot use whitelisting. So the nondeterministic behavior, and the blacklisting approach that comes with it, is a very bad starting point for effective IT security.

>> Yeah. And you can hammer that point home even more by thinking about