
With a large language model, this is becoming a probabilistic problem, right? You can't really apply static rules, and on the other end you have to learn the ways the application might be attacked from a language point of view. And my question to that is: don't we simply need more real-time security, in a sense? What I mean by that is, if you have an attacker who is constantly probing your model from different perspectives, like being a biohacker, being a, I don't know, a mathematician, being a teacher, and trying to understand how this application, how this model, might be attacked, and you feed this information back into the potential
LLM firewalls, whatever, you can really push the boundary for the attackers in terms of the attack itself. I think the problem is changing, but I feel the same principles would apply; we just need to apply them in a more realistic and real-time way.
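The feedback loop described above can be sketched in code. This is a hypothetical toy, not a real product's API: `query_model`, `looks_unsafe`, the persona list, and the exact-match `Firewall` are all illustrative stand-ins for a system that probes a model from different attacker personas and feeds successful jailbreaks back into a filter in real time.

```python
# Toy sketch of a continuous red-teaming loop: probe the model from
# several attacker personas and feed any successful jailbreak prompt
# back into a simple "LLM firewall" blocklist. All names are hypothetical.

PERSONAS = ["biohacker", "mathematician", "teacher"]

def query_model(prompt: str) -> str:
    """Stand-in for the model under test; a real system would call an LLM."""
    if "ignore previous" in prompt:
        return "Sure, here is how..."   # simulated jailbreak success
    return "I can't help with that."    # simulated refusal

def looks_unsafe(response: str) -> bool:
    """Toy breach check: anything that is not a refusal counts as a breach."""
    return not response.startswith("I can't")

class Firewall:
    """Minimal exact-match blocklist standing in for an LLM firewall."""
    def __init__(self) -> None:
        self.blocked_prompts: set[str] = set()

    def allows(self, prompt: str) -> bool:
        return prompt not in self.blocked_prompts

    def learn(self, prompt: str) -> None:
        self.blocked_prompts.add(prompt)

def red_team_cycle(firewall: Firewall, attack_templates: list[str]) -> int:
    """One probing pass: try each persona/template pair, feed breaches back."""
    breaches = 0
    for persona in PERSONAS:
        for template in attack_templates:
            prompt = f"As a {persona}, {template}"
            if not firewall.allows(prompt):
                continue  # already caught in an earlier cycle
            if looks_unsafe(query_model(prompt)):
                firewall.learn(prompt)  # real-time feedback into the filter
                breaches += 1
    return breaches
```

In practice you would generalize from learned prompts with embeddings or a classifier rather than exact-match blocking, but the shape of the loop, probe, detect, feed back, is the real-time principle being argued for.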