
>> I had a broad question as well: from what you see, who comes to you? Large companies, small companies? When do they come? Do they come when they're already live, or...

>> The answer to this is: everything, basically. They come when they're live, they come when they're not live. This doesn't change much from our usual pentest business. They just have AI now, and they say, "Oh, we have AI now. Can you test AI?" And then it's, "Let's first talk about this, right? Does this even make sense?" So,
sometimes when customers approach us, we tell them, "Let's discuss your architecture first, rather than doing ten days of prompt injection against your AI system." It makes far more sense to think conceptually about what you actually built. Then, if they need it politically, we can still do prompt injection and show them, "Yes, it works," so they have something on paper to say, "This is not good. We should draw our trust boundaries differently." We do that too, but it's all over the place, basically. Everybody's implementing it, everybody's using it. And sometimes our customers aren't even aware that they have it. So yeah, it's everything.
>> I can't answer that question more narrowly, to be honest.

>> Did you also look at retrieval-augmented generation, RAG? I see that often in smaller projects where people don't want to hand all their internal customer information to an externally hosted GPT. They have a document pool in a local installation and use it to enrich the prompt, which is then sent out to the...

>> That highly depends on the architecture. If you look at GPT, the examples we showed you with the API calls, that is basically something you would use
if you stay within the OpenAI ecosystem: you want an OpenAI assistant, but you don't want to give them your data, so you basically wrap it with an API call. The AI comes back to you, you have your data pool there, and you have a backend, and that backend provides information for the AI to work on, so to say. You can wrap this.

>> Is there a specific way of attacking those setups?

>> Not necessarily, no. Because at the end of the day, it's a trust boundary. You have a very strict trust boundary there, and attacking the AI doesn't give you the documents in that sense, right? It might
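To make the wrapping pattern concrete, here is a minimal sketch: the document pool stays local, a backend retrieves context and builds the enriched prompt, and only that prompt crosses the trust boundary to the hosted model. All names are hypothetical, the retrieval is a naive keyword match standing in for a vector search, and the external model call is stubbed out so the sketch is self-contained.

```python
# Hypothetical local document pool that must not leave the network.
DOCUMENT_POOL = {
    "doc-1": "Invoices are stored for ten years per retention policy.",
    "doc-2": "Customer PII must never leave the internal network.",
    "doc-3": "The VPN gateway runs on gw.internal.example.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval standing in for a vector search."""
    words = set(query.lower().split())
    scored = sorted(
        DOCUMENT_POOL.values(),
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Enrich the user question with locally retrieved context."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

def ask(question: str) -> str:
    prompt = build_prompt(question)
    # Here the backend would call the externally hosted model (e.g. the
    # OpenAI API) with the enriched prompt; stubbed for this sketch.
    return f"[model response to {len(prompt)} chars of prompt]"
```

Only the assembled prompt ever crosses the boundary, which is exactly why the interesting attack surface is the backend that assembles it, not the model itself.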
give you access, depending on how the backend works; you might be able to extract a lot of data. But that's comparable to the classic case in a web application where the first document is ID equals one, the next one is two, and you just enumerate all of them. You're probably attacking the backend rather than the AI to get the data.