
So today I'll be talking on the subject of shadow AI being the newest exfiltration channel. My name is TJ, by the way. I'm a security analyst and I work for Lark Information Group; I think we're just somewhere around the corner here, and I know we have offices in Reading and in Bristol as well.

A quick word on my background: I trained as an IT support person, moved from there into IT infrastructure, and from there into cyber security. I started in cyber security as a penetration tester, a certified ethical hacker, but over the past couple of years I've found myself doing more governance, risk and compliance, so I sit somewhere between the technical side and the management side of things.

A couple of years ago, off the back of an audit finding, I was tasked with implementing data loss prevention for the company where I work. We're roughly a thousand staff, give or take, and the company is thirty-plus years old, so it was not possible to just switch on data loss prevention without first seeing what was coming in and what was going out. A disclaimer: we're a Microsoft environment, so a lot of this will be based on Microsoft tooling. It took me about a year. What I did was create the policies, implement them in Purview, and turn them on in simulation mode. The reason is that simulation mode sees everything coming in and going out and alerts you, but it doesn't block, doesn't encrypt, doesn't enforce anything. For me it was a genuinely revealing moment: you'd be shocked at what comes in and, especially, at what goes out of organizations. I guess that made me a champion of data loss prevention, so I started talking about it and encouraging people to do whatever they can to keep their data safe. And, kindly speaking, not to knock any employee: often people genuinely have no idea that what they're doing, whether by putting data in the cloud, in emails, or on shared drives, is exfiltrating data one way or another.
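As an aside, that "simulation mode" idea, detect and alert but never block, is easy to picture in code. This is a minimal, hypothetical sketch, not Purview's actual engine: it flags email addresses by regex and card-like numbers by a Luhn checksum, and in simulate mode it only reports findings.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used to validate card-like numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan(text: str, mode: str = "simulate") -> list[str]:
    """Return DLP findings; in 'simulate' mode we only alert, never block."""
    findings = []
    for m in EMAIL.finditer(text):
        findings.append(f"PII:email:{m.group()}")
    for m in CARD.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):  # cuts false positives on random digit runs
            findings.append("PCI:card-number")
    if mode == "enforce" and findings:
        raise PermissionError("blocked by DLP policy")
    return findings  # simulate: report everything, block nothing
```

The point of running in simulate mode first is exactly what the scan above does: you collect the findings and learn what is actually flowing before you ever turn on `enforce`.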
That said, here is what we're going to look at: the governance gaps that exist today; what controls exist, and where and how they work; where most organizations actually sit, which I've separated into four levels; the real gaps that remain even with good controls in place; and regaining visibility without blocking innovation.

Unfortunately, or fortunately for some people, artificial intelligence is the hot-button topic today; there is no room you step into where no one talks about it. Although, as an aside, I always like to talk about the difference between artificial intelligence and machine learning: often what we entrust to these AI tools could be done just as well with machine learning. Because AI is the hot thing today we reach for it, but some things might be achieved simply by mapping A to B or C to D, and work as well as, or sometimes even better than, the artificial intelligence tools do. But who am I to judge?

On the governance gaps: a survey done in the US by Cybernews found that 59% of employees today use AI tools not approved by IT or by management, and 29% of organizations have no AI acceptable use policy. And per The Hacker News, 40% of files uploaded into generative AI contain personally identifiable information or payment card information.
information. Um even the mature environment the governance gap between two deployment and AI policy is real and growing. I I'll give you an instance right. So a couple of years ago when I implemented data loss prevention and um and we went went to CAB and then we turned it on. One of the one of the controls we put in place was so that when people paste data in the browsers it intercepts at that point. It reads the data and know if this is good to go or not. That thing takes a couple of seconds, sometimes a couple of microsconds to process and then see if this has the data you're looking for or not before it checks it
complete and things like that. Um, but then the business cried back. Basically, staff cried back. Oh, this is taking up our time. Like I said, it's a couple of microsconds. So, I went back to my manager and said, "Oh, this is what people are saying." Um, then he said, "Okay, you know what you're going to do? Turn it off at this point so that it doesn't roll back years of what we've worked on." So at that point we had to disable that part of which is basically scanning was pasted in browsers. Uh but what am I trying to say? The reason why I'm trying to say this is that despite how mature we are in that in terms of
data prevention there's now a gap somewhere because at that particular point in time we were not scanning what people were putting on their piston in their browsers basically. Um for uh this is just some further statistics about a breakdown of of some of the stats that I showed you a couple of uh note that this survey was done in the US. We're going to come to to to what's up in the United Kingdom. Um there's 59% of employees use AI tools and this is the breakdown. More interesting is this uh the uh direct manager is aware and doesn't support it. So basically there are way and yeah it's so it's it's open season right use AI as much as you
But that's not actually how to go about it. There's also this: 75% of employees who use unapproved AI tools share sensitive data through them, and you can see the kinds of data being shared. If we recall, a couple of years ago there was the news story of the Samsung employees who pasted proprietary data into ChatGPT. These are scenarios that, by any definition of the word, cannot be good, so we need to look at what we should do and how we get in front of some of these things.

Executives and senior managers are the most likely to use unapproved AI tools for work. Being a compliance person, I'll say there's something referred to as management body language: policy and its execution come from the top down, and if management has no problem using unapproved AI tools, it tells everyone they don't really care. If your employees learn by watching you, they'll obviously want to do the same; after all, it improves productivity. And 20% of employers do not have an official policy regarding the use of personal AI tools for work; in fact, they have no policy whatsoever regarding any AI tools for work. So it's not just personal tools, although those are part of what we're talking about here today.

Then, coming to the United Kingdom, a survey showed that 48% of employers know or suspect that employees in their organization are using AI tools that have not been officially approved; 64% are concerned that unregulated AI use could lead to data security and compliance risk; 34% of businesses do not have formal policies or guidelines governing AI usage; and 37% have not communicated to staff their expectations of how AI should be used.
First of all, that tells us there's a gap somewhere. If I don't know what's expected of me at work, then it's open season. We'll talk about the controls that are in place and see how organizations try to get in front of it, but the truth is: if you don't tell me what's expected, I will not know, and I will assume everything flies.

So, shadow AI: it's really shadow IT 2.0, and we're just reliving that moment. I started working in a bank in 2007, when there weren't many of these controls, and as an IT person I watched people, some for productivity, some just to keep themselves busy at work, install tools that had not been approved by IT. We tried hard to get in front of all that, and that's where things like joining systems to the domain and domain group policies came in, to ensure people could not install software themselves and instead had IT come round to install it. We're seeing the same thing with AI now. Because when you say, "You can't use ChatGPT or Claude and the like," and I really want it for work, what I'll do is find another way to use it that you're not aware of. That's not what we want to encourage; we want organizations to get in front of it, because AI is here to stay and there's probably nothing anyone can do about that, whether you're in agreement or not. So, the common tools: personal ChatGPT, Claude and Gemini accounts;
then there are browser extensions with AI capabilities; unmanaged coding assistants such as Copilot, Cursor and Codex; consumer summarization and document-analysis tools, and we've seen recent events, around OpenClaw for instance, and tools used for meeting summarization, where all of that content goes into AI platforms; and unofficial API integrations built by developers.

Then look at what's missing. No enterprise login or audit trail: if I'm using my personal ChatGPT, the organization has no idea what I'm pasting into it. No policy enforcement or data classification. Endpoint DLP bypassed when the tool is accessed via browser paste; I told the story of how we didn't get in front of that early. No visibility into model training or data retention. And no contractual data protection with the vendor: once your data is out there, take the Samsung incident, it was a personal account, and Samsung didn't have a contract with OpenAI in any way. So not only does OpenAI now have access to that code base; because it's a personal account, and more so if it's a free or trial account, they can use it to train and improve their models. Those are the kinds of things we need to be very wary of.

So, really quickly, we've classified organizations into four levels. Level one is small businesses that, maybe for budget reasons, have none of these tools in place, neither data loss prevention nor web and content filtering; all they have is a basic firewall and maybe
email DLP. Level two is organizations with web filtering and partial visibility, but gaps remain: there are visibility logs, but AI isn't specifically flagged. These are probably mid-market and some enterprises; some have Zscaler or a proxy deployed, but without AI specifically addressed in the policy. Level three is AI-aware controls deployed, but on unmanaged devices: in environments that allow bring-your-own-device or working from home, you begin to lose some of those controls compared with people working in the office. And level four is a full AI governance program. Even then there are gaps: unfortunately you cannot do away with insider threats via sanctioned AI tools, and we're going to look at the insider threats that exist today and how they can harm the business. These are mature enterprises with all the controls in place, and still there may be a way data can leak from that environment.

So, the AI exfiltration path. Really quickly: we have the user who pastes in whatever it is.
This might be a good time to talk about the three scenarios. Scenario one: a user says, "Please help me word this email better," and has erroneously pasted PII, PCI data, or a code base they should not have pasted into the browser. Scenario two: the developer who wants to debug or improve their code, and pastes that code in, or uses tools like Codex and the other assistants meant to help developers develop. Once they paste it into the AI website, they've lost control over it. The steps before that are the only places where the organization still has control of the data; once it's pasted into the website and has gone off for processing, that data is out there in the wild. It could be with anybody: someone could have intercepted it, or it sits with the AI companies, who process it and use it for whatever they want. And unfortunately, today we often don't know what the retention policy is, how they train on the data, whether they anonymize it, what they use it for, or how long they keep it; we have no idea. There's also the ethical use of AI, which comes later. This is what we try to get ahead of and encourage organizations on, and like every other thing we've done in security, one of the mainstays is user awareness: train users on safe habits with AI. Yes, you probably want to be more efficient; you're drafting this email and
you want it to come out better. Fine, but anonymize the data first: make it "Dear Jane Doe" or "Dear John Doe"; where there's PCI or PII, take it out and put a placeholder in its place, so that your customers' data is not pasted into the AI platform.
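That redact-before-you-paste advice can be sketched in a few lines. Here is a hypothetical pre-paste scrubber; the patterns and placeholder values are illustrative only, nowhere near a complete PII catalogue:

```python
import re

# Illustrative patterns only; a real scrubber needs a far richer PII catalogue.
RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "jane.doe@example.com"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD-REMOVED>"),
    (re.compile(r"(?<!\d)(?:\+44|0)\d{10}\b"), "<PHONE-REMOVED>"),
]

def scrub(text: str) -> str:
    """Replace PII/PCI with neutral placeholders before pasting into an AI tool."""
    for pattern, placeholder in RULES:
        text = pattern.sub(placeholder, text)
    return text
```

The order matters a little (card numbers are matched before phone numbers so the longer digit runs are claimed first), and the payoff is exactly what the talk suggests: you still get your answer from the AI, but the customer's details never leave the building.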
Now, a critical gap regardless of maturity level: personal devices, bring-your-own-device, people working from home, and mobile hotspots bypass every on-premise or agent-dependent control. An employee working from home on a personal laptop is invisible to most of the above, even when connecting to work via VPN. There's also incognito mode: proxies and web content filtering cannot see what's going through it, so we need to be wary of how we handle that. Then there's the Purview browser extension; like I said, we're a Microsoft environment, so a lot of this happens in Purview. Chrome, Edge and Firefox extensions exist and they do detect paste events into AI sites, but they have to be properly configured and deployed, as I mentioned earlier; otherwise it's better not to have them in place at all. So, we're going to look at the three types of users:
the accidental, the unaware, and the one who actually wants to exfiltrate data. First, the efficient employee: pastes a client summary into ChatGPT to improve the phrasing, with no idea that what they're doing shouldn't be done. And like I said, if there's no policy in place stating how and when AI should be used, then of course it's open season.

Then there's the unaware insider, who regularly uses personal AI accounts for work and doesn't know the data may be retained or trained on. I'll be frank with you: what we've done at work is say no AI, and we blocked that traffic in our web filtering; but we can't do without Copilot, because we're a Microsoft environment, so Copilot is allowed for some of those tasks. Personally, though, I'm not a big fan of Copilot, maybe by way of the results or the output it gives me, so I'd rather go back to the ChatGPT subscription I pay for professionally. Why am I saying this? Because I fall within some of these categories myself: I don't like the output of one approved AI, so I use my own personal one. Being aware, though, what I then do is anonymize the data, remove the things I don't want out in the wild, and I still get my results at the end of the day.

And then there's the third person, the weaponized insider, who uses normal AI behavior patterns to deliberately cover whatever he or she is doing, knowing the organization is not aware of what's going on, and who can use that to exfiltrate data. These are the people we're supposed to be most worried about. As for the unaware person and the efficient employee, don't forget that some of the recent hacks have actually been built on users who were unaware. Think of MFA fatigue, for instance: we had a couple of hacks a few years ago, and I won't mention names, where people unknowingly handed their second-factor authentication to malicious actors. Or sometimes you go online and you see,
although some platforms are now getting ahead of it, that someone has uploaded code to places like GitHub and the passwords are right there in it. Very recently I saw one myself, and it came about by error. What happened was: there was an IP we used to use on one of our DNS records. We no longer owned that IP, and I think the cloud platform had assigned it to another customer, but we had a dangling DNS record, meaning one of our DNS entries was still pointing at that particular IP. So our tooling alerted us that there was confidential information associated with that IP on the internet. I picked up the ticket and started investigating, and found that we no longer owned the IP; it was only because one of our DNS records was still tied to it that we were alerted at all. But when I looked closer, I saw a complete backend database: IPs, passwords, usernames, everything. What I did was email the company, they're somewhere in Saudi Arabia, and ask them to have their security or IT person reach out to me. They never did, and there is really nothing more I can do.
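A dangling record like that is straightforward to hunt for if you keep an inventory of the IP ranges you actually own. Here's a hypothetical checker; the hostnames and CIDR ranges are made up for illustration, and in practice you'd resolve the records live and feed the results in:

```python
import ipaddress

def dangling_records(dns_records: dict[str, str], owned_cidrs: list[str]) -> list[str]:
    """Return hostnames whose A record points outside every IP range we own."""
    owned = [ipaddress.ip_network(c) for c in owned_cidrs]
    stale = []
    for host, ip in dns_records.items():
        addr = ipaddress.ip_address(ip)
        if not any(addr in net for net in owned):
            stale.append(host)  # points at an IP someone else may now hold
    return stale
```

The ownership comparison is the whole trick: a record that resolves fine but lands outside your ranges is exactly the kind of leftover that ends up serving someone else's backend, or leaking yours.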
Funnily enough, it looks like a high-end brand, somewhere in Saudi Arabia, but yeah, there's nothing we can do. What I'm saying is that you'd be shocked at some of the things that are still out there; people genuinely have no idea.

So, as I've said here, we just can't block it, because once you do that, you're forcing people to use alternative means of accessing AI, and that is not what you want. What we're actually encouraging is ethical use, which means stating categorically how you want your people to use AI; and of course, if it's a policy and it's in effect, then anyone who goes against it can be dealt with by management through the usual processes. Meanwhile, employees use personal hotspots on the commute, bypassing your proxy entirely. And, kindly speaking, I'm not sure my colleagues know about this, but funnily enough we use content filtering, and I think I once went through incognito mode to a website we were not supposed to reach, and I was able to use it; we need to be worried about that too. Push it underground and shadow AI usage increases, leaving you at a
disadvantage. The productivity gap widens because you've told them not to use AI; trust erodes and security becomes the department of "no"; and you still have no visibility, the behavior just moves somewhere else, most likely somewhere worse, because you can't see what goes on there.

So what we would encourage instead is to enable guardrails: approve an enterprise AI with login and DLP integration baked in. Like I said, we've approved the use of Copilot within our environment, whatever my personal feelings about it. Of course we have service-level agreements with the likes of Microsoft, and they'll tell you your data is secure; and we took the view that either way we're using their cloud, so our data is with them already. Although one challenge I have there is data sovereignty: as far as I know Microsoft doesn't have a data center for us in the United Kingdom, the closest I think is in Ireland, which still means that whatever data we're processing doesn't live anywhere in the United Kingdom. With respect to things like the GDPR and the Data Protection Act 2018, that data lives somewhere outside the United Kingdom. And if, let me not say when, but if they are required to release some of that data, say to the FBI or CIA, they might not really have the liberty to refuse. Data sovereignty is an entirely different subject, but those are things we need to be aware of.

Then there's the Purview extension, deployed and scoped to AI destinations. And talking of this: a lot of organizations, us included, use web filtering and content filtering, and what we've done, or what most do in that instance, is rely on, for
instance, vendors like Netskope and Zscaler to tell you "this is an AI website." But what if they have erroneously miscategorized a website? Your people would still have access to it despite the fact it's AI. Or what if the website is brand new and, for one reason or another, they haven't yet got round to categorizing it as artificial intelligence? Your people will still be able to access it.
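One mitigation for that miscategorization problem is to keep your own supplemental list of AI domains alongside the vendor's verdict, so a site the vendor hasn't caught up with is still flagged. A hypothetical sketch; the domains and category names here are made up:

```python
# Vendor category feed (e.g. pulled from your proxy/SWG) plus our own supplement.
VENDOR_CATEGORIES = {"chat.openai.com": "ai", "example-news.com": "news"}  # illustrative
IN_HOUSE_AI_LIST = {"brand-new-ai.example"}  # sites the vendor hasn't categorized yet

def is_ai_destination(domain: str) -> bool:
    """Flag a domain as AI if either the vendor or our own list says so."""
    return VENDOR_CATEGORIES.get(domain) == "ai" or domain in IN_HOUSE_AI_LIST
```

The design choice is simply "OR the two sources together": the vendor feed gives you breadth, and the in-house list is where your security operations people record what they've spotted before the vendor does.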
So what we eventually did might be a bit loose, but I think it was better than having no guardrails in place: we wrote our own regexes in Purview to say "this is what we're looking for." If something pasted contains, say, HTML or a div tag, we want to look at it and possibly block it, or in some instances not block but encrypt it if it's being mailed to someone outside the organization. That way, employees stay productive and on monitored platforms.
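That custom-pattern idea can be sketched in miniature. This is not Purview's rule syntax, just a hypothetical classifier showing the block / encrypt / allow decision on pasted content; the patterns and the internal domain are illustrative assumptions:

```python
import re

# Illustrative 'looks like markup or source code' patterns, not Purview syntax.
CODE_MARKERS = re.compile(
    r"</?(?:html|div|script)\b|def \w+\(|#include\s*<", re.IGNORECASE
)
# Anything addressed outside our (hypothetical) domain counts as external.
EXTERNAL_RECIPIENT = re.compile(r"@(?!ourcompany\.example)[\w-]+\.[\w.]+")

def classify_paste(text: str, recipient: str = "") -> str:
    """Decide what to do with content a user is pasting or mailing out."""
    if CODE_MARKERS.search(text):
        if recipient and EXTERNAL_RECIPIENT.search(recipient):
            return "encrypt"   # code leaving the organization: encrypt in transit
        return "block"         # code pasted into an AI site: block outright
    return "allow"
```

It is deliberately loose, exactly as described in the talk: a handful of regexes will never catch everything, but it is a guardrail you control rather than a category feed you merely trust.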
your employees are actually doing with respect to AI, and shadow AI motivation is reduced, because you've approved a tool, created policy, and got ahead of it.

So, what to do? I think this should be the first thing, and I don't think it should be negotiable: create a policy that defines exactly what management, what the SLT, expects of AI usage and how it should be used, at least ethically. Define what can and can't be pasted. Make it human-readable and role-specific; controls without policies are incomplete. Then configure AI categories in your proxy and web filtering. As much as we rely on these vendors to say "this website is AI, that website is AI," I think it's occasionally a good thing to have your security operations people go through what people are accessing and be able to say "we think this is AI" or "we think that's malicious." Don't rely completely on the vendors, and the same goes for your CASB. Then deploy enterprise AI with proper login. I know we have this
in places like Microsoft Copilot, where, as an administrator, you can choose to see what your people are accessing and what they're placing on the platform.

Honestly, treat AI as the adversary would. I say this all the time: as at press time, I think by 2030 we should have about 30-odd billion devices connected to the internet, and every one of them is another attack vector. This is an attack vector too; we might not be seeing it as one, but the truth is that more harm can come to the organization just through the use of AI. And the funny thing is, this is no longer people trying to infiltrate the organization: people sit back, and your data comes to them, just through AI platforms. So we need to be really, really wary; don't get it wrong, this is another attack vector, we just haven't seen it as one.

Now, the employee's reality: they have deadlines and pressure. Like I said, some want to summarize something or make it sound better; some have code, and I've seen cases where a stray comma or a full stop somewhere can break the code, and
you want to see what's going on. Then the task seems trivial, it's just an email, but you might forget to anonymize the data or remove what you don't want in the cloud or the public domain. No policy exists, so no red line has been crossed; again, the policy, the policy, the policy. Then the personal AI is better or faster than the approved tool. And nobody told them that pasting equals a governance event; of course, when there is a policy and you go against it, you're going against what management has asked you to do, and sometimes there are consequences for that.

So what security must do is provide an AI tool that is genuinely good enough to use. "Good" is relative: I've seen a lot of people say they prefer ChatGPT; some prefer Claude, or Gemini, or the newer ones like Kimi. A lot of AI tools are out there, and they're effective in different areas depending on what you're after: if you want images or text, ChatGPT is probably your tool; for slides, maybe Copilot or Kimi, and so on. So there is no one
tool that does everything, but the truth is you need to know where to draw the line; we cannot approve everything for everyone. And don't get me wrong: aside from Copilot, we could also have, say, an AI group, where people in that group are allowed access to the different artificial intelligence tools they want, but it's monitored; we know exactly who is in those groups, and we can pull anybody out if we notice they're not using it the way they should.

Then publish a clear, human-readable AI acceptable use policy or guidance. This is something most of us haven't really done; we haven't either. Maybe we'd say it comes under our acceptable use of IT policy, but it really doesn't. I think we need to get in front of this and say, "Okay, this is what we have observed over time, and this is what we think we should do." By doing so, you're probably saving yourself more stress than not having it in place; and besides, it doesn't really take much to publish a policy, it just takes someone to sit down and write the thing.

So, run targeted awareness, not just the generic
annual computer-based training. Like everything else in security, one of the things we must do is create awareness that these things are real ways organizations can get hurt. And make it easy to report accidental sharing without blame. Let me tell you a story: very recently my manager, the director of information security, clicked on a phishing link. Of course security would have seen it anyway, but he sent a mail to the whole organization to say "I clicked on a phishing link," so that everybody knows that even those of us in security are not supermen; we can fall for it too. What that does is make anyone who clicks a phishing link quick to say, "I did this," because they know that if even security can fall victim to phishing, then who can't? I think that's another way to encourage staff to report anything they feel they've shared accidentally; the malicious ones won't, because that's their plan, but if we can help the unaware people, so much the better for us.

And treat first violations as learning events, not disciplinary ones. We use automated tools in the office where, if you click a phishing link the first time, nothing happens; the second time, you get a warning; and the third time, you're enrolled in phishing awareness training. That's all we're saying here: the first violation shouldn't be punished. Don't make people feel bad; some of them are not as tech-savvy as we are, they're just trying to get through their day, and punishing them for trying to be productive is not productive in any way either.

So, that's it: the quietest data breach you'll ever have. No alerts, no
malware, no incident response, just data living, right? And that's it for me. So the question isn't are I using AI, it is does your control cover what's being shared, right? And that's it for me. Thank you. ANY QUESTIONS? >> OKAY. >> I love your presentation. Thank you very much. Um, it kind of covers from my perspective the kind of exfiltration you get with pasting data in prompting with AI. That's kind of 25% of the whole landscape and I just wondered if you had any thoughts on things like um exfiltrating data by retrieval generation in MCP type contexts and with agentic AI because those are all AI landscapes where you can excfiltrate data to the wrong people. So I just
wondered if you have any thoughts on that. >> No. >> Okay, don't mind me, I'm just joking. The truth is, I think it goes back to what we've always said about policies, right? If you know what your people are doing, you can then decide, oh, I think this is the direction we need to travel, or this is where we need to be, right? Okay, let's talk about the use of things like Codex, because I think I mentioned it a couple of times, right? I know a bit about coding because I write Python as well, and just to help me, sometimes I have to plug code that comes from my ChatGPT directly into my VS Code. But the truth is that I don't know where that data is going. Is it just reading it and correcting it instantly, or does it send my data somewhere to analyze the code and then give me back the response that I need? So this is what I don't know, really. For my personal data I feel there's nothing wrong with it going out, it's personal; but if it's proprietary, something that belongs to the organization, then the organization needs to have answered the question that you
asked, or that I asked: how is this data being processed? They probably have all the resources to find that out. But yeah, a lot of what I said talked about pasting data and things like that. Like I said, AI is a hot-button topic. You can hardly go into a room now where you don't talk about AI; it's a conversation starter as well. I don't know where it's all going to end, but let's just see how it goes. >> Really good. Yeah, fantastic presentation. I think the cultural bit, about culture and education: I think a lot of the time users don't understand that the data they've got access to is proprietary, or whatever. This education piece, as you say, the accidental user isn't aware of what this data is. So few of the users actually understand it, and now AI is something else on top of that they don't comprehend, unfortunately, even when you explain it to them. So I think the governance and the controls you can put in are becoming just as important, even though I'm a big believer in culture. >> Yeah, you're absolutely correct, because I have a colleague who recently joined an organization, and as you mentioned, it's a really old one, I don't want to name names. In fact, he is the first person working there as a cyber security analyst; he was recruited as data protection officer and things
like that. Why am I saying that? So I had several conversations with him and I told him, you're going to make a lot of enemies, right, because these people are like dinosaurs, right? They are used to doing things a certain way, right? Now you're going to go there and say, "Oh, I'm going to enforce this, I'm going to enforce that." A lot of them probably might be looking for ways to get you out of the organization, because to them you're going to make their work a lot more difficult, right? I've mentioned here how we implemented Purview, and something that was supposed to check what you paste in the browser held things up for a couple of seconds, and everybody talked about productivity, how it was affecting their productivity and things like that. So there are people who are averse to change. They don't want anything to change; they want things to remain the way they are, right? And I have this conversation with my manager where I tell him that the truth is that the ball falls back to you, right? If anything happens, you're the one who is going to bear the brunt of the whole thing, right? So I think, make enemies if you have to. You sort of use the carrot and the stick, you know. So
you go to them and explain, oh, this is something I think we should do. You be nice, talk to them and things like that, but when they don't want to, you probably would have to enforce it, right? And like I said, we spent about a year collecting data and trying to make DLP work. So he initially said, "Oh, turn this off for now. You're not turning off everything, but turn this off for now, and then we'll work on that and get back to it." So yeah, you're right about the cultural change. In fact, the truth is that awareness is almost everything there is to it, right? There are people who would still do it anyway, but at least the number reduces, just like the same thing we're always doing with phishing: now there are simulated phishing attacks and things like that. You can also use that in terms of training people on how to use AI, ethical ways to use AI, and things like that. So yeah, awareness is part of it.
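[Editor's note] The graduated response described in the talk (nothing on the first click, a warning on the second, awareness training on the third) can be sketched roughly as follows. This is a minimal illustration, not any vendor's tooling; the tier names and the three-strike threshold are assumptions for the example.

```python
# Sketch of a graduated response to simulated-phishing clicks:
# first click is only logged, second triggers a warning,
# third (and beyond) enrols the user in awareness training.
from collections import defaultdict

# Response tiers are illustrative, not a real product's policy names.
RESPONSES = {1: "log_only", 2: "warning_email", 3: "enrol_training"}

class ClickTracker:
    def __init__(self):
        # Per-user count of clicks on simulated phishing links.
        self.clicks = defaultdict(int)

    def record_click(self, user: str) -> str:
        """Record one simulated-phishing click and return the action taken."""
        self.clicks[user] += 1
        # Cap at the highest tier: repeat offenders keep getting training,
        # never escalating into punishment for the first mistake.
        tier = min(self.clicks[user], max(RESPONSES))
        return RESPONSES[tier]

tracker = ClickTracker()
print(tracker.record_click("alice"))  # first click  -> log_only
print(tracker.record_click("alice"))  # second click -> warning_email
print(tracker.record_click("alice"))  # third click  -> enrol_training
```

The same shape works for AI-usage awareness: swap the simulated phishing click for a detected paste of sensitive data into an AI prompt, and the first "violation" stays a learning event rather than a disciplinary one.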