← All talks

Responsible AI Use: What Security Teams Need to Know and Do

BSides Edmonton 2025 · 44:07 · 10 views · Published 2025-10 · Watch on YouTube ↗
About this talk
BSides Edmonton 2025. This video was captured using a locked-down, unmanned camera. As a result, there may be moments when speakers are not fully in the camera shot. Additionally, the audio quality captured by the podium microphone depends on the speaker's proximity to the mic, so variations in audio clarity may occur if the speaker moves away from the microphone during their presentation. We appreciate your understanding of these technical aspects.

Responsible AI Use: What Security Teams Need to Know and Do, by Saif Azwar

As artificial intelligence becomes part of everyday work, it brings new opportunities and new risks. From accidental data leaks to unpredictable behaviour, organizations are discovering that using AI responsibly requires more than just good intentions. This talk is designed to help security and IT teams understand what responsible AI use really means. We'll explore common risks, simple ways to think about them, and practical steps your team can take to reduce harm while supporting innovation. You'll walk away with a clear understanding of why AI governance matters, what an AI usage policy should cover, and how to start building safer AI practices within your organization, no technical background required.
Transcript [en]

[Music] Okay, everyone. Thank you so much for coming. I didn't know we were going to have this many in the audience, knowing everyone has probably been in an AI presentation before and thinks it's just a buzzword now, or white noise. So I promise I'm going to give a different perspective, beyond data leaks or data breaches, and at the same time I'm going to try to keep it non-technical and make it fast and easy, so we consume the information together. Before I start: I know everyone in this room is probably tired by now of the word AI. AI-powered, AI-driven, AI-enabled, machine learning, it's everywhere. Every vendor you talk to, every article you read, they are talking about AI.

So I want to get certain things out of the way before we begin. I'll try as much as possible to avoid the usual advice you read about: don't put sensitive information into ChatGPT or Copilot; don't trust the output blindly, because hallucination is bad. That we already know. In fact, if you go to Gemini today and search, as I did, for who Saif is, it told me I'm a PhD candidate at the University of Alberta specializing in machine learning and leading this charge in Alberta. I was surprised: what are you talking about? I have nothing to do with the U of A. I have nothing to do with machine learning. But that tells you how these tools should be looked at when you use them. Don't trust everything blindly.

Today we're going to talk about different risks we need to worry about, some of the chaos we are actually dealing with when it comes to AI deployments and implementation, and how irresponsible AI deployment affects everyone. It's not only affecting your organization; it affects other people and society in general. And every technology or innovation we've been through has followed roughly the same playbook when it comes to securing it, governance, and risk management.

Speaking of technology evolution: back in the 2010s, cloud was everything. Everyone needed to move to the cloud. This is the same message we received from everyone: it's the safest, it's the fastest, it has unlimited capabilities. And there were security pain points for practitioners like us: how to secure it, how to do identity and access management, how to protect the data, shared responsibility. Then in 2015 we started talking about machine learning, and all the vendors, articles, and blogs were talking about analytics everywhere; that also came with its own pain points for the security team. In 2017 it was IoT: let's plug everything in, the fridges, the microwaves, let's be online. That was another massive pain point, another stress, for security practitioners, because we were now dealing with a different attack vector. And now AI, from the 2020s all the way until now, is the same thing. Eventually, in 2030, it will probably be quantum computing. We keep going through this cycle, and it's us here in this room who are stressed and overwhelmed with the lack of resources

we have, dealing with all those risks as new technology keeps getting added. Don't get me wrong, technology is amazing. It's great. But there are challenges with it, and we are trying to secure it as much as possible.

So, speaking about AI: not all AI is the same. There are different levels of AI we need to recognize. One is assistive AI, which we saw in the beginning, like grammar checkers that help you adjust or rewrite a small phrase. Then predictive AI, which is used for scoring systems, for example. Then the big boom was generative AI, where we started writing all our emails and reports and analyzing documents and files with tools like ChatGPT and Copilot. Which is amazing; I use it myself, and today I've probably started using it instead of Google. Rather than going to Google and searching for something, I go to ChatGPT and say, tell me about this. That's the level we are currently at, and there is nothing wrong with using those tools or technologies. As a matter of fact, some people claim it's making us dumber, or that using ChatGPT is a sign you're not smart enough. But I could make the same claim by saying an accountant shouldn't use Excel sheets; they should do all the calculations manually, so why do they use the formulas? These tools get used, and they're really helpful. I was in a conversation on Friday about students now using ChatGPT to submit research and reports and all that. Is that good or bad? Well, I'm not an instructor, I'm not a professor, so I don't know the implications, but I think it's good. Why shouldn't they use it to do things smarter? But maybe now, rather than deducting one mark or one point when I grade their assignment, I should deduct double, because now you have AI and your job is to verify the output you are receiving, yet you handed it off without reviewing. So maybe we start looking at this from a different perspective.

And finally, decision automation. This is agentic AI: a system that looks at inputs from different systems and takes a decision. This is the level we always thought, when we were using ChatGPT or generative AI, that we would never reach. But now we are reaching it. In fact, as security practitioners, from some of the vendors you're probably going to see today here at BSides, at the conference, there are already uses of agentic AI that take decisions on your behalf. There are solutions today that can scan your environment for vulnerabilities, open a change request, go and apply a fix, maybe in a test environment, see how it goes, and report back to you. Another one I see a lot, and that we use too, is automated incident response. Rather than sending me the alerts from an MDR or EDR: take an action. Go and contain that endpoint. Go and reset the password. Yes, there could be some false positives here and there, and there is a risk with that. But this is already happening, especially with the amount of alerts we receive every day from

different tools. We need help, because we have limited resources. I bet if you asked each one of us today what we want for our security budget, we would say more people, or more budget to implement more automation, because we are overwhelmed. Another one is custom red teaming at scale: tools that scan your environment, build customized social engineering or phishing campaigns, and run exercises tailored to your environment, not just guesswork. Finally, another example is automated identity and access management cleanup. This is very important, and we're also seeing it in action on many platforms. So rather than having a full environment of 50,000 or 100,000 documents where nobody knows who has access to them, and no time to go and check who has access to files that could contain certain information, why not use AI and machine learning tools based on user behaviour and say: if this file hasn't been accessed in over six months, remove the access. If you know this person has never dealt with this financial information in email or Teams or other platforms, don't give them access in the first place; no need to inherit access. Those are some of the things happening with agentic AI. And it's going to do more on the business side, for HR, for example, hiring people; they're going to use systems that do interviews and vet people, and so on.

What used to happen, from a human-process perspective, is that we approved those actions. We didn't use tools to do approvals. We looked at the outcome, looked at the risk, and said, okay, I'm going to approve this, I'm going to document it, and we did it that way. But that process is slow; we are very slow with it. Nothing wrong with that, because we need to review things properly. But at the same time, competitors are moving faster than us, and business users are demanding access to new tools. That's really affecting our daily workflow, and AI is doing it faster than us. I always say: if someone asked me what the beauty of AI is, I'd say it's fast. And what is the risk of AI? Also that it's fast, because I'm unable to catch up with the consequences those platforms can produce. So we start seeing AI bypass those processes. It starts executing tasks on its own, and if you upload documents to ChatGPT or Copilot with prompts embedded in them, it may execute those prompts on your behalf and do bulk tasks. This is risky too. A user putting information into AI could replicate it somewhere else, and now you have a data leak or a data breach.

And what we thought, maybe up until two years ago, was that all we need is an AI policy. Just write a policy and you're good. Okay, so how do we write an AI policy? Well, we use AI to generate the AI policy. That has really become the pattern: we go to AI, we draft something, and we claim that yeah, I wrote this, I vetted this, it's all good. You send it

to someone, they make some adjustments, maybe duplicate the effort and create a new policy, and suddenly you see that drift accelerate in your organization. Rather than having one document, one policy, now you have ten documents: one for HR, one for legal, one for IT, and so on. And that increases the risk: rather than protecting one document, I now need to protect hundreds of documents in the environment. And we always assume that yes, the AI was correct, this is the policy, let's go with it. It's done within one week, and we are proud that we built an AI policy in one week. But this is not enough. There are other things that need to be done before you can consider yourself to be using AI responsibly.

First, though, we need to understand where AI is today. We're starting to see AI everywhere. It's integrated within your operating system; whether you're using macOS, Windows, Android, or iPhone, those capabilities come built in, and if you enable them, they will ask for access to your data, your photos, everything, so they can help you. But at the same time, this is unneeded risk, because now you start leaking information without your knowledge.

How many of us actually go and check that a solution is not sharing our data externally? Maybe a few. We assume the manufacturer or vendor knows what they are doing, and we just follow their advice. So we start using it without noticing; it's everywhere: in Microsoft Word documents, in emails, in Teams. The button is there, just click and I will help you right away. And that starts creating cascade failures, because once you have AI in the process, if it breaks, it breaks many other things. If the HR department uses AI mainly for hiring, and this is the only process they have today, and it fails, they can't continue hiring. Suddenly departments get upset: hey, I need resources, I need people, I have a project, I have a deadline to meet. The same goes for other industries.

It's also creating invisible data movement. I'm unable to track where the data is going, and once it's gone, it's gone; I cannot retrieve it. Certainly when users try platforms on a trial basis: they upload documents, use it for a week, decide they don't like it, and they don't clean up. They don't go and say, close my account, or give me a certificate of data decommissioning or data hygiene. No, the data is there, and it will be used however it will be used. If that company is acquired by another provider, they're going to use your data. It gets replicated day after day, so the leaks travel faster. You might provide financial data, if you're an accountant, for example, and suddenly you see that data surface again somewhere else, six months from now, for another user.

So we need to start building in new security controls. Or not new security controls; I would say the same security controls we've always applied, moved to the AI systems and treated the same way. That means monitoring how the platform is used, who has access to it, and why they need access. Not with the idea of preventing people from doing their work or being productive, no; but we need to know where the data is going. The minute I give you access to a platform, I need to know whether you're using it responsibly, because, again, if you remember, we said the riskiest thing about AI is the speed. Data leaks much faster now through those platforms.

Classify the data in your environment. There are tools and platforms today that can help you go through documents, files, and databases and flag them as restricted because they contain personal information, health information, credit cards, or secrets. Classify those, and enforce that AI cannot access that restricted information. That reduces the potential for breaches of that sensitive data in your environment. And always update your acceptable use policy, not only for the generative AI we have today. We focus only on ChatGPT and Microsoft Copilot; no, we need to go beyond that. Other systems are going to be used, and I'll show some examples.

So, the real threats around AI that we are trying to

stop. One of them is decision drift: a change of behaviour in one of the prompts that cascades down the line. One example is HR using an AI platform to hire people, and suddenly, because of a prompt entered by a user, it starts excluding certain skills or responsibilities, and now you end up with the wrong candidate for the job.

Another is the bias loop. However you influence the AI, it will learn that and keep using it. The minute AI gives you a bad answer and you agree with it or you like it, it says, oh, they liked that, I'll just repeat it and build on top of it, and now you see it biased in a certain direction. You mainly see this with chatbots on websites that work from outdated information, or where a customer service agent gave it certain prompts and now it no longer serves the purpose; it doesn't give proper customer service to the site visitors.

Automation chain reaction is another one: one bad output can trigger many other issues down the road. I'll give you an example from an insurance company; it's something I discussed with one of the organizations, and they are having this issue today. Basically, if AI misreads certain data, it builds decisions on that data and then starts taking actions, and those actions are not needed. It's risky, but it's done, because you built the logic for it.

Another one is prompt poisoning. I've tried this in the past: take a document with prompts inserted into it, ask the generative AI to process the document and carry out any actions needed within it, and it starts taking actions based on that. You can imagine threat actors uploading malicious documents, or guides for system admins, to a blog. You download the document, and it has a malicious prompt embedded. The system admin just says, okay, this is 100 pages, I'm not going to read it; ChatGPT, can you summarize what I should do and maybe give me an action plan? And it starts doing exactly that. Did this exist as a threat before? Yes; there were documents out there carrying that kind of risk. But because of AI, it moves faster. That's the challenge.

AI-poisoned data. This is really important, and we're going to see more of it in the future. We have limited manpower, and we rely on solutions like managed detection and response and EDRs to automate some of our work, which is great; it really helps. But what happens if threat actors start using AI to generate content and build websites really fast? Today I can build a website using AI: build me HTML with some bad data, bad reviews about a certain company, and start spinning up those sites. Imagine that your MDR service or managed security operations now starts sending you emails and alerts: hey, there is a data leak on the dark web saying that

you are compromised. It's us who now have to go and vet those alerts. Maybe agentic AI will help us with that in the future, but we're not there yet, and this will overwhelm us beyond our capacity. Rather than dealing with ten tickets, we'll be dealing with fifty tickets to vet one by one. That's a challenge. The other angle is that they will put false information about your business out there to harm your reputation and brand. And guess what: today, if I go to ChatGPT and say, give me a review of this company, should I buy from them or not, and a malicious actor has already poisoned the LLMs with that information, I'll decide not to go with that company, because I don't have time to vet their website or talk to their customer service to check whether it's actually true. We're going to see more and more of that down the road.

And finally, deepfakes. Deepfakes are a really strong threat to many of us. There's the social media side, where we're trying to watch what a political candidate is saying, or someone talking about their stocks; that's one, but that's really about interacting with the community. The bigger problem is when I'm using technology to interview people, and those people are malicious and using a deepfake to trick me into believing they are a certain candidate or persona. I'm unable to detect that, and I'll be fooled into hiring someone with a fraudulent identity. I'm not sure if you remember what happened with KnowBe4 last year, in July, when they hired someone from North Korea who used a deepfake-generated video to claim he was a certain person, in Texas I think, or somewhere else. He was hired, they sent him a laptop, they sent him credentials, and he was able to log in and take certain actions. Then they detected it. This person went through four interviews without being caught. So imagine the level of threat this brings. If someone reaches out to you, say you're working with a vendor or consultant, and that consultant is a threat actor using my face or my voice, because they can get my voice from conferences like this, or get your voice by tapping certain phone calls, they can start giving you bad advice: oh, I don't think MFA is good, don't worry about it, it's just nonsense, white noise, disable it, you'll be fine. This is a real threat.

Now, there's another risk, which is shadow AI. You've heard the term shadow IT in the past: employees trying to be productive, trying to be good to the company, start using tools or subscriptions, especially task-based subscriptions, for their task management or to collaborate with their team members, whatever the case is. Now we're seeing shadow AI, because people want to move fast. They don't want to wait for you to give them a policy on what can and cannot be used. Maybe they don't

like your policy. I don't like you saying I can only use Microsoft Copilot; I want to use ChatGPT, everyone is using ChatGPT. So what happens? It's only 30 bucks, or there are even free versions. I'm just going to go and use it, and you won't know. I'll go home, upload the document, summarize the report, build the report, come to work, submit my report within five minutes, and I'm done. This is really risky, and we don't know what we don't know. There is a visibility gap, especially if they're working from home, or using ChatGPT on their personal phone. So there is a data exposure risk, and it also creates a compliance blind spot, because you can tell your auditor, I'm good, we are compliant, we only use Copilot, we have a policy about that. But actually the users are using different tools, uploading your documents somewhere else, and it will reveal itself in a data breach report somewhere, and then the auditor or the cyber insurance company comes back and says, well, I don't think you have a policy; your policy is not comprehensive, it's not covering everything. That's the challenge. And again, remember: the users are not trying to do something malicious. They are trying to help; they are trying to be productive. But they are moving faster than you, faster than us.

So really, the solution is for us to move faster too. Streamline the approval process: a simple, fast process based on risk. If this is a really risky platform, or it needs our data to be uploaded, then wait, we need to evaluate it. If there's no need for our data to be uploaded, say you're generating images on an AI platform, go and use it, as long as we approve those images before you put them in a marketing campaign or something like that. And provide clear guidance: build a catalog of applications that are already approved. Microsoft Copilot, ChatGPT; I'm just throwing out names, not promoting any specific tool. Build a list and tell the users: hey, we have a list of tools available for you to use. We've already vetted them, we know these tools, and we have restrictions or safeguards around them. Use those rather than something else. And classify the data before using AI; this is very important. There are ways to block AI from accessing files with certain information, like PII or PHI or PCI credit card data, for example, and that will make compliance easier once you've done those steps. So remember:

make the right path easier than the risky one. That's what we are trying to do.

Now, the other problem we have is rushed deployments. Many businesses, many executive-level members, are trying to be competitive: we need to use AI, we need to promote on our website that we are an AI-powered company that uses AI to help the community and whatnot. But what's really happening is that because they're trying to move so fast, they rush the deployment without building proper guardrails or safeguards. What that does is, once you have a bad deployment, people lose trust in your platform because it produces bad outputs or doesn't provide a proper service. And once that happens, you lose the trust of the customer, the client, the people you are helping. Usage of the platform drops a lot, and people may stop visiting your platform or website because they think, yeah, they made everything AI and now I can't use it.

One example: call some of the call centers today, go through the tell-me-how-I-can-help-you flow, give some information, try to provide context; it doesn't matter. It just tells you, we cannot help you, or press one for that. It's predefined answers; it's not following any logic. After one minute you hang up and say, you know what, I'm just going to drive to that bank and deal with them there. And if it's a travel agency, maybe you don't even have that option; I can't walk into Expedia or WestJet and deal with it in person. So I lose trust in that system, move somewhere else, and take my business there.

Once that happens, it's also very difficult to repair, because you spent time building the system, and you're probably committed to a long-term contract with the vendor, because the vendor told you: use our platform, the machine learning gets better if you use it for three years and train it on your environment. And now you're locked in for three years. The hidden cost is going to be really high, without you noticing it.

So we really need to break that cycle. We need to sit down with the business, with the executives, and say: hey, we know you want to use AI. We want to use AI too. It helps us with business automation, it helps reduce some of our stress, but it needs to be done in the right way, rather than as a rushed deployment just so you can have a label in your marketing campaign saying you are AI-empowered.

To tell you a story: I went to the RSA conference this year, and I was looking for a GRC platform, governance, risk, and compliance, something with really automated auditing capabilities. I was walking past the booths and saw a vendor advertising an AI-powered GRC platform. So I stopped and said, okay, can you tell me about it? How is it going to help me? I'm kind of overwhelmed; I want to automate my work, all that stuff. I'm

using a platform, but I wanted something better. And the guy there said: sorry, I don't know how it's AI-powered. It's just the marketing team and the service delivery team; they thought it would be nice to have, because everyone is saying it. But if you give me your email, maybe in a few months I can follow up and give you all the features with the AI. I said, no, thank you. I'm done; I'm not going to deal with you guys; I'm moving to another booth. So you see, not all vendors are like that; some vendors are really using AI for good things, but there are some people who are trying to catch the wave. They want to use the buzzwords just to attract people like me; I went to the booth the moment I saw AI-powered. So that's another risk.

But all of this so far is about your own organization. When we talk about AI risk, we usually worry about our organization: how we're going to protect it, how we're going to protect our data, all that. But it's really more interconnected than that. As an example, take an insurance company. If they decide to use an AI tool as their underwriter, and that tool starts using data sets that weren't vetted by the claims department and starts building decisions that this person is high-risk or this deal is high-risk, the premium goes up, and that affects my livelihood. I need insurance for my car; I need insurance for my home. But now, because of your irresponsible deployment, or because you didn't account for everything and didn't feed proper data to your AI engine, you're affecting me. Same for banking: they'll use it for approving loans or mortgages; same thing, it affects me. Same for healthcare: doctors are starting to use AI tools to summarize their reports, and we were speaking about that today. They'll build treatment plans based on that, so now you're affecting my health outcomes because you're using some tool from somewhere. I know doctors used dictaphones to record voice notes and then go home and type up the reports based on them. But now there are tools that say: just give me the audio file, I'll do it for you, you go enjoy your life. And we don't know how that gets captured. I'm not saying the doctors aren't doing their due diligence; no, they'll go and review it. But once they do that, they'll have more time to see more patients, which means less time to verify the AI outputs. It's a cycle, as you can see.

Retail and marketing are the same; we spoke about that example with the AI-driven chat. Legal and compliance, the same thing: maybe they missed something in a contract review and now it's causing issues. Education could be another one, if you're using an AI tool that tells people: I will select the career path for you; it's overwhelming, let me decide what you like. It starts asking questions: do you like this, do you like that, do you want to be technical, managerial,

leadership? And now if there is a bad data set or bad logic on the back end, you could set me a different career path that maybe I'm not going to enjoy down the road. Uh public sector is the same. So what I'm trying to say here, everything is connected. Like that's why we need all together to work to say let's make sure we have responsible AI deployments across all organizations because I could affect you, you could affect me and so on so forth. um surviving the AI vendor swarm. So all the vendors claiming they have AI like I mentioned in that GRC platform, you really need to sit down with the with the vendor and say show me exactly how

you are using AI. What is the logic behind it? Show me a proven way, with a PoC or something, that you're actually solving a problem rather than automating more alerts sent to me that I need to deal with overnight. The solution there: a single intake path, a risk-based approach where you evaluate things based on risk, and an intentional strategy where you're trying to help rather than just get work done any which way. One more thing: regulations are catching up. You now see regulations everywhere saying you shouldn't do this, you should do that, we should review your work before you do it. Cyber insurance companies are now asking whether you're using AI and how you're using it. Financial auditors are asking: is AI touching your financial data or databases? Show us proof that you're doing things correctly. And it's us who are going to be overwhelmed with providing those proofs and evidence; we're already dealing with that, we don't want to deal with more. So: proof before you deploy. That's especially coming for critical infrastructure and other sectors that could affect the livelihoods, safety, and security of people. Show me how you're using your AI before you even deploy it. If you deploy it before that,

you're going to have a penalty or something like that. And document everything. Sorry, I'm rushing because I know we have about 10 minutes left. Every time there's a PoC or something, try to document it before you deploy it. That's the way to go. And speaking of AI regulations, there are many today. This is based on a project I'm working on with some students from the U of A to gather all the regulations across AI, data privacy, and financial, and Fed is there, he's helping me. Fed Salam, thank you. With this project we're trying to document all those regulations, map them, and see what could benefit us as an organization: is the US regulation going to be better, or the Canadian, or the EU, and so on? We're going to publish that as a community project with a website, and I welcome anyone who'd like to help start making things easier and build a common understanding of it. So, why should security lead this charge? Because you see the whole picture. That's why you need a seat at the table with the business, with IT, legal, privacy, and HR, and say: we want to have

a say in how we deploy AI. That's the way it should be done, because it's about trust, not only about the technology you're deploying. You don't need to be an AI expert; you just need logic: what could go wrong if I use AI, what type of data is it trying to access, and how can I secure that? That's all we need to know, and you have access to that as well. Your job is going to move from a doer, I need to reset the password, I need to contain a machine, because that's going to be automated, especially the tier-one capabilities, to a verifier: you need to start verifying how AI is taking actions in our environment. You need to be the approver, approving that whatever AI did is correct. You need to be an auditor, auditing the outcomes: out of 10 generative outputs from AI, are all of them correct, or what is the sampling ratio of wrong outputs? And governing; you need to govern. That's why I'm asking you to be aggressive about having a seat at the table within your organization. Even if you're just listening, that helps you go and prove to them: hey, I listened, we're doing this, I did research using

ChatGPT, and these are the risks we have. And communicator: you need to talk to the business in this language. Yes, I know you want to say "AI-powered" on our website, but this is the risk, these are the challenges, we don't have the data, and AI needs data to build that decision and generate an outcome. So basically, choose a framework. The NIST AI RMF, the AI Risk Management Framework, has a really nice way of doing governance, identification, and protection. Use that, or select any other framework you think is good, and lay the groundwork from now, and do data classification. Do identity and access control properly, because shadow AI could be SaaS, it could be something else. You need to make sure those AI agents, or agentic AI, don't have access to other environments within your organization. Put a fence around it so it doesn't deviate and start discovering other things. The other thing is usage guidelines and training. Do a monthly refresher; it's no longer enough to do an annual training about AI. Educate people on how to use prompts, that's very important. Make sure they use a proper prompt without data leakage: what they can include in

the prompts and so on, and how to sanitize a document before uploading it, if you need that. Do rapid risk assessments: have a schedule where if it's low risk, we approve; if it's high risk, give us 48 hours and we'll get back to you. Try to be faster with them so you can approve things and don't push people into shadow AI. And a sandbox environment, even if it's just a simple test ChatGPT account where you can train users on how to use prompts, could be worth it before they start uploading real documents or real files. Of course, there may be a need for dedicated AI infrastructure, but even with those tools, make sure they don't use our data for training or learning, ChatGPT included. So, how can we start next week? Basically: discover the current AI use. Have a seat at the table and ask, what are we using today? It's been five years since the AI term took off; are we doing anything? Do a survey with all the business units: do you use AI, or do you use a vendor that claims they have AI? Let's talk. Let's build an AI register and use that. And publish a one-page guardrail guide. Basically, that guardrail is going to tell people: this is

absolutely not allowed, this type of information is okay, and this is encouraged, use that. And teach prompt hygiene. Those are the things we can start working on, with others and on our own, as we do our jobs as security practitioners. Here are some helpful resources. The NIST AI RMF: I encourage everyone to go read it, and read the playbook. It's really good, especially if you're using the NIST CSF, the Cybersecurity Framework, for your risk management or your security program; it integrates into it with some playbooks. If you're into threat and risk data analysis, these sites are really nice to go through. The AI Incident Database gives you all the cases where AI failed us and caused issues, whether autonomous cars or something else. It's really interesting to see how certain AI logic, or companies, are failing and causing health and safety issues. The OWASP Top 10, if you're a developer and want to know what types of issues or vulnerabilities can be introduced in LLMs and generative AI. And finally, MITRE ATLAS, a companion to the MITRE ATT&CK framework; it covers the tactics, techniques, and procedures threat actors use to target AI platforms. And that's all I wanted to share. [Applause]
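The prompt-hygiene and guardrail advice in the talk is process, not product, but a minimal sketch helps make it concrete. Here is one possible first pass in Python; the regex patterns, blocked-term list, and function names are all hypothetical illustrations, not anything the speaker prescribed:

```python
import re

# Illustrative patterns for common sensitive data. A real deployment would
# rely on DLP tooling or a classification service, not ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SIN":   re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{3}\b"),  # Canadian SIN-style
}

# A guardrail list of "absolutely not allowed" markers, per the one-page guide idea.
BLOCKED_TERMS = {"confidential", "internal only"}

def sanitize_prompt(text: str) -> str:
    """Redact sensitive-looking tokens before a prompt leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def check_guardrail(text: str) -> bool:
    """Return False if the prompt contains absolutely-not-allowed markers."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

A sanitizer like this would sit on the single intake path the talk recommends, with anything that fails `check_guardrail` routed to the rapid risk-assessment queue instead of being sent to the model.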