
Staying Compliant in the Age of AI

BSides Tampa · 2026 · 41:01 · 50 views · Published 2026-03 · Watch on YouTube ↗
About this talk
Samantha Ramos explores the emerging regulatory landscape governing AI systems across global, federal, and state levels—from the EU AI Act to Colorado's algorithmic discrimination protections and California's transparency requirements. The talk examines practical challenges organizations face integrating AI compliance into existing governance programs, drawing on real-world cases like Samsung's 2023 data exposure incident, and introduces frameworks like NIST AI RMF and ISO standards to manage AI risks proactively.
Original YouTube description
Staying Compliant in the Age of AI, by Samantha Ramos. As artificial intelligence (AI) continues to rapidly evolve and permeate various industries, businesses face increasing pressure to ensure that their AI systems are developed and deployed in compliance with global, federal, and state regulations. "Staying Compliant in the Age of AI" will provide a comprehensive overview of AI compliance, offering insights into the legal and regulatory frameworks that govern AI usage and how organizations can stay ahead of the curve to avoid costly mistakes and reputational damage. The presentation will begin with a compelling case study that highlights the real-world implications of failing to address AI compliance. This example will demonstrate the challenges businesses face when navigating the complex landscape of AI regulations and emphasize the critical importance of proactive compliance strategies. The discussion will then move to an exploration of the various AI regulations and frameworks that are emerging on multiple levels, from global standards such as the European Union’s AI Act to federal guidelines and state-specific regulations. Understanding these frameworks is essential for organizations to recognize the scope of compliance requirements they must adhere to, as well as the evolving nature of these rules. However, the introduction of new AI regulations presents challenges for businesses that are already managing existing compliance programs. The session will address the difficulties organizations face when integrating AI compliance into their current systems and the risks associated with non-compliance. This section will provide practical insights on how to align AI compliance with existing policies, ensuring that organizations can mitigate these risks effectively.
To equip businesses with the tools necessary for maintaining compliance, the presentation will also cover resources and activities that can be leveraged to make AI compliance a seamless part of any organization’s operations. Attendees will learn about available compliance frameworks, tools, training programs, and industry best practices that can be integrated into their current systems. Emphasis will be placed on continuous learning and adapting to the rapidly changing regulatory environment. By attending this session, participants will gain a thorough understanding of the current trends in AI security and compliance, as well as actionable insights on how to stay compliant with evolving frameworks. Whether your organization is just starting to explore AI or is already deeply embedded in its development, this presentation will provide valuable guidance on navigating the complex world of AI compliance, ensuring a secure and legally sound future for your business.
Transcript [en]

Okay, awesome. Well, thank you guys. This is a really good turnout for a Saturday morning. You guys wanted to learn about staying compliant in the age of AI. So, thank you Gina for the introduction, and thank you to BSides for having me once again. I am the co-founder of Remediate Cyber Solutions. We are a small consulting company, and we help small to medium-sized businesses pass their SOC 2 and stay compliant with the other regulations and frameworks they're subject to, as well as help build out their cyber security programs. I'm also the founder of Tampa Bay Techies. We have a booth downstairs in the expo hall if you're

interested in joining. All of our events are free and open to the public, and we have a couple of events a month. Some of our leadership team members are down there, so please stop by and talk to them. I have an MS in Information Technology Management from Western Governors University and a Bachelor of Science from the University of Tampa in criminology and criminal justice, with a minor in cyber security. I've been in the field for about six years, and my entire experience has been learning and navigating the different complexities of governance, risk, and compliance. That entails, once again, helping people pass their SOC 2 audits. I have experience with ISO as

well as HITRUST, and also basically just doing all the unsexy cyber security stuff that most people don't want to do. So today we're going to talk about the current state of AI regulations. We're going to go over some of the global, federal, and state levels, and also the evolving nature of the current state of AI regulations. I think this is a really important area to capture because everything is changing so fast, because of the new technologies and the new risks that are coming along with AI. We're also going to go into integrating AI compliance into your current framework or structure, as well as building on top of what you

currently have, and why this is so important in today's world. We're going to go over some of the integration difficulties as well as some implementation guidance, and then we'll leave time for some Q&A at the end. So, my two goals for you guys today are to educate you on the evolving landscape, introduce the new regulations that have been coming out in previous years and are evolving into these upcoming years and what to expect, and to equip you with some takeaways, some tools, and some knowledge from today that you can bring to your own companies and organizations, and bring to

your higher level and say, here are some risks that are coming up, here's what I've learned, and here's what we can do about it. I also wanted to talk about what we won't cover, because I know there are a lot of very low-level tech talks today. This is not one of them. We're basically just going to go over the broader compliance, governance, and risk considerations. I'm not a very hands-on technical person. My business partner is, and he's in the room today, so if you want to talk to him, definitely do that. So we're just going to go over some of the high-level regulations and compliance frameworks. So I do want to bring up a

case study. It's a very well-known case study: the Samsung case study from 2023, where three separate incidents happened in which employees from Samsung were inputting very sensitive Samsung information into ChatGPT. This included data from, I think, the HR section, and also from the semiconductor equipment, which again was very highly sensitive data. This violated not only data protection protocols, but it also raised alarms about some of the risks that are apparent and emerging within organizations, because at this time this was a fairly new area of tech for everyone in the whole world. So we didn't really know what to put in

place at the time. The immediate reaction from Samsung was, of course, to just ban all of the GenAI tools, because again this was ChatGPT, and enforce their internal policies, but at the time they didn't have any real internal policies around AI usage. This led to a strategic shift in internal AI development for Samsung; they ended up, I think, creating two internal AI tools, and this really highlighted the need for guardrails, including proactive compliance, not only for Samsung but also for other organizations in different industries. So when we look at this case study, although it didn't lead to the creation

of new laws, it really opened everyone's eyes to what could we have done to prevent this, or what can we put in place now for these guardrails. So some of the things that I would have put in place first were these clear AI usage policies, as I said before. We all know about our acceptable use policy or BYOD policy, those policies that talk about what we can use and what we can't use. We talk about AI as one of those now. So what can we put into AI, what can we use AI for, who can use it, and stuff around that. And then

along with that, the employee training and awareness, going hand-in-hand with the clear AI usage policies. How are these policies going to be communicated to our employees, and how can we make sure they're aware that these policies exist? And then access control and auditing. So again, who can use these GenAI tools, and also who is using them, when are they using them, and what are they using them for? And then the last one, which I think is the most important, is the AI risk assessment. So really, when we bring in an AI tool, do we know what it's being used for in the environment? Do we know

what kind of data is being inputted? Do we know who is using it? And do we know what the output is and what we're going to use the output for? So, raise of hands, who uses AI on a day-to-day basis, whether it's for content creation or for developing? Well, a recent study back in 2024 found that 78% of companies have adopted AI technologies in at least one business function. This was a survey conducted by McKinsey & Company. And we can see that it's not only being used in tech industries or tech functions, but in marketing and sales, product

development, human resources, and finance. So these are the different functions it's being used in, and across the different sectors it's being used in. Of course a lot of technology, but also professional services and financial services. So we can see that it's not just being used in tech anymore; it's being used all over. This is why it's so essential that everyone in the organization is really aware and up to date on not only the risk but the compliance portion of it, and the safeguards that we can put in place. Going on to my next question: who's aware of their AI usage policy, or who knows if their

organization even has a strategy for AI usage? Good. That's a lot more than I was expecting, but that's really good. Um, but in another survey, even a smaller percentage of the people that use AI every day... Are you laughing because you lied? >> Cuz we don't have one. >> Oh, because you don't have one. Yeah. Well, even a smaller percentage here, we can see, are even aware that they have a strategy; upper management is not aware that there are even new AI compliance frameworks or regulations out that they can or have to follow, and that

people in the organization are not aware of these strategies either. But we'll see later that people in organizations, as we can see here, are so ready to use AI, but we know that AI compliance is an organization-wide effort. So we're going to go into a little bit of the current state of AI regulations. While we go through these, I want you guys to kind of see what the recurring themes are through the different state-to-state or federal regulations, or the different frameworks that we go through, as well as what's conflicting here, because we'll see that there are different regions or

different industries that might have conflicting views and different priorities. So, we're going to start with the EU AI Act, because this one is pretty much the first act that came out that really put a hard regulation on anything. If anyone's familiar with the GDPR, that's a pretty strenuous framework, and it really aligns with the EU AI Act. What the EU AI Act does is split up AI systems into different categories that we'll get into in a bit, and it classifies them by different risk levels. It applies regardless of whether the organization is based in the EU or not; it applies

if your AI system affects EU citizens, very similarly to what the GDPR applies to. So again, this is a classification of AI systems. We have the unacceptable risk, the high risk, which we're going to focus on because that has the most stringent requirements, the limited risk, and then the minimal risk. So when we talk about unacceptable risk, we're talking about social scoring and mass surveillance, the stuff that's outright banned in the EU. The high risk is AI systems that are used in law enforcement, recruiting, healthcare, finance, even HR systems. These have the most stringent requirements around them. And then limited risk: chatbots and emotion

recognition. This is where, if you guys ever open up a chatbot, it says, "Oh, you're chatting with AI right now. This is not a human." Those are the transparency requirements that the limited-risk type systems are required to have. And then we have the minimal risk: the spam filters, video games, and, if you open up your Snapchat and you're taking a little picture of yourself, those filters that come up. That's the minimal risk; those really don't have any requirements. So, talking about the high-risk AI systems, what you'll see, if you are so compelled to read the EU AI Act (it's online), is that it

talks about governance throughout the entire AI system's life cycle. Not just when you're creating it, not just when you're deploying it, but through the entire life cycle, right? So the first thing it talks about is risk management. You have to have a risk management system throughout the entire AI's life cycle. We have to make sure that we're governing all the data that's being put through the system and going out of the system, and we also have to ensure that this data is relevant, accurate, and complete. You also have to make sure that throughout the system's life cycle, when you're developing it or

deploying it, all the data inside of it is compliant with the GDPR or any other regulations that you're required to follow. You have to maintain technical documentation of the system's purpose and use, where the training data is coming from, as well as the risk controls and monitoring that you're putting on the AI system. The AI system has to be designed for record-keeping and logging, so the performance has to be tracked. Again, this is very new stuff here when we're talking about developing and deploying AI. The performance has to be tracked for the AI system, and this logging documentation has to be secure, traceable,

and retained appropriately. The AI system also has to be transparent, and the information has to be available and clear for users. This is kind of where we see a crossover between the GDPR and the EU AI Act. If you're familiar with one of the legal ways of getting consent, it has to be transparent, so you have to make sure that your user is aware of what they're consenting to. That's another thing with using AI systems. The AI system also has to have human oversight, so it should be monitored and supervised by humans, and they should be

able to prevent or minimize risk, and they should allow for human intervention or override, so that in case something goes wrong, a human is able to fix it. And of course, for security, there have to be appropriate levels of cyber security and built-in mechanisms for failure. So, I want to go over the US AI policy regulations and laws in a timeline framework first, because there is a little bit of stuff being shuffled around. Going through 2019, 2022, 2023, and 2025, there are different priorities through the different areas of government here. So, we're going to start off with the American AI Initiative, then go into the Blueprint for an AI Bill of Rights,

and then the two different executive orders, and we're going to compare them to each other. So, as of right now, there's no single federal law that regulates AI systems in the US. That's why we see them differentiated by industry and sector as well as by region. But we'll see that a lot of the themes here are about advancing AI R&D in the US, with some of them shifting toward security and privacy. The first is the American AI Initiative, which came out in 2019. This established the National AI Initiative Office, which we still see today, and their main goal was to create

the AI policies and the sharing of resources. These are the core objectives of the American AI Initiative: prioritize AI research and development, enhance access to federal resources, set AI technical standards, build an AI-ready workforce, and engage internationally. So you can see this was a lot about encouraging research and development and collaboration, not only within the US but also globally and internationally with the other big players in the AI area. Then moving on to 2022, the Blueprint for an AI Bill of Rights. This was a policy guide, a non-binding framework, unlike the EU AI Act, and it emphasized ethical AI use while encouraging innovation. So again,

another policy guide that encouraged innovation and R&D. The five core principles here were: safe and effective systems, so again making sure that these systems were tested and monitored to make sure they work as intended, with documented risk mitigation strategies, and making sure that the developers and deployers of AI had risk mitigation strategies in place. The second one is the algorithmic discrimination protections, which we'll see as a common theme in a few of the other frameworks, where the AI system has to be free from discrimination. Data privacy: people should have agency over how their data is used, and the AI system should minimize data

collection and have strong security measures in place. Notice and explanation: another transparency principle here. People should be informed when an AI system is being used. So again, like a chatbot saying AI is being used here, or if something is AI-generated, like a photo or audio or video, it has to be in user-friendly terms, saying, hey, just so you know, AI was used here for this to be generated or altered. And then human alternatives, another theme that we saw earlier. People have to be able to opt out of AI-driven decisions, and there should be a human review process for decision-making

that was done by AI. So we're going to look at the two executive orders that were passed, the first one being signed by former President Joe Biden in 2023. The aim here was to prioritize innovation while mitigating risk. This focused on risk-centered AI governance and protected rights, safety, and national security from AI risk. This executive order really focused on red-teaming safety test results being shared by developers, and called for standards being developed in various industries. And then we have the newer executive order, which was signed earlier this year by President Donald Trump. This revoked all of the safety measures in the previous executive order, and the aim

was to really develop AI rapidly; they wanted to reduce regulatory barriers and really eliminate those obstacles for AI development. The most recent executive order also called for the creation of a 180-day action plan, which meant reviewing the previous executive order and updating it to be more prioritized toward R&D. So now I'm going to go into state-specific regulations. I'll show you guys some tools and some interactive maps and stuff that you can take a look at, because these are literally getting updated every single day. As of 2024, at least 45 states have introduced AI-related bills, and 31 states have enacted legislation or

resolutions. These laws cover diverse aspects such as algorithmic bias, automated employment decision tools, and AI use in education. So, as you can see, AI here is being used for such a wide range of different functions that you can't really just put it under one law or one regulation, right? That's why it's so specific to each region and each industry. So, we're going to take a look at Colorado and California, because those are the ones that are a little bit more developed. So, the Colorado AI Act. That's a really pretty picture; I've never been to Colorado. So the Colorado AI Act is aimed to protect consumers from

algorithmic discrimination. What we mean by algorithmic discrimination is any differential treatment that's based on a trait, like race, disability, gender, anything like that. And the regulated use of high-risk AI systems: by high-risk AI systems, we're talking about employment, housing, credit, healthcare, insurance. So we can see where it's kind of merging with, or doing a little play off, the EU AI Act, and this is kind of where I see a lot of our laws going, state by state. So the Colorado AI Act calls for role-specific obligations for developers and deployers, two of them being the duty of care to protect consumers from

risks of algorithmic discrimination, and doing the risk management and impact assessments. What that means is that developers and deployers of these tools have to say, okay, if you're a person using this, here's the risk of you having differential treatment because of this, this, and this, and they also have to take into consideration what the data sources are. So, moving on to the California AI Act. The California AI Act is a suite of new laws, again, still developing. I think they only have a few that are officially developed, the first one being the California AI Transparency Act, and this is aimed to

enhance transparency in AI-generated content. This imposes requirements on large-scale GenAI providers to ensure that consumers can identify and verify AI-generated images, videos, and audio. I think the California AI Act is this strict because we have laws like the CCPA that have been in motion for such a long time, just like how the GDPR has been in Europe. So this applies to covered providers, which are entities that create code or produce a GenAI system with over 1 million monthly users that is publicly accessible within California. So the requirements here, kind of on the same theme that we've seen, are that they have to have

an AI detection tool, so users have to be able to assess whether the content was created or altered by their GenAI system. They have to have a content disclosure; again, it has to say this was generated or altered by AI. And there also has to be third-party license oversight: if the GenAI system is licensed to a third party, there have to be certain requirements in place in those licenses. So, getting into the evolving nature of AI regulations: over 69 countries have proposed more than a thousand related policy initiatives. So thinking about that, and thinking about the overlap of what I just talked to you guys about, can you

imagine how it is being a C-level executive or a policy maker that has to oversee everything and make rules for one company, given all the different regulations and laws that you guys now know of? We also see that while the EU AI Act has a risk-based approach, like I mentioned, the US is more state-by-state as well as industry-by-industry. We also see a shift from principles to enforcement. One could argue, though, that it's not going very well, right? There are so many different regulations that are still voluntary, and then there are a lot that are binding, and then there are some

that are coming back and saying, you know, actually this isn't a priority right now; we should shift to X, Y, and Z. And then there's also an emphasis on safety, security, and accountability. Governments are prioritizing AI safety, cyber security, data protection, and transparency, whereas before it was a lot more focused on R&D and research. Now we're looking at governments that are prioritizing safety and data protection. What to expect next? What I can see coming is continued regulatory divergence between regions. Like I said, there are different regions that have certain regulations and different laws, and different industries that have different laws and regulations as well. However, there is

going to be a rise in cross-border cooperation. I'll get into, later on, some frameworks and principles that different countries and different regions have agreed would be the best for deployers and developers of AI systems to follow, and then again just some new laws and some new regulations around authenticity, safety and security, transparency, and accountability. So now we're going to talk a little bit about integrating AI compliance. How can we take these new laws and regulations that we just learned and take them back into our organizations, and make sure that we're following what we need to, or that we're ready for an audit, or we're

ready for a customer to come in and say, "Hey, we need some proof of your AI compliance. Do you have that for me?" So, first, talking about integration difficulties: again, there's the regulatory fragmentation and divergence. There are these different patches of state-level and global-level regulations, especially for American companies. And then there's the compliance and legal uncertainty. A lot of these standards, laws, and regulations, if you read them online, are a whole bunch of legal jargon, and a lot of organizations are unsure of which ones apply to them. This makes long-term planning and investing riskier, especially if you're a really large organization. Has anyone ever

tried to plan something during COVID? Like, did you guys try to plan a big event during COVID, like a wedding? Yeah. Raise your hands if you tried to do something like that. How hard was that? You know, because you're like: in a couple of years, how much money am I going to have? How much is this going to cost? Am I even going to be able to fly here? Right? It's like that, because we don't know what the area of AI is going to look like, what tech's going to look like, what jobs are going to be available, or what

new tools are going to be out by that time. And then there's the operational governance overhaul. Now we're bringing in these new risks, because again, AI risks are changing every single day. So now we need a new area of risk officers and a cyber security area just for AI, and they need to be trained on AI risks as well. This can be really resource-intensive for organizations. And then the risks of non-compliance: we have the legal and regulatory penalties. There can be monetary fines; the EU AI Act and the California AI Transparency Act have monetary fines in place if you don't follow them. And then you

can lose your licenses or certifications depending on the industry that you're in, especially if it's a highly regulated industry like finance or healthcare. And then, of course, the reputational damage, the loss of trust and brand damage, and then security risks. A lot of these frameworks and regulations require risk assessments and impact assessments, and a lot of times, if you don't conduct those, you don't know what kinds of risks are still open, especially when it's a system life-cycle type of assessment. And then, as I always like to say, we can't secure what we don't know is in our environment. If those risk

assessments or impact assessments aren't being conducted within the organization, you might not know what AI tools are being used within your organization. Maybe your organization, if you're a Microsoft organization, only allows Copilot for GenAI, but then you have people using ChatGPT, and you have people using Grok, all these different tools where you don't know what data they're inputting, what's coming out, and what they're using it for. So the first kind of framework that came out of all of this, for helping organizations come into compliance with the different laws and regulations, was from the Organisation for Economic Co-operation and Development (OECD).
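As a rough illustration of the "shadow AI" discovery problem just described, here is a minimal sketch that flags traffic to known GenAI services that aren't on an approved list, using outbound proxy logs. The log format, domain lists, and function name are illustrative assumptions for this sketch, not any standard tool or required approach.

```python
# Hypothetical sketch: flag unapproved GenAI usage from outbound proxy logs.
# The APPROVED and GENAI_DOMAINS sets and the log format are illustrative only.

APPROVED = {"copilot.microsoft.com"}          # sanctioned GenAI endpoints
GENAI_DOMAINS = {                             # known GenAI services to watch for
    "chat.openai.com", "chatgpt.com", "grok.com",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for GenAI traffic outside the approved list."""
    findings = []
    for line in log_lines:
        # assumed log format: "<timestamp> <user> <domain>"
        _, user, domain = line.split()
        if domain in GENAI_DOMAINS and domain not in APPROVED:
            findings.append((user, domain))
    return findings

logs = [
    "2025-06-01T09:14Z alice copilot.microsoft.com",
    "2025-06-01T09:20Z bob chatgpt.com",
    "2025-06-01T09:31Z carol grok.com",
]
print(flag_shadow_ai(logs))   # -> [('bob', 'chatgpt.com'), ('carol', 'grok.com')]
```

In practice the same idea would run against real proxy or DNS logs, and the findings would feed the risk assessment and acceptable-use policy discussed earlier.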

They've got a really, really great website, very thorough, and they have what are called value-based principles, which are made for policy makers and the developers, deployers, and users of AI, and then they have the recommendations for policy makers, which we're not going to get into here; we're going to focus on the value-based principles. These principles were adopted by over 47 countries, including the US and EU member states. They are voluntary, but again, they're globally recognized. The principles are pretty overlapping with what we've seen: inclusive growth, sustainable development, and well-being, which really means that AI should help benefit people and the planet;

human-centered values and fairness, so AI systems should respect human rights such as privacy and non-discrimination; transparency and explainability, so the AI system should be understandable and traceable, and there should also be relevant communication about how decisions are made by the AI system; robustness, security, and safety, so AI should function reliably and securely throughout its life cycle, and there should also be safeguards around these risks; and then accountability: organizations that are developing and deploying these AI systems should be held responsible for the outcomes. So again, this is a really great website. If you just look up the OECD AI principles, you can do a deep dive there and click on each of the principles;

it'll show you what areas have initiatives for each of the principles. Next, I want to go through the NIST AI RMF. If you guys were at my presentation last year, I did one on the NIST RMF, the Risk Management Framework. The NIST AI RMF is a voluntary framework that helps organizations manage risks associated with AI development, deployment, and use. It helps organizations identify, assess, manage, and monitor risks, and it's a flexible and adaptable framework across sectors and AI maturity levels. There are two main components here: the NIST AI RMF Core and the AI RMF Profiles. We're going to

focus on the Core for today. If you're familiar with the NIST RMF, the AI RMF Core is drawn the same way, as a circle with Govern in the middle, because throughout the risk assessment process we always want policies and procedures in place, and we keep drawing back to that governance. Govern covers the culture of risk management across your organization: what risk treatments do we use, what is our acceptable risk tolerance, and who is involved in the risk assessment process. Then there's Map, which maps the context and scope of the AI. What is the AI used for? What are its goals and business objectives? Who will be impacted by the system? Is it customer-facing or internal-facing? Is it a general-purpose GenAI tool? And, very importantly, what are its data sources: who will be putting information into the AI system, and what kinds of information are allowed to go in? Next is Measure, where we use quantitative and qualitative methods to assess AI risk and score the likelihood and impact of each risk. If you've ever seen those risk matrices where items get scored by risk level, this is the step where we build them. It's also where we test how well the AI system holds up under different conditions, and where vulnerabilities surface so we can work out how to detect and mitigate them. Finally, Manage: we prioritize the risks by impact and act on them, and we do it proactively; that's why we implement these risk management frameworks in the first place. From there, we continuously improve the models and the risk management program itself, keeping them aligned with our business objectives and with the regulations we've seen.
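The Measure and Manage steps just described can be sketched in a few lines of code. This is a minimal illustration, not anything from NIST itself: the risk names, the 1-to-5 scales, and the banding thresholds are all assumptions that, in a real program, the Govern function would set for the organization.

```python
# Measure: score each AI risk on a likelihood x impact matrix.
# Manage: sort by score so the highest-impact risks are handled first.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- illustrative scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def level(self) -> str:
        # Example banding; real thresholds come from your risk tolerance
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

risks = [
    AIRisk("Sensitive data pasted into a public chatbot", 4, 5),
    AIRisk("Model output biased against a protected class", 2, 5),
    AIRisk("Hallucinated answer reaches a customer", 4, 3),
]

# Manage: act on the highest-scoring risks first
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.level:>6} {r.score:>2}  {r.name}")
```

The multiplication and the three bands are the simplest possible choices; many organizations use 3x3 or 5x5 matrices with asymmetric bands, which drop in without changing the structure.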

Moving on to ISO. If anyone uses ISO, it's another globally recognized framework. You have to pay for it, so I don't use it, and it's expensive, but it's a great framework. It builds on top of ISO 31000, ISO's general risk management standard, so it's comparable to NIST but with somewhat different core principles. Inclusivity: ISO's position is that AI brings in a whole set of new risks, so we have to bring in a whole set of new stakeholders, people we normally wouldn't pull into the risk assessment process. Transparency and explainability: AI risk decisions have to be understandable and traceable. Dynamic and continuous: like I said, AI is being updated and changing every single day; it isn't static, and the risks evolve right along with it, so the process has to be continuous too. It's not only developers and deployers updating the AI; customers are expecting more, and legal and regulatory personnel are also updating every day, so you have to learn, adapt, and improve how you manage your AI risks, because everything else in the AI realm is evolving. Using the best information available: this is an interesting principle that says AI should use the most up-to-date information possible. AI draws on current and historical data, as well as data still coming in, so developers and deployers have to use the most current data they can. This is also where developers and deployers document the limitations of the AI, along with the gaps that need to be taken into account, and where we want detailed records of how the AI is being used in the environment so we know what to expect. And then human and cultural factors: developers and deployers of AI have to be aware of how their AI system is affecting society. So, adapting to new regulations. This is where we get into how you can take all of this back to your own organizations. The first thing you want to do is think about which regulations you currently follow. Are you aligned with NIST or ISO? Which of these frameworks can you layer on top of what you're

using right now? Next, identify your current state and your target state by performing an organization-wide risk assessment, and think about your contractual obligations. When I'm brought into an organization and they ask me to do a risk assessment, the first thing I say is, "Give me all your VPs." We gather the VPs from finance, from HR, from all the tech teams, and I ask, "What are you using, and do you know the risks of those tools?" Usually they know what they're using, but they leave a couple out, because they don't realize there's risk there. And especially now that almost every product in an organization is getting a new AI component, it's really important that we conduct these risk assessments regularly so we can catch the new AI risks as they appear. So: conduct an organization-wide risk and gap assessment, think about what kind of data is being input, and identify your current state and your target state. Another thing I really want to stress is not to reinvent the wheel. Use your current risk management, cybersecurity, and data governance structures to cover the AI-specific risks. You don't have to go out of your way and run a separate, brand-new risk assessment just for AI tools.
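As a sketch of what that organization-wide inventory and gap check might look like, here's a toy example. The tools, owners, and field names are hypothetical, not from any framework; the point is simply to flag anything touching non-public data that has never been risk-assessed.

```python
# Toy organization-wide AI inventory for a gap assessment.
# Tools, owners, and the schema are made-up examples.
inventory = [
    {"tool": "Public chatbot",  "owner": "Marketing",   "data": "public",      "assessed": True},
    {"tool": "Code assistant",  "owner": "Engineering", "data": "source code", "assessed": True},
    {"tool": "Resume screener", "owner": "HR",          "data": "personal",    "assessed": False},
]

# Gap: any tool handling non-public data that has never been risk-assessed
gaps = [t for t in inventory if t["data"] != "public" and not t["assessed"]]

for t in gaps:
    print(f"Needs assessment: {t['tool']} ({t['owner']}, {t['data']} data)")
```

In practice this table comes out of those VP interviews, and the interesting rows are exactly the ones nobody thought to mention.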

Instead, incorporate the AI risk into what you already have. Treat the AI as a new asset or a new piece of software. If you already have a third-party risk management program, fold AI into it. And as you take on the new laws and regulations, include those as risks too: maybe we're at risk of falling out of compliance with this law or that one. Then, of course, build a culture of AI compliance by establishing an AI usage policy and communicating the risks early on. Just like with any new product or any new risk, put an acceptable AI usage policy in place that spells out what kinds of data can and cannot be put into the AI.
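A policy like that can even be expressed as a simple allow-check. This is a toy illustration with made-up tool names and a made-up data classification ladder; a real policy lives in your governance tooling, not a script, but the logic is the same.

```python
# Toy acceptable-use check: may this data class go into this AI tool?
# Tool names and the classification ladder are hypothetical examples.
APPROVED_TOOLS = {"internal-copilot", "public-chatbot"}

# Highest data classification each approved tool may receive
MAX_CLASS = {"internal-copilot": "confidential", "public-chatbot": "public"}

# Classification ladder, least to most sensitive
ORDER = ["public", "internal", "confidential", "restricted"]

def allowed(tool: str, data_class: str) -> bool:
    """Allow only approved tools, and only up to each tool's data ceiling."""
    if tool not in APPROVED_TOOLS:
        return False
    return ORDER.index(data_class) <= ORDER.index(MAX_CLASS[tool])

print(allowed("internal-copilot", "internal"))   # within the ceiling
print(allowed("public-chatbot", "confidential")) # blocked by the ceiling
print(allowed("random-free-ai", "public"))       # unapproved tool
```

The two lookup tables mirror the two halves of the policy the talk describes: which systems are accepted, and what data may go into each one.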

Also define which AI systems are accepted and which are prohibited, and mandate an onboarding process for new systems. For continuous learning about emerging AI threats, the MITRE ATLAS matrix is a really great resource. To stay up to date on regulatory changes, the IAPP, the International Association of Privacy Professionals, has policy trackers that are updated regularly, where you can see where legislation stands in different areas, and the OECD again has an interactive map where you can see the initiatives. And then self-assessments:

there are EU AI Act compliance checkers, which work a bit like a quiz: they walk through questions about your organization and the controls you have in place, and then tell you whether or not you're in compliance. It's essentially a simple gap assessment, and it's a good place to start. And here are just some of my final thoughts. AI regulations are evolving, and it's really important that organizations stay informed and adaptable. To stay compliant, organizations must operationalize these principles by aligning with the frameworks and putting in place all of the safeguards we mentioned. Thank you. And here's my information. If you have any questions, please feel free. I think we've got about five minutes.