
Whether you're a practitioner, a C-suite leader, or working in development in the area of cybersecurity, we all have a stake in this game, right? That's what this is trying to show us: why we should all care about AI, and what brings it all together is AI governance. So, show of hands, who's in these areas? Probably everyone in this room, right? From all domains, from everything that's happening in the environment. So that's a really good starting point to tell you guys why we should all care about this. This is a graphic from the ISACA AI readiness assessment. I'm not going to go through all the questions, but they basically break it down to three main areas. First, benefits, or ROI: how is it going to make money for the organization? That's first and foremost, the most important thing to any organization, to any business, to keep the lights on. Is it aligning to the overall strategy of the organization, the overall end goal, like other projects? Second, things like AI risk: risk management, breaches, resilience. You've heard those things before, and that doesn't really change with AI; we're not reinventing the wheel, we're bringing all these things together. Maybe ethical AI use is a little bit of a different space from the risk perspective. And third, the resourcing: HR, training, onboarding, procurement, communications, reporting, marketing, all those areas where AI seems to have a prominent play in the market, to get more people to buy a product, to be in line with a brand. So that's actually a really good starting point to talk about the things that will drive AI readiness for your organization. So why does responsible AI matter? AI makes high-impact decisions: hiring, budgeting, like I mentioned, the high-end strategic initiatives happening at the top. Operational resilience, right? Nobody talks about cyber resiliency as much as we should. Risks include bias
risk, lack of transparency, misuse, and data privacy implications, right? The adoption of AI, especially large language models (LLMs) and retrieval-augmented generation (RAG), has dramatically increased the speed and volume at which risk is introduced: sprawl, like I mentioned in the previous talk. So this is from The Hacker News, a really good article that came out a couple weeks ago. Look what it says. Consider an internal support chatbot powered by an LLM. When asked how to connect to a development environment, the bot might retrieve a Confluence page, if you're using Confluence for example, containing valid credentials. The chatbot can automatically expose secrets to anyone who asks the right question, and anyone with access to the logs can see them. Worse yet, in this scenario the LLM is telling your developers to use those credentials, right? The security issues can stack up quickly. So that's a security implication of AI. The situation's not hopeless, though. In fact, look at that: if proper governance controls are implemented. Not more AI, not more tech, not more tools, not more things thrown over the fence. No, they're basically saying the way to deal with the root cause, the way to fix the root cause, is appropriate governance and no more secrets. We talked about secrets in the last talk. Of course, secrets management, hardening secrets, and making sure you're eliminating them in time is really important. But that's what caught my eye: The Hacker News, hackers, talking about governance as your saving grace. Which is interesting, right? Some of the pillars of AI governance: your ethical principles, data governance. So if you have a data governance program in flight, you're right there; you have a real starting point for your AI management and AI governance. Model transparency and explainability. Risk and impact assessments. Regulatory compliance. Security and robustness. Human oversight. Who here, show of hands, conducts security risk assessments in their shop? I want to see everyone's hands, right? That's a problem; not everyone's doing that. That kind of level sets the stage for
understanding what we can do to secure ourselves across the overarching body of work. There are a lot of definitions of AI governance; everywhere you go there's a mixed bag. This is what I found as I did the research: the frameworks, policies, and processes for ethical, secure, compliant AI at every layer, something that should be near and dear to your heart. It encompasses ethics, accountability, transparency, security, compliance, and risk, framing AI governance and the associated compliance as maturity. Has anyone performed a cyber maturity assessment before? Or a risk assessment, a maturity assessment? Right. So now a cyber AI maturity assessment, that's a new thing, and it's actually happening; you'll hear war stories about that as more and more of these maturity assessments get off the ground and more and more organizations do them. I actually work for an organization that builds out its full governance structure, and particularly its body of work, around its maturity rating and the story behind it. This is my favorite definition that I found, courtesy of Alex Robinson; it's not my definition. AI governance is not the same as AI ethics, responsible AI, or AI safety, but it brings everything together. It's kind of like the glue, right? We hear GRC, governance, risk, and compliance; we hear about the arc of bringing everything together. Same thing here. AI
governance refers to the systems, rules, and oversight structures that guide how AI is designed, deployed, and managed. It identifies who's accountable. We talked about governance structures and accountability; I think the last speaker said if there's not a person accountable for AI, run, go somewhere else, right? It covers how risks are assessed and what happens when things go wrong. AI ethics focuses on the values behind the tech: questions around fairness, accountability, and impact. Responsible AI is about putting those values into practice through policy, process, and internal controls. Show of hands: who has an AI policy in their shop right now? That's good. We need everyone to have an AI policy. If you don't have something from a governance perspective, from a documentation perspective, to drive that, you're already starting off on the wrong foot. AI safety is more technical; it's concerned with unintended consequences, alignment, and system-level risks. AI governance connects everything. Connects all of it. It makes sure there's a clear structure for decision making and accountability. That's why AI governance has been brought to the forefront in the last few years, and why it's only going to get more important as we move forward. Now, some of the inherent risks. If you're familiar with risk assessment, inherent risk is the raw risk, and these are some of the raw risks that we
talk about when we deal with AI: identify the AI-specific challenges and drivers. What drives this? Your inventory and asset management: you need to know what's actually happening, what's actually going on in the environment. That's first and foremost. Then complexity: AI is rapidly evolving, we all know that, so much so that AI stacks outpace, outman, and outgun everything we're doing from a security perspective. Data demands: huge data loads and data strains, financial implications, and privacy considerations. And continuous updates to AI models and model behaviors make oversight that much harder, so there's really more onus on us to bring this in.
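The inventory and asset management piece can be sketched in a few lines. This is a minimal, illustrative sketch only; the asset names, fields, and approval flag are my own assumptions, not anything from the talk.

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset inventory (illustrative fields)."""
    name: str                 # e.g. an internal system name
    model: str                # underlying model or service
    owner: str                # accountable person or team; accountability is the point
    data_classification: str  # e.g. "public", "internal", "confidential"
    approved: bool = False    # has it been through governance review?

inventory = [
    AIAsset("support-chatbot", "hosted-llm", "IT Service Desk", "internal", approved=True),
    AIAsset("resume-screener", "in-house-llm", "HR", "confidential"),
]

# Surface unapproved (potential shadow-AI) assets for review
unapproved = [a.name for a in inventory if not a.approved]
print(unapproved)
```

The point of a record like this is that every entry carries an owner and a classification, so "what's actually going on in the environment" becomes a query, not a guess.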
Right? Make sure that we have really good visibility into what's actually happening. Has anyone heard of the crawl, walk, run kind of perspective? And who has a privacy program in their shop? That's your starting point. I mean, I heard a leader say, "We just don't know how to get started, so we're just going to block everything." That's not the right approach. If they have a privacy program in place, use privacy to inform your perspective on AI. An expanded attack surface means more need for data governance, not just AI governance. Think PIAs: has anyone heard of privacy impact assessments? They're part of the privacy program. And PETs, does anyone know what that stands for? Privacy-enhancing technologies; those are part of the privacy program as well. Know what those acronyms mean and how things are actually being labeled, identified, and classified. Then a risk-based approach. I hate when people say "we're taking a risk-based approach." Does anyone else hate that? What does that even mean? What the hell does that even mean, right? Look at your cyber risk management program and your security risk assessments: incorporate the question "are you using any AI, any LLMs, any RPA?" into your assessments. People aren't even asking that question, so how can we actually get good visibility into what's happening? Think about progressing from a baseline, descriptive
perspective all the way to normative controls and a normative control environment. So think CMMI: is anyone familiar with the CMMI maturity matrix? Think of building from a zero to a five. Just level set, see where you're at; start at a two, start at a one, whatever it is, and build up the foundation you're going forward on. Leverage existing frameworks like the NIST AI RMF. Has anyone heard of the NIST CSF? I think everyone has, right? Well, AI now has its own NIST framework, the AI RMF. It's not the only framework out there, but it's a good starting point, and it can complement your playbooks as you crawl, walk, and run your way to AI governance dominance, because you do have to dominate this.
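That level-setting exercise can be sketched as a toy self-scoring pass over the four NIST AI RMF functions. The ratings, the CMMI-style 0-5 scale mapping, and the min/average rollup are my own illustrative assumptions, not part of either framework.

```python
# Hypothetical self-assessment: rate each NIST AI RMF function 0-5, CMMI-style.
# The numbers and the rollup logic are illustrative, not prescribed by NIST or CMMI.
ratings = {"Govern": 2, "Map": 1, "Measure": 1, "Manage": 2}

overall = min(ratings.values())  # you're only as mature as your weakest function
average = sum(ratings.values()) / len(ratings)

print(f"overall level: {overall}, average: {average:.2f}")
```

Using the minimum as the overall level is one defensible choice: it keeps a strong Govern score from masking a weak Measure score, which matches the "level set, see where you're at" spirit.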
Like I mentioned, use privacy to fuel that program. Privacy and privacy programs help minimize the potential risks in your AI governance relationship. PETs, privacy-enhancing technologies, can be implemented in various applications, including secure training of AI models, generating analytics and statistics, and sharing data between different parties. PIAs, privacy impact assessments, manage privacy risk in AI, especially where it intersects with cybersecurity. By incorporating AI into the PIA, organizations can identify and mitigate potential risks and privacy issues, make sure they're complying with the data privacy laws that are in place, and build trust with their users. This includes data collection, processing, evaluation transparency, the explainability of AI models, implementing safeguards, and maintaining data retention and deletion records: what's actually happening once systems are offboarded and getting ready to be sunset. So, like I mentioned, frameworks. The NIST AI RMF incorporates four main functions: Govern, Map, Measure, and Manage. They're calibrated to the AI RMF categories under risk maturity reporting: 72 subcategories of AI characteristics across data, training, and inference models, emphasizing accountability, transparency, reliability, safety, fairness, privacy, and resilience. So think the CMMI model, like I mentioned, from a maturity perspective. And think COBIT: is anyone familiar with COBIT from a governance perspective? I
know I work for an organization that uses COBIT as its governance model, incorporating it to find out where you're at, find out your maturity, find out what's actually happening. Other resources around that: the MITRE AI Maturity Model, with its six dimensions, and you're already coming in with the MITRE ATT&CK framework, so why not incorporate that into your AI governance relationship to deal with the strategy, technology, organization, and performance-related items in your shop? Some of the core components of AI governance as it relates to cybersecurity: adversarial risk, and having model cards and risk scoring. There's really good information out there on model cards and risk scoring; Google has a pretty good program for that. The operational objective includes bringing in the legal teams and the regulatory requirements. I asked this in my talk last year: any lawyers in the room? No? Oh, yeah, that's right, we have a lawyer in the room. So it's really important to have that relationship with the legal teams and legal departments. It's really important to get them in the room, because ultimately they make the determination when there's an incident, when there's a breach. It has to come from them, not from the security teams, not from anyone else. Then having an AI asset inventory, your asset
management program: your applications, your data lakes, your CMDBs. Existing AI services, tools, and ownership should be cataloged. Does anyone have an SBOM program in their shop, a software bill of materials? Has anyone heard of SBOMs? That's a really important delineation, to bring that in and have all those things in place so you have full visibility into what's actually happening. Think of the whole iceberg picture: everything visible on top, and nobody knows what's happening at the bottom. Same thing. Who's incorporating threat models in their shop? Threat modeling is the first thing you want to do when something new is built, something new is stood up: get to threat modeling. Work with the architects to find out all the security implications, and either gate it on a requirement or let teams choose their own adventure depending on how they're going to remediate, before the cybersecurity teams and security risk assessors get their hands on it. Having threat modeling is definitely a cost-effective way to identify risk, protect data, shield privacy, and ensure a security-compliant integration with the business. Then testing, evaluation, verification, and validation metrics. We all want KRIs and KPIs around these things. That's how you build those metrics, that's how you build those guardrails, and that's how you build the reporting
for the C-suite and the leadership teams. Data governance matters because it builds bridges. We talked about building bridges: everyone has skin in the game, like I mentioned at the beginning of the presentation. Everyone has a part to play, from the high-impact decision making, to the hiring that's actually happening, to finance, and even people like us. Everyone thinks the most existential threat from AI is to people's livelihoods, right? And like I mentioned, when you take the CISSP, whenever you see a question around loss of life, the first thing you do is select the answer that protects against loss of life, and then you go down the road from there. Same thing here. Risks include bias, lack of transparency, misuse, and data privacy, and we're going to talk about the OWASP list coming up on the next slide. Governance enables trust, accountability, and innovation. So you're going to see a theme here: accountability, accountability, accountability. We have to have accountable stakeholders in the room. Show of hands, who's familiar with the OWASP Top 10? We have Top 10s for everything now; I think they just put one out even for non-human identities. These are the Top 10 for LLM and GenAI vulnerabilities. But if you think about it, if you look through them and immerse yourself, they really map to what we're already doing from the AppSec side, from the security side. We're not reinventing the wheel; a lot of them are basically almost the same. So incorporate those security parameters and benchmark against them, and you're going to have coverage for AI. Maybe some things are a little bit more AI-specific, but at least you have the majority of things tucked in tight and evaluated, if you're following this sort of approach to validating all of the Top 10 vulnerabilities. Next, model lifecycle governance. With AI models, you usually start with the
model design, then the training of the model, validation, deployment, monitoring, and finally retiring the model, because I think I just saw a statistic that models are being retired every three or four months now. It's insane, the churn and the amount of rapid deployment that's happening. Some of the key documents to mention: look at model cards, look at your datasheets, look at decision logs. That's the foundation of your AI governance story as you're evaluating, as you're testing, as you're scaling up in AI. From a compliance perspective, and I'm a compliance professional as well, I did that for a few years and I have some folks in the room that I supported from a compliance perspective, there's a lot happening. The EU AI Act: has anyone heard of this one? Does it look familiar? It went into force in August of last year, and organizations are in full-blown planning right now, even on the US side. It's like when GDPR came into the fold, kind of the same thing happening now, but we do have extra time to prepare. The US AI executive order, that's another one that's out there. Then the OECD, and this is important: the Organisation for Economic Co-operation and Development principles were adopted by the United States as its flagship AI governance commitment, and a lot of countries signed on. Some of the principles: trust in AI, responsible development and use, principles for businesses and organizations, and leveraging the benefits of AI while mitigating the risks. So the OECD principles are kind of like the US signing a piece of paper to bring in cooperation and partnership with other countries. ISO: has anyone done an ISO 27001 audit? Any ISO auditors or analysts, or anyone with an ISO program in their shop? Well, ISO now has 42001. If you're familiar with ISO, we have something called an
ISMS, an information security management system. Now there's an AIMS, an AI management system. Sound familiar? It's basically the same thing, but for AI: focusing on the implementation, monitoring, and improvement of the AI management program. We're not reinventing the wheel, no worries. Next, AI regulation: TRAIGA. This one's brand new; I just put it in my slides yesterday. Does anyone know what this is? TRAIGA, like "traiga" in Spanish, right? This is actually going to be signed by Governor Abbott tomorrow: the Texas Responsible AI Governance Act. Has anyone heard of it? I didn't even know it was actually coming in, and it's great timing, because in my talk last year I was talking about the Texas privacy legislation that had just passed. This is the second iteration. The first iteration was very overarching, very massive, but then commercial and C-suite leaders like Elon Musk and other folks got their hands on it and slashed it down. So it's more governmental-agency driven now, but it still has really good information. It's going to be largely driven by DIR, and I'm really familiar with DIR; they're going to be advocating and steering the work streams from a governance perspective. But what's important is that we now have AI governance regulation in flight. If it gets signed tomorrow, it comes into effect September 1st of this year. So think about the stick: we always talk about compliance being the stick, and there are going to be more sticks to hit people with, depending on what's actually happening, especially with the implications of violations and fines. So, going back to AI governance in regards to cybersecurity. Red teams are always going to be mimicking what attackers are doing, looking for weaknesses in the models we have deployed. Hence the quote-unquote caveat: the models we know we have deployed. There's also shadow IT and shadow
AI: people just running off doing their own AI without teams actually being aware of it. That's a problem in itself. The blue teams are defending and improving resilience, making sure that red team findings are remediated and have playbooks built out. The governance and GRC folks are going to be looking at improving the risk posture. And I totally get what you said about the word "risk," but that's why you have to have that partnership, because sometimes the technical teams don't really want to talk to the GRC folks; they feel GRC doesn't have the right professionalism, the right acumen, the right mindset from their perspective. Remember when IT and security used to clash? It's really important to get rid of that clash mentality and work together, because if you have that clash mentality, things are going to get missed, things are going to happen, and you're going to have gaps. From a red team perspective: reconnaissance. Any red teamers in the house who perform these activities? AI-driven data gathering and adaptive planning. Testing and scanning: whatever scanning tools you have in place, with real-time output evaluation. Vulnerability analysis: vulnerability management, that's huge, false-positive reduction, context-aware prioritization, and code review. Any familiarity with code review? Who has code review programs in their shop? Good. Then exploitation: AI-
suggested attack paths, proof-of-concept script generation, autonomous agents. And reporting: automated reporting, report visualization, insights, and remediation, because if it didn't get documented, then what? It didn't happen. Documentation is key, guys. We're not reinventing the wheel; we need the documentation for those audits, for the good old audit teams we work with in the third line of defense. And then there's pen testing: you're building tests specifically for your AI program, simulating attacks. Look at data poisoning, which is one of the biggest attack vectors when it comes to AI right now, and logic flaws in dependencies. Data dependencies are huge, and you're not going to know your data dependencies unless you know the entire purview of the data in your environment. The biggest GenAI use I've seen when it comes to AI red teaming is phishing. Phishing is usually the most important initial attack vector; we know it, everyone talks about it, blah blah blah, and I don't want to give you the whole spiel about phishing. But from an AI perspective, attackers are really harping on it: generating scenarios, automating discovery and exploits, predicting the weaknesses in your systems, bypassing your AI defenses with AI-powered tooling, and adapting their attacks with intelligence, bringing it
all together. This is a never-ending lifecycle; it's continuous, and you have to continuously defend against these things from an AI perspective. Now, a real-life scenario. We all know Tesla, and we all know what's happening in the current political environment. When Tesla's program first got off the ground, I think their bug bounties paid what, 2K, 3K? Now it's definitely less money. But as an example, this real-life scenario is finding AI-powered autonomous vehicle vulnerabilities, and this is usually how it works: simulated attacks on those cars and autonomous vehicles, researching the communication models they're using, data poisoning attacks, like I mentioned, to misinform the AI, extracting data via exfil, and adversarial AI against their sensors and vision systems. And encryption: make sure everything's encrypted, because they go after the encryption. The red teams find these things: they find weak encryption, they put that into their playbooks, they make sure adversarial AI detection tools are in place, and then obviously they send out all the recommendations: the data evaluations, the end-to-end encryption, making sure you're on TLS or whatever the approved, latest-and-greatest is and not some deprecated encryption format, adversarial AI detection tools, and redundancy in the navigation systems. So this is a real-life scenario of how AI red teaming can power scenarios to protect an autonomous vehicle. There's a lot happening in this space. I literally went to San Francisco for the first time two weeks ago, and I couldn't believe it: the thing was driving me around downtown. They have some in Austin now, and I think it's coming to San Antonio. It's going to be a day, right? I want to know that when I get in that car, I'm not going to be worried about it doing 50 miles an hour while I'm trying to get out.
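One of those red-team findings, flagging deprecated encryption, is easy to picture as a check. This is an illustrative sketch only: the protocol list, hostnames, and inventory shape are made up for the example, not from any real vehicle program.

```python
# Illustrative red-team style check: flag deprecated TLS/SSL protocols in a
# (made-up) endpoint inventory. Hostnames and protocol values are examples.
DEPRECATED = {"SSLv3", "TLSv1.0", "TLSv1.1"}

endpoints = {
    "telemetry.vehicle.example": "TLSv1.2",
    "ota-update.vehicle.example": "TLSv1.0",   # weak; should land in the findings
    "nav.vehicle.example": "TLSv1.3",
}

findings = sorted(host for host, proto in endpoints.items() if proto in DEPRECATED)
print(findings)
```

In a real engagement the protocol would come from scanning rather than a hand-written dict, but the governance step is the same: findings feed the playbook, the playbook drives remediation.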
Right, so these sorts of things are important, and they become real life at that point: human loss, right? From a security and adversarial robustness perspective, look at all the examples that are out there. Research what's actually happening, be in the know: The Hacker News, Dark Reading, podcasts, LinkedIn, whoever you follow. Follow me on LinkedIn; I'm on there. It's good to just be out there, see what's happening, learn from those examples, and ask: can this happen in our shop, and how can we protect against it? Look at data poisoning, backdoors, everything that's being reported. And then harden everything you learn; you want to harden your models as much as you can. Testing, testing, testing: red teamers have to look at the outputs of those events, and from that input, the teams can be better prepared to harden their models. Now, this one's funny. Governance challenges: explaining black-box AI decision making and trust, and adversarial AI attacks that evolve faster, that outman, outgun, and outpace you. So if you don't have proper governance across all these Microsoft products and permissions, if one area fails, it all falls over. And then you have a SharePoint with poor data governance and you put Copilot on top of it: you're pouring gas on the fire. It's a funny picture, and when I saw it I was like, "Yeah, I've got to share this." But that's what we're trying to avoid here; that's what we're trying to prevent. And how can you prevent it? Effective AI governance. That's just it, the bottom line. From a blue team perspective, the blue team validates your AI-based decisions: the human element. Humans always have to be in the room to collaborate, to reduce the false positives, to review everything that's actually happening. And I'm tired of people whining that AI is going to take our jobs. It's not going
to take your jobs. But you need to engage with the program and understand how you can use AI to leverage your job, to make your job better, or understand how you're going to build it into your cycle. That's where you need to be, instead of whining that it's going to take your job. Governance, bringing in GRC, ensures the reliability and ethical use of these things. From an offensive perspective: automated reconnaissance, like I mentioned, phishing scenarios, social engineering (always be looking out on that side of the house), assisted malware generation, exploit research, and evasion of your blue teamers' detection mechanisms. There's high urgency for defenders to adopt AI responsibly. Monitor AI behaviors and outputs. Use anomaly detection, dashboarding, and reporting to the C-suite teams: if you don't report effectively to leadership on what you're doing, you're not going to get that buy-in, and you're not going to get that money for your shop. Defend against rogue models and shadow AI. Like I mentioned, shadow IT and shadow AI go hand in hand; they work together, and if you don't have those things delineated and identified, you're going to have issues. Because if you block all AI, which I have heard leaders tell me they do, people are just going to use AI on their phones, on their own personal devices, and it can get murky, it can get tricky real quick. So from a blue team perspective, to wrap it up: detect anomalies in traffic, automate your triage, predict attack vectors, reinforce your firewalls, your security rules, your appliances, everything you have from a security perspective, and flag anomalous behaviors. Flag, flag, flag, to create your playbooks, to create those rule sets, to understand "this is what we're seeing from an AI perspective." And they're going to continue to change, so you
have to continuously evaluate them, consciously and repeatedly. On the purple team side, we're all in the same boat, working together, aggregating that data, looking at the defense cycle modeling that's happening. And training: training is huge. Like I mentioned in my talk last year, if you don't have the training in place, it's not going to happen. The number one thing that compliance examiners and regulators look for is training; that's the baseline foundation for any regulation, any mandate: GLBA, FFIEC, whatever you want to put in front of it. Training is number one, first and foremost. And then test those playbooks. Now, limitations and challenges. On the technical side: context window constraints, token limits, hallucination, data leakage, and scope creep. Scope creep is a real thing; it's a big deal. On the non-technical side: regulatory limits on data sharing, ethical boundaries, over-reliance on AI, and your costs. Money matters; you cannot just boil the ocean and think you're going to have all the solutions and all the money in the world to pay for this. Mitigations: domain knowledge, human oversight, stringent approval workflows. And for the people who know what RAG stands for: retrieval-augmented generation, a technique that combines the power of large language models with external knowledge sources, like documents and databases, to produce more accurate, relevant responses. RAG systems first retrieve relevant information from a knowledge base based on the user's query, and then use this retrieved information to augment the input prompt to the LLM, leading to more informed, data-driven responses. But there are a lot of security implications to that. That's why it's kind of ironic that everyone went to the cloud, and now with AI they're coming back in house, standing up their own AI systems with their own guardrails and security parameters, because they don't want their information out there to be exposed, to be trained on, to be evaluated. You're looking at intellectual property, for one.
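The retrieve-then-augment flow, plus the governance control from the chatbot story earlier (redacting secrets before retrieved text reaches the model or the logs), can be sketched end to end. The documents, the secret-matching regex, and the word-overlap retrieval heuristic are all illustrative assumptions, not how any particular product works.

```python
import re

# Toy knowledge base; one entry deliberately contains a fake credential,
# like the Confluence-page scenario from earlier in the talk.
DOCS = [
    "To reach the dev environment, connect to dev.example.internal with password=hunter2",
    "The VPN portal is vpn.example.internal; use your SSO login.",
]

def retrieve(query: str) -> str:
    """Naive retrieval: return the doc sharing the most words with the query."""
    q = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

# Governance control: redact obvious secrets BEFORE the text reaches the LLM or logs.
SECRET_RE = re.compile(r"(password|token|api[_-]?key)\s*=\s*\S+", re.IGNORECASE)

def redact(text: str) -> str:
    return SECRET_RE.sub(r"\1=[REDACTED]", text)

context = redact(retrieve("how do I connect to the dev environment?"))
prompt = f"Answer using this context:\n{context}"
print(context)
```

A real pipeline would use embeddings for retrieval and a proper secrets scanner for redaction, but the ordering is the governance point: the filter sits between the knowledge base and the prompt, so the model never sees the credential.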
healthcare information that shouldn't be out there. All these sorts of things are happening as organizations power their AI programs, and people are bringing it back in house, right? I worked for an organization that called it GPT, like ChatGPT, but with the organization's two-letter acronym in front, which was really interesting. So from the GRC perspective, right: governance, who makes the decisions, the policies, the controls, that's what that is, right? Risk, what could go wrong. Always ask yourself what can go wrong: threat modeling, mitigation, what's actually happening in the risk assessment process. And then from a compliance perspective, the state, right:
are we following the laws, the frameworks, are we in that framework: the ISO standards, GDPR, and now we have the Texas law that I've got to deal with, coming into effect here in September, just something to be aware of. So some of the GRC considerations: alignment to a framework, just have a framework to align to, right? Have the OWASP LLM Top 10, have those risks identified, adhered to, and spoken to. Trustworthy AI: explainability, privacy enhancement, fairness, and transparency. Third-party supply, right? Third-party risk is huge, right? So having those certifications, not just the check-the-box security, a SOC 2 report, right, a Type 2 report, because obviously that's not
going to give you the full picture of your security posture for your relationship with that particular vendor, right? Anybody heard of the Cloud Security Alliance, CSA, right? They have lots of good information, right? I'm in the working group, right, the agentic AI working group. They have many papers on AI, AI governance, AI security. Go in there. It's free. It's awesome information. Go in there and check it out. Right? Embed AI governance into your existing IT audit and risk programs. Right? That's huge. Update policies. Like I mentioned, if you don't have an AI policy, get an AI policy off the ground. Right? Have an AI charter. Anybody have an AI steering committee in their shop? Steering
committees, right? You need to have a steering committee, a group of accountable stakeholders who speak to what we're doing in AI for the organization, right? Continuous testing, evaluation, and verification: we call that TEV. It refers to a methodology that emphasizes ongoing testing, evaluation, and verification of AI systems throughout their entire life cycle, from development to deployment. Right? Training, training, training: training your audit teams on AI-driven workloads, your red teams and blue teams interacting with AI. I say audit teams because guess who's going to have to come up with the findings? The audit team, right? If these folks are not qualified to look at the work you're doing, you're going to have all these
things happening, you're going to be in screaming matches with those guys, and I've seen it, right? So if you can train those teams to be commensurate with the work you're doing on the ground operationally, it's going to be a win. And we'll get to it at the very end: there's a certification for AI auditors that's in the beta phase right now, so people are testing it, so more on that, right? And then that cross-functional collaboration between legal, in-house counsel, cyber security, data science, all those folks: it's a team sport. AI risk management: look at the risk classification, low, medium, high.
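A low/medium/high classification like that can be sketched as a simple likelihood-by-impact score. This is a minimal illustration; the 1-5 scales, the tier cutoffs, and the example use cases are all hypothetical, not from any particular framework.

```python
# Minimal sketch of a low/medium/high AI risk classification using a
# likelihood-by-impact score. The 1-5 scales and tier cutoffs are
# hypothetical; real programs calibrate these to their own risk appetite.

def classify_risk(likelihood: int, impact: int) -> str:
    """Score risk on a 1-5 x 1-5 matrix and bucket it into a tier."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical AI use cases as a risk workshop might score them:
register = {
    "internal chatbot for HR FAQs": classify_risk(2, 2),       # low
    "LLM-assisted code review": classify_risk(3, 3),           # medium
    "autonomous agent with prod access": classify_risk(4, 5),  # high
}
```

Whether you score qualitatively, quantitatively, or in a hybrid matrix like this one matters less than actually doing it consistently.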
I mean, we know that, right? We know qualitative risk assessment, quantitative risk assessment, hybrid, whatever the hell you're doing, I don't care, just do something, right? Threat modeling and scenario testing: use the AI RMF, your governance framework, for structured evaluation to kind of drive and keep those guardrails as you're building that foundation, that security posture, maturing the security enablement for the organization, right? Has anyone heard of explainable AI, XAI, right? So use XAI techniques, explainable AI. Have a human in the loop; always make sure the human is in there validating everything that's high risk, right, critical decision making. Have your logs, transparency, and decision rationale documented, right? Same thing with logs, right? Say everything is logged.
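The human-in-the-loop and decision-rationale points above can be sketched together as an approval gate: low-risk actions pass automatically, anything higher waits for a human, and every decision is recorded with its rationale. The risk tiers, example actions, and record fields here are hypothetical.

```python
# Sketch of a human-in-the-loop gate: AI-proposed actions are
# auto-approved only at low risk, anything higher waits for a human,
# and every decision is logged with its rationale for transparency.
# Tiers, actions, and field names are hypothetical.

audit_log = []

def gate(action: str, risk: str, human_reviewer=None) -> str:
    """Route an AI-proposed action based on its risk classification."""
    if risk == "low":
        decision, rationale = "auto-approved", "low-risk tier"
    elif risk in ("medium", "high"):
        if human_reviewer is None:
            decision, rationale = "blocked", "human review required"
        else:
            decision, rationale = human_reviewer(action), "human reviewed"
    else:
        decision, rationale = "blocked", "unknown risk tier"
    # Decision-rationale logging: what was asked, what was decided, why.
    audit_log.append({"action": action, "risk": risk,
                      "decision": decision, "rationale": rationale})
    return decision

low = gate("rotate test credentials", "low")
high = gate("disable a user account", "high")  # blocked: no reviewer attached
```

The point of the `audit_log` is exactly the transparency being described: every high-risk decision leaves a documented rationale behind.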
Okay, well, what does everything mean? How are the logs identified? What's most important? NetFlow logs, database logs, whatever logs you have from the security perspective. Now you've got to worry about logs for the output of AI systems as well. So you have to do a risk assessment to understand which logs are most important to the organization, right? Some of the tools and platforms that I kind of looked at that are out there: IBM Watson OpenScale, Azure AI Content Safety, and then, like I mentioned, the Google Model Card Toolkit is a great resource. They help you monitor, run fairness checks, do bias
detection. There are lots of tools out there in the environment to kind of help you, depending if you're an Azure shop, a Google shop, or any other shop that uses AI out there, right? Some of the most common AI governance challenges, like I mentioned: cost, complexity, third party, right? TPRM is always going to be that X factor, and as you've seen, for worse rather than good most times, right? That nth party, fourth party. I mean, like I mentioned, 60 to 75% of exposures come from what? Third parties, right? An nth-party relationship, something happens, a cascading downstream impact that's going to ultimately have an impact on the main organization for carrying their services
or for carrying anything that comes from the data side, right? Specialized policy is required, leading to talent, talent, talent: the talent shortage is a big risk as well. Synthesizing AI risk management with existing risk management programs requires, first, broad stakeholder agreement. I mean, if GRC wants to incorporate AI into the risk profile, they need to link that work to the teams handling enterprise risk management, because if there's a disconnect between enterprise risk management and GRC on the cyber security side, everyone's going to be doing their own thing, and that's also a problem, right? You're going to have bad things happen and not good reporting, and everyone's going to be doing different things.
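The point about linking AI risk into enterprise risk management can be sketched as folding AI-specific entries into the same register schema ERM already uses, so reporting rolls up in one place instead of forking into a silo. All entries and field names here are hypothetical.

```python
# Sketch of synthesizing AI risk with an existing ERM register: AI risks
# reuse the same record schema and roll up into the same reporting,
# rather than living in a separate GRC silo. All entries are hypothetical.

def make_risk(name: str, category: str, tier: str, owner: str) -> dict:
    """One register entry; the same shape serves ERM and AI risks."""
    return {"name": name, "category": category, "tier": tier, "owner": owner}

enterprise_register = [
    make_risk("data center outage", "operational", "high", "ERM"),
    make_risk("vendor concentration", "third-party", "medium", "ERM"),
]

ai_risks = [
    make_risk("training data poisoning", "ai", "high", "CISO"),
    make_risk("LLM data leakage", "ai", "medium", "CISO"),
]

# One register, one report: AI risks merge in instead of forking off.
enterprise_register.extend(ai_risks)
high_tier = [r["name"] for r in enterprise_register if r["tier"] == "high"]
```

Because both sides share one schema, a single query over the register now surfaces the AI risks alongside the traditional enterprise ones, which is exactly the reporting alignment being argued for.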
Data privacy challenges will constantly be there, putting pressure on systems and increasing the connection between tech debt and compliance debt. You need to have that understanding to reduce the tech debt as much as you can. A case study, right? This is interesting: the Microsoft AI security incident that happened in 2023, where a Microsoft AI-based tool was misused in state-sponsored cyber espionage. Crazy, right? And the root cause analysis from that assessment was what? Human and machine failures combined: governance, right? That's basically the root cause, right? It wasn't the tech, it wasn't AI, it was that they had bad oversight on how they procured their model, right? That's what
led to cyber espionage in that particular case study. So once again, the onus is on us to work as a governance function and security teams together, to avoid becoming that case study, to avoid having those findings as a root cause of a breach, an incident, something bad. AI governance enhances cyber security. AI can create security risks, like I mentioned: data poisoning and adversarial attacks. Governance sets AI safety designs and usage policies. Risk processes identify exploitation points. Compliance ensures transparency. So you have your business information security officers in their shop, right? The BISOs are going to be doing their thing. The BISO talks to the teams from
cyber security. They look at the threat investigations, the threat detections. They look at the security response team. They work with the security incident responders, right? DFIR, all that good stuff. Feedback and reporting: a never-ending cycle, a journey, right? So having somebody in the business to level-set the expectation and move it forward to the cyber security teams, and continuously do the work. Continuously do the work, because if you have a failure in one of these areas, there's going to be an overarching systemic failure at that point, right? So keep that in mind. Some of the governance challenges inside security: black-box AI decision making. You know, don't have everything
in black boxes that are making decisions the organization has to trust. Adversarial AI attacks are evolving, they're changing, they're happening, right? Compliance patchwork, right? I mean, I was here last year, and here we go again a year later: we still don't have a US-based privacy law overarching for everyone, right? So it's a patchwork of laws, a patchwork of requirements. Just be tuned to what you need and what is applicable to you, and unfortunately that's still considered a challenge, right? More gaps, right? No global framework, like I mentioned, uneven enforcement levels, bias in your training models, unregulated offensive security tools, lack of auditors,
lack of documentation, lack of good GRC practices. From a recommendation standpoint, from a governance perspective: implement the frameworks early, guys. Have a policy, have a steering committee, have a charter. You've got to start there, right? That's where you start. Then combine that with the red team evaluations and the blue team evaluations, and then integrate that human oversight at every phase, at every stage, and that's where you're going to be best prepared. I'm not going to say 100%, because nobody's 100%. If they tell you it's 100%, they're lying to you, right? That's why, if you have a risk, you add the controls to it and you have a residual risk, but that residual
risk is within the acceptable levels for that organization as they're adopting and enabling AI. AI governance is a team sport. Yes, it is, just like cyber security, right? Governance: responsible AI use, defining those boundaries, aligning the security objectives, sharing those rules across teams, critical for cyber defense, right? It impacts the security teams directly, bridges that divide between the tech teams, the policy teams, and the GRC folks, encourages data sharing, enables secure AI deployments, standardizes threats and responses, and all of that brings it together, guys. So everyone has skin in the game, like I mentioned at the very beginning. Make sure you're doubling down on that here. So understand that everyone needs to have a seat
at the table, and understand that it's really important to get these things done. The future of security and AI governance, right? Global regulation, executive orders, AI and enhanced automation. I think everyone's talking about the SOC right now: EDR, XDR, incorporate AI into your SOC. That's where it starts. But, like I mentioned, it has to be driven by the business. AI is in a lot of boardrooms now; when they hear AI, they're salivating, they're animated. I mean, I've talked to senior leaders, and they have a really good interest in AI. They're interested in what's actually happening. And it goes back to the last point: it needs to be a business-core, culture-driven item in that organization, right? That's true. Some homework, I have some homework for you guys, right? Look at that. Pilot an AI-augmented offensive security use case. Just get something off the ground, right? Select a low-risk target, internal or staging, right? Don't test against production, right? Integrate an LLM agent for reconnaissance and scanning, right? Measure the time saved, the coverage increase, the accuracy improvements over manual scanning and reporting, right? Develop a responsible AI framework. Like I mentioned, have an AI policy, have an AI charter, develop that, work with your teams and your leadership team. Map existing offensive security policies to the NIST AI RMF. That's a whole project
in itself, right, along with the GRC and OWASP governance requirements. Create an AI-specific security playbook detailing approved tools, scopes, approval workflows, and audit logging requirements. Upskill and structure oversight, right: train penetration testing teams and red teams on LLM usage, prompt engineering, and AI risk indicators. You're training your security folks, you're training your teams, right? Establish an AI oversight committee, like I mentioned, combining infosec, legal, and audit to review autonomous agents and approve the rules of engagement and what applies. Embed continuous monitoring and audit at all times. Instrument your AI tool chains to capture command invocations, LLM responses, and decision-rationale or explainability logs. Integrate these into your Splunk, your QRadar, whatever you're using as your SIEM. Incorporate
that for real-time anomaly detection and post-test audit reviews, after-action reviews, AARs, whatever you're doing from a security perspective, as you're identifying potential incidents that weren't actually incidents. You do have to have AARs to identify that, and have those documented. Integrate DevSecOps. Shift left. Anyone heard of shift left? I think everyone's heard shift left a lot here in the last few years. Pilot AI-driven vulnerability scanning earlier in the CI/CD pipeline, feeding results into your existing programs and existing infrastructure, right? Use AI-generated remediation guidance to accelerate patching life cycles. That's good to help enable vulnerability remediation, vulnerability
outputs and throughputs. So to recap, guys: AI governance is the system and processes that allow AI systems to be developed, deployed, and run on ethical principles, transparency principles, and accountability principles, in line with legal requirements and societal standards. Take facial recognition as an example. Ethics might ask whether it should be used at all. Responsible AI tries to ensure it's implemented fairly and transparently. AI safety examines how the system could be exploited or fail. Governance sets those boundaries: who approves it, how it's reviewed, and what safeguards are in place. Without governance, the rest is just theory, and theory doesn't hold up under scrutiny. True, right? I think you all would agree with that, right? So it's
important. So, in conclusion, to kind of put a bow on this presentation: demand for AI policy is growing, so make sure you have an AI policy in your shop. Security teams are adopting AI faster than ever. Governance must evolve with that changing of the guard with AI. Cross-team AI governance and training is needed. The future of security is going to depend on it. AI governance is essential for building secure, trustworthy, and ethical AI systems. It requires multidisciplinary cooperation, continuous evolution, a one-team, one-effort approach. Everyone has to get in the game. Your role as AI professionals is to champion the culture of responsibility. Think about creating AI champions. Who has a
security champions program in their shop, or a security maven program? I mean, if you do, incorporate that with an AI maven program or an AI security champions program, right, to help with the efforts, the cross-collaboration, to have everyone in the mix and identified and informed. AI systems amplify existing security and privacy risks and introduce unique governance demands. A structured, risk-based maturity journey grounded in proven frameworks enables organizations to safely accelerate AI adoption and prepare for evolving regulations. So, last but not least, this became a whole job family in itself. Look at that: a posting for a VP of AI Governance. Surprised, right? OneTrust had a
Privacy and AI Governance role. Remember we talked about privacy and AI governance: if you have a privacy governance program, a privacy program, they usually kind of onboard and bring AI governance in with their teams. Another posting is looking for an AI Policy Manager, right? More and more onus on jobs and career fields in AI governance, whether you're in cyber security education, cyber security training, or you're cyber security operators: it's building and it's growing. That's one, right? And then from a certification perspective, any privacy-certified folks in the house, IAPP, right? Well, we have the Certified AI Governance Professional, the AIGP, right? And then from the security side, any certification folks in the house? Right.
We have the ISACA Advanced in AI Security Management. They're going through the beta phase. I'm studying for that and hopefully I'll be able to take the test here in 28 days, and hopefully pass with the first iteration of the beta test program. So that's bringing security and AI together, right? I think you have to have an existing certification to be able to apply for that. So it's there, guys. The governance need is there, and the fact that the certifications are evolving around it is definitely proof of that. Which, I think, ultimately, is everything that I talked about: it brings AI governance forward as something to be dealt with, to be
assessed, to be initialized, to help drive your AI programs in your organizations. And with that, that's my LinkedIn; feel free to connect with me on LinkedIn. Thank you, and we'll leave it open for some questions. Thank you, appreciate it.
Any questions?