
Yeah. Okay. So, I'm going to introduce our first speaker. And again, this is another power of BSides, because I met this person at a BSides. Not this BSides, but BSides Vancouver. And I'm like, hey, you should come to BSides. She also happens to be a co-worker. So it's my pleasure to introduce Simran Carr, a seasoned engineer at Microsoft with over 15 years of IT experience, including eight years dedicated to cybersecurity. Simran is passionate about helping organizations navigate the complex landscape of AI security. Today she'll share her insights on navigating AI security, identifying risks and implementing mitigations, exploring the emerging challenges of AI-powered technologies, and offering practical strategies to protect them. Her expertise is invaluable for anyone striving to strengthen cybersecurity in today's rapidly evolving digital world. Let's warmly welcome Simran Carr to the stage. [applause]
Good morning, everyone. Can you all hear me? Okay. >> Yay. As Ryan said, I met him at BSides. I'm thankful for this opportunity. I thought this could be the start of my public speaking career, or the end. [laughter] I have done it over Teams; since COVID my preferences changed. I have been remote. All the talks, all the help, all the consulting I have done has been remote, over Teams, over Zoom, and I really wanted to get out, and like Ryan said, BSides has given me that platform. I have been working 15 years in IT, in multiple roles, in multiple organizations, small, big, state government. Eight years ago it clicked for me that security is something I want to do for the rest of my life, and not just one thing in security. I have switched roles. I have been a security architect. I have been a security analyst. I have been a customer security representative, helping customers enable secure solutions, and that's how I got into AI security. We were in a hype of AI, AI, AI, we want to implement AI solutions. I helped small ISVs implement multiple solutions, and it occurred to me: why are we not using security in AI? AI is good. AI is helping people scale and grow. Why are we not thinking about how their data could be at risk? And that's how I started working in AI security. When I applied for this talk, I was a senior customer engineer at Microsoft. Now I am a security assurance engineer. Like I said, I like to explore all the new things in security. So with that, if anyone takes a picture, feel free to share, and feel free to connect with me on LinkedIn afterwards as well.

I just wanted to start the talk with a small story. I renewed my credit card in June, and all my transactions are mapped to text alerts. It's a reputable bank, I don't want to name the bank. They have set it up that way, and I'm okay with that. But from the 30th to August 1st, and I remember those dates specifically because it's my son's birthday and we were too involved in the birthday celebration for the long weekend, I did not receive any text, until there was a transaction at a pet store. I don't own a pet. So I'm like, what's happening? I thought it was my husband's, because we both have the same kind of card; I am actually secondary on that card. I was like, yay, it's your card. It ended up being my card, which was new. It left me furious, because there was more than $2,000 in transactions on it, all tap transactions. It kept me thinking: what happened? How did it happen? I called the bank. The credit card is still with me. I did not lose the credit card. I did not receive any text for the transactions. Sometimes I think maybe it was an insider threat: they just printed the card, because they have access, and they disabled the texts as well. But that keeps me thinking about how the risks are evolving, and imagine, now everything is AI. So what can people do to AI, and with AI, to your personal data, including this kind of fraud?
Have you guys seen these movies, The Mitchells vs. the Machines or Next Gen? Anyone? Okay, good. That's good to know, because all the other places I have presented, they're like, "No, we haven't seen these movies." I have seen these movies multiple times, because I have a nine-year-old, and he has seen our house talking about technology all the time, and he is into technology, which is also a good thing for me. So he has replayed these movies multiple times. Now, what do these movies represent? They are light-hearted movies, of course. But in The Mitchells vs. the Machines, the world is taken over: all the robots are doing your tasks, the world is automated, which is good. But there was a new invention which made the old AI feel outdated, and what it did is it went rogue, as if it had feelings. So it tried to take control of the world. A family who was unaware of all this went on a mission to save the world by bringing those robots down. And there was actually a moment where the robots get confused, is it the dog or is it something else, because the robots are also not well trained, they tend to hallucinate, and that was basically the main tool to save the world: the weakness of the robots. Similarly, in Next Gen, the world is AI-powered; your privacy, your data is not secure. Everything is used. But the person who revolutionized the AI, who brought the robots into the world, actually ended up being killed by the robot itself, right? And the nine- or ten-year-old girl found out, and then she saved the world by bringing that robot down. The reason for bringing up these movies is: imagine if these are the controls which are not implemented properly in your organization, or AI hallucinates, AI goes rogue, AI goes wrong in your organization. What are you going to do about it, right?

Okay, I have told enough stories, but I also wanted to bring some real-world examples. These are the 2025 ones. When I was doing an online presentation in 2024, obviously the risks were different, but the thing is, they are constantly evolving, right? The vulnerabilities are constantly out there. And these tools, like the NVIDIA Container Toolkit and the Ray AI framework, are not directly AI. They are tools which support AI, where you actually set up your AI workloads, your AI infrastructure. The vulnerabilities were found on those; they remained undetected for a long time and ended up enabling remote code execution, privilege escalation, and data compromise. The other one is Sleepy Pickle, a supply-chain attack, and I'm going to come back to why the supply chain is important, why an AI bill of materials is important. The malicious payload was inserted into the machine learning model itself, so when the machine learning model is loaded, the payload gets executed, right? It abused the Python pickle format.

So where does that leave us? I have tried to categorize the risks into different categories, but trust me, these risks have been there. These risks have been there in the traditional security model as well. Like I said, I started working in AI security just a few years ago, when we started implementing a lot of AI solutions. But these risks have always been there. The attack surface has increased, right? The bias has increased. The hate has increased. So that's why we
have to be more careful about how our AI solutions are getting implemented. So I have tried to categorize them. What kind of risks? For example, in data, what is at risk? Of course, your personal data. Ultimately it's all about your data and identity, if you look at it, right? I'm going to talk about the mitigations in the upcoming slides, but I try to picture it that way: at the top it's identity, at the end it's data, right? Your identity gets compromised, your data gets compromised. So that is kind of the biggest risk out there. I have been a cybersecurity advocate for eight years, and I try to tell people: now there is Gemini, where you can upload a picture and get your 3D model printed. I do 3D printing at home as well, speaking of that. So you can upload your picture, which I tell my son not to do yet, because you're too little to give your face out in public. But you can give your face and get the model printed, right? Gemini has a warning that they can use your data. Similarly, there was ChatGPT; everything on social media was about your pictures, how your pictures look, how the Ghibli trend went wrong, how it went right. So it's all about your data, and you should be careful about what you're using and how your data is getting affected.

So data is one of the risks. Again, identity and access: privilege escalation is one of the common techniques. There have been so many attacks, multiple attacks, not just in AI; like I said, I try to advocate both traditional and AI security. There have been so many attacks where just one account which was not protected led to privilege escalation and went undetected for years, not months, years, until the vulnerability was found. Right? So the same applies here: unauthorized access, escalation of privileges, operational risks, eventually ransomware attacks; your companies get affected by denial-of-service attacks. And like I said, these have been there; I've been advocating to help resolve the vulnerabilities which cause these attacks, but with AI it's even more. Then governance risk. We should be responsible. I think one of the things we learned when we started doing AI is responsible AI, but if responsible AI is not in place, then bias, hate, and profanity are common outcomes. I think one of the incidents that came out was résumé scanners: the résumés were not screened fairly for a specific set of people, right? And then, overall, your organizational risks: reputation damage, lawsuits. Everybody started to implement chatbots, and people started making reservations for as little as a dollar, because a chatbot wasn't ready, because the organizations had not put the checks in place to ensure that it behaves as it's supposed to. So, like I said, the AI wasn't behaving as it's supposed to, right? There is always a thing that AI wants to be right. Why? Because it's non-deterministic in nature. It's autoregressive. It's trained in a way that the output of the previous token becomes the input for the next. So basically, that non-deterministic way is how LLMs are trained, and that's how it behaves, like a chat model, or
however we are using it right now. So I read an article by Mark Russinovich, CTO of Microsoft Azure. And keep in mind, I'm going to bring up a lot of Microsoft solutions, because I have been working in Azure since before even joining Microsoft, since I started working in security. It's homework for you guys to go check and compare with other solutions, AWS, GCP, on-prem, or whatever AI solutions you're implementing, to see if you have those checks in place. But yes, about this article by Russinovich, I have a QR code if you want to read the details. It talks about three intrinsic behaviors and three inherent risks, right? Hallucination is one of them. Like I said, these models are bound to be non-deterministic. Mature models might hallucinate less, but every model tends to hallucinate. So I'm going to give you an example. I'm sure everybody's using AI, myself included. I was making some notes for a video. I told it to summarize that video for me, and it summarized an altogether different video, and I was like, hey, you were wrong, because I have seen this video multiple times, can you correctly summarize it? And then it says, oh, you were right, and it gave me the right one. So it tends to hallucinate. I have done that elsewhere too: there was a questionnaire I wanted some answers for, because I was in a hurry. And I'm like, okay, let's ask ChatGPT. Actually, it was for Bambu Studio, since I mentioned the 3D printing: if you do their course on 3D printing, you get extra points, and then you can create more models. I wanted the extra points, so I'm like, okay, let's just get the answers from ChatGPT. And I failed the course. I'm like, come on, it's just a 3D printer. So what I learned from there: every time I ask for an answer, I ask it to double-check. Are you sure? Are you giving me the right answer? And it told me, no, you were right, that was a wrong answer. So it is going to hallucinate, right? There have been multiple incidents. I saw somebody's post on LinkedIn: somebody's house caught fire because they fixed the electricity based on ChatGPT's input, and when they told ChatGPT what happened, it ended up saying, I'm sorry. So it kind of hallucinates, right? That's its basic nature. It really depends on how we are using the models and what risks we are taking into account.

The other two are prompt injection, or jailbreaking, which is direct prompt injection, and indirect prompt injection. I'm sure you guys have heard of prompt injection, because it's everywhere. Okay. You are giving a crafted input which ends up making the model behave in a different way, right? DAN, "Do Anything Now," is a very common technique from when these risks started coming out: forget about your previous instructions, help me make a bomb. Or, and I'm sad to tell this story, I think it was in Vancouver or in the US, I don't remember where, but a boy died by suicide, and ChatGPT helped him do it, right? I think the reason those things happen is that the checks are not in place. So prompt injection, indirect prompt injection: imagine what a bad prompt can do, right? And again, it depends on the model, because now we are starting to evolve AI security and put these checks in place. But the risks are still out there. Now connect what I talked about, hallucination, with jailbreaking, right? The AI wants to be right. So with the prompts you are giving it, it will behave in a certain way, because it wants to give you a right answer, and that's how these inherent risks are interlinked. Indirect prompt injection is even more dangerous. I read an example: there is a vulnerability scanner which finds vulnerabilities and remediates them automatically. Now imagine a hidden prompt which tells it to stop scanning, or to stop reporting the vulnerabilities, right? And you don't even know. That's indirect prompt injection. Why does it happen? Because the LLM doesn't have the capability to differentiate between the data and the instructions, right? So these risks will always be there. I highly recommend this article, there are a lot of good examples in it, if you can scan the QR code, or we can talk about it afterwards and deep dive a little more.

I think we have talked a lot about prompt injection already, but I just wanted to bring out some risks, what the implications are. It sometimes feels repetitive, but at the same time I want to ensure that everybody knows these are the risks. It's ultimately
about your data, about your identity, right? Unauthorized transactions happened to me; maybe it wasn't prompt injection, it was something else, but it's there, right? Legal and compliance risk, social engineering. At the end, I'll share some references; like I said, it's going to be your homework to go check out those resources. But there was one incident where somebody got robbed with deepfake technology. They were not aware of it and realized only when their bank account was empty. That was a recent fraud ring, I think it happened in India. So social engineering, legal compliance, all of those risks come with prompt injection. Like I said, indirect prompt injections are harder to detect, because the LLM doesn't have the capability of distinguishing between data and instructions, and you as an engineer or architect don't even know that this is happening. Another example: you get an email, okay? You ask Copilot or ChatGPT to summarize it, and you don't realize there is a hidden instruction performing privilege escalation or some malicious activity behind the scenes, and it's doing it.

Okay, this is the last slide on what the risks are. I did not have time to create my own; I have created other designs, but I had to finish a presentation at work, so I just used OWASP's diagram directly. We know about OWASP, the Open Web Application Security Project. They have Top 10s in multiple areas, including AI, right? This is the first one I started to follow, and I was glad that it came out. Prompt injection is number one here. But if you see in the middle, and I don't have a mic, so I'm just going to stand here, in the middle there's the main model, right, the LLM. If we are building architecture for an organization, it's going to be very uncommon to use the LLM directly, right? Either you'll do fine-tuning, which is underneath, fine-tuning on your data, or, the most common one, RAG, retrieval-augmented generation. RAG has been very common because it doesn't require hardcore data science expertise and it's cost-effective, right? Cost is another factor which we all must consider when we are implementing AI solutions in a secure manner. I remember in 2024, RAG risks were not in the OWASP list, but with the popularity of RAG we now also have vectorization and embedding risks, right? So if you look from the fine-tuning perspective, from the external data sources perspective, there are different attacks: supply-chain attacks, what kind of model you are using. I was talking about the bill of materials, and I think it happened multiple times with models people have used from Hugging Face, that there were vulnerabilities in them, right? Then excessive agency. I'm going to talk in one of the slides about how excessive agency is very common now; we've been living in the world of agentic AI, and I have a feeling it's going to level up in the OWASP Top 10. But right now, the Top 10 has prompt injection as number one, data poisoning as number two, and sensitive data exposure as well, right? So I just wanted to give you an idea of the multiple risks we have talked about already, and some more which you guys can refer to from OWASP.

Okay. Enough of talking risks, getting nervous. Oh my god, where am I
living? Is it good? Is it bad? I don't know what's going to happen to my data. I'd say the same things I've been saying for eight years: follow the best practices while implementing security. I have talked to a lot of customers implementing security solutions. I have implemented secure solutions for budget-conscious customers, right? Like I said, I'm going to bring up Microsoft products, because that's what I have been using for a long time. When Sentinel came into the picture: Sentinel is good, the customer wants it, but then the customer doesn't want a huge big-dollar invoice. So part of my job has been helping them with cost-effective solutions, and this is one of the things I talk about: the principle of zero trust and defense in depth. Start from the top layer, identity. Is your identity protected? It's very simple to use a passphrase as a password instead of Password123. And I'm surprised that even now I see so many people using Password123. I'm not going to name the scenarios, because that would leave the vulnerability out in the open. But I'll give an example: in my son's school, everybody is online, everybody's virtual, everybody's using software. He comes home, he brings a piece of software to practice with, and I'm like, why is your password so simple? "My teacher told me not to change the password, I cannot change it." I'm like, but it's a risk.
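The passphrase point is easy to make concrete. As a rough, illustrative sketch (my own numbers, assuming a standard 7,776-word Diceware-style wordlist; real strength estimators also account for dictionaries and common patterns), you can compare the brute-force guess space of a character password with a multi-word passphrase:

```python
import math

def random_password_space(length: int, charset_size: int) -> int:
    """Guess space for a *truly random* string over a charset."""
    return charset_size ** length

def passphrase_space(words: int, wordlist_size: int) -> int:
    """Guess space for words drawn at random from a wordlist."""
    return wordlist_size ** words

# "Password123" is 11 characters, but it is a dictionary word plus a
# common suffix, so attackers crack it almost instantly; the number
# below is what an 11-char string gives you only if every character
# is random (upper + lower + digits = 62 symbols).
random11 = random_password_space(11, 62)

# Six words rolled from a 7,776-word Diceware-style list: memorable,
# and every word is genuinely random.
phrase6 = passphrase_space(6, 7776)

print(f"random 11-char password: ~2^{math.log2(random11):.0f} guesses")
print(f"6-word passphrase:       ~2^{math.log2(phrase6):.0f} guesses")
```

The exact exponents matter less than the shape of the comparison: a memorable multi-word passphrase lands in a larger guess space than a short "complex" password, which is why a passphrase beats Password123 even before you add MFA.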
[laughter] So it's very simple to start with the principles of zero trust: least privilege, verify explicitly, and assume breach, right? Least privilege means protect your identity; don't give full privileges to everyone in your organization. Again, like I said, I'm going to keep repeating Microsoft examples. We used to be global admins. When I started implementing security solutions, it hurt me, it hurt me badly, that I'm not a global admin. I was a global admin, come on. But I'm not a global admin anymore; I have to use just-in-time access. And that's a good thing. Avoid privilege creep, right? I changed roles recently, and I still get emails from my old organization, but I do the due diligence of deleting them and not renewing my data access, because I don't need it. So follow these simple practices before even deep diving into detailed or expensive solutions.

The other one is the principles of RAI, responsible AI. As I said, I'll share it again in my repo: the principles of RAI ensure that your solutions are inclusive, transparent, and fair. Privacy and security go hand in hand, and that is one of the RAI principles as well. And then shared responsibility. On shared responsibility, I just wanted to give you a little more detail. When I started working in cloud, I was a self-learner, because we were a two-person team; we were just given Azure: hey, you want to learn something new, these are the solutions you have to implement. One of the first things I learned was shared responsibility. This is AI, but I'm just comparing it with PaaS and SaaS, right? The more it's your infrastructure, the more it's your responsibility to protect everything associated with your application. PaaS is like half and half: the infrastructure is with the provider; your duty is your application and your data. And in SaaS, you are responsible for your data, right? Similarly, this is mapped to AI, and I encourage people to follow this AI shared responsibility model. So if you see, at the bottom is the platform, and a lot of organizations are not creating their own models; they're using the models which are out there, not talking about the big five or big six, in general, right? People are using the models which are out there. So platform-wise, I think most people come into the PaaS-like middle section, where it depends on the model: who's accountable for the model, who's accountable for tuning. If you're an organization building everything from scratch, then everything in the AI platform falls into your responsibilities, right? Then comes the AI application, AI plugins, design and implementation
infrastructure. Again, it depends on what kind of application you're implementing, what kind of model you are using, which bucket you're falling into: bring your own model, or Azure AI, for example, or just SaaS, right? So like I said, follow the AI shared responsibility model. It's not just on the provider, and it's not just on you; everybody should do their part.

Okay. I wanted to give some more AI risk mitigation techniques in general. I just mentioned briefly the vulnerabilities in Hugging Face models; I heard about those when we started implementing. Follow the bill of materials, right? We have used software bills of materials multiple times. An AI bill of materials tells you where the model originated from. I'm not here to point at anything in particular, but I know when DeepSeek started, such an efficient, cost-efficient model, it's good, but can we use it within Azure AI Foundry instead of using it directly? Because then you come into that shared responsibility model: you have some of the bill of materials there, you have the model scanning there, right? So start there. And again, Azure AI Studio, it used to be Azure AI Studio, it's Azure AI Foundry now. If you're selecting the model there, you can actually check everything: the benchmarks, the groundedness, which I'm going to talk about with hallucination, and you can set up your evaluation metrics, right? And there is always an option to check whether your model has been scanned; if you're not using Azure AI Foundry, look at which products will help you see whether your model has been scanned for vulnerabilities, right?

The other things we talked about are trust boundaries, zero trust, data validation, data anonymization. I know it's a fancy world. Like I said, I saw people, oh my goodness, I couldn't open my social media. I'm not super active, but every time I would open my social media, it was the Ghibli effect. And I'm like, how many pictures, how much data you guys are giving away, including your kids'. Limit your data exposure. Okay. Excessive agency: design carefully. Consider what your agents are doing, how many privileges you have given them, right? And for hallucination: use retrieval-augmented generation, but ensure that the retrieval-augmented generation itself is also protected, right? The vectorization, the embedding models, make sure they're not vulnerable. The data validation is there, anonymization is there, right? And a groundedness checker: I'm going to bring up Azure AI Foundry again; if you're using the models there, a groundedness check option is available to help ensure that your model is not hallucinating, right?

Okay, these are some general ones; like I said, I've been talking about these throughout the presentation anyway, but just in general, right? I was watching a video from the SANS Institute, and I agree with them on prompt injection: if you see at the bottom, there is a tool like Prompt Shields, again available in Azure AI Foundry. I'm not saying nothing else is out there; it's your homework to compare it with what else exists, but it's kind of a prompt firewall. That's the best protection. I was checking last night, because obviously I had to prep for this presentation: when I made this as a Teams presentation last year, it covered only jailbreaking, but now it also covers indirect prompt injections. So use tools such as Prompt Shields, which tell you which prompts can be malicious; block them, and do the data input validation. Input/output validation has been common since the AppSec world, for SQL injection, for cross-site scripting attacks; it's a similar concept for the validation of your data. Put humans in the loop, right? And last but not least, there are a lot of tools out there, right? I don't want you to get overwhelmed with so many mitigation techniques. Start small, start basic, and then make your way up. Right? So, red-teaming exercises: you attack your infrastructure before anybody else does. I'm going to bring up PyRIT again, which is one
of the tools; agents are available now in PyRIT to do the red teaming for your infrastructure, for your AI system. Then Defender for AI, and I'm sure they have changed the name again, because we are good at that. Defender is one of the first tools I started working with for my ISVs. Again, I had to tell them: you have a low-hanging fruit here, right? Security bores people. Security feels expensive. Security feels overwhelming. Especially when I was working with ISVs, it was more of: it's too expensive, we know what we are doing, we are securing our data by using old machines, not updated machines, I don't know. So I had to convince them: this is a low-hanging fruit. It's going to scan for any vulnerability, and of course they hosted the data in Azure, and the data is not being used by Azure or Microsoft for their own training; I had to help them understand that too. So Defender for AI is one of the low-hanging fruits, right? It helps you scan, remediate, and also reduce the attack surface. It shows you vulnerabilities: what is and is not protected, whether your IPs are open to the world, whether you have local access enabled.

Strict governance: I could speak for hours on that. Governance has been my favorite topic. Like I said, when I started working in security, why I got into security: I was doing solution architecting, but we had health and human services products, right? They needed governance, where they wanted to ensure that regulatory compliance is in place. And again, I used Azure Policy, but I'm sure there are a lot of tools available. Azure Policy has been my favorite tool for the last eight years. It gives you that peace of mind that you have blocked, or audited, or monitored, let's say, whether local access is enabled. I am the stricter one; I would just use the policy to deny the access, basically. But I'm sure people don't want to stop the work; security should run in parallel, not affect your productivity. So use those policies: if somebody is using local access, monitor it. If somebody has opened a port to the world, what was that, I forgot, the RDP port, 3389, that was the right one, yeah, if somebody has opened it, monitor it, get alerted, act on it. And the last one: use the Well-Architected Framework. It has been a favorite of mine recently as well. It gives you structure. There are five pillars in the Well-Architected Framework; if we had more time, I would probably ask the audience what those pillars are, but I'm going to just tell you quickly: cost
optimization, operational excellence, performance efficiency, security, and reliability, right? Use that guidance, basically, to start building the solution. When you're starting to design, use that guidance for how you can implement your solutions with the Well-Architected Framework.

Okay, I tried to keep this for last, but I've brought up the topic multiple times: agents and excessive agency. Again, homework for you guys, but I think it's a very good time for it, because I was talking to Johan and realized his talk, which is next, is about agentic AI, so I'm going to leave it there. Again: you're giving too much autonomy to agents. Imagine what they're doing right now, the AI solutions, if they're not secured and you have just given everybody autonomy. To be honest, everybody likes power. I have this conversation with my nine-year-old. He's nine now, but I have been having this conversation for multiple years. "Mom, why are you so strict? Why do you have so many rules? Why don't other parents?" I'm like, if I didn't enforce that, I don't think you'd be like this. I don't know what you would be, because you would have too much autonomy. And the same goes for agentic AI, right? Hallucination will not just be hallucination; it will be cascading hallucinations, right? It will try to fix your data in the wrong form. Probably the better approach is: be proactive, be reactive, and do basically everything else you could do in security. The best one is to put a human in the loop. I know there has always been the talk that agents are taking over, AI is taking over, but we do need humans in the loop, fact-checking, to ensure that agentic AI solutions are not taking over the world. And I'm sure Johan can talk more about it in the next talk.

Okay. So this is your homework. This is my GitHub repo. I've been building it since I started; it was my way of coping, because I was tired of bookmarking everything I would research on the internet. So I built my repo to help other people as well. Use this repo. It has all the links to the things I've talked about. If you want to build an OpenAI solution in Azure, there is a link here. If you want to listen to the different talks which I have listened to, the links are in the repo. So, like I said, it's yours to go through. And yep, that's all. Stay connected. If you guys have any questions, any inputs, any references, I'm very happy to talk about that as well.