
Shadow AI and the Silent Breach: Securing the Invisible by Rameez Ali

BSides Edmonton 2025 · 41:49 · Published 2025-10
About this talk
BSides Edmonton 2025. This video was captured using a locked-down, unmanned camera. As a result, there may be moments when speakers are not fully in the camera shot. Additionally, the audio quality captured by the podium microphone depends on the speaker's proximity to the mic, so variations in audio clarity may occur if the speaker moves away from the microphone during their presentation. We appreciate your understanding of these technical aspects.

As enterprise adoption of AI accelerates, so does the quiet rise of Shadow AI: unsanctioned, unmanaged, and often unknown machine learning models or tools operating outside official oversight. Much like Shadow IT, Shadow AI introduces a complex layer of hidden risks. From data leakage and compliance violations to biased models and insecure code, the consequences can be severe.
Transcript [en]

[music]

Sorry about that. For those I haven't had a chance to connect with: I've been in the field of information security for 15 years, in different roles including security operations, penetration testing, cloud security review, and security architecture. You name it, except for OT security; that's the one field I don't have any knowledge about. Yesterday there was a presentation from Prashant (I don't know if he's in the room), and that was an impressive presentation. Since the hype began in November 2022 with ChatGPT, it caught my attention too, and I started my journey of learning AI: how it works, what security risks are

associated with AI systems, and what you can do to prevent them. So today my topic is about one aspect of AI, which is shadow AI. But before I dive into shadow AI, I would like to briefly touch on shadow IT. Most of you have already heard about shadow IT, but I still want to go through its definition, just so we understand the risks associated with shadow IT and, from there, what's going on with shadow AI. It is described as unauthorized hardware, software, or cloud services used by employees or departments within an organization without formal review

or approval by the IT department. If you plug a USB stick into your work computer, that would be an example of shadow IT. If you're using Dropbox to copy work-related data so you can work on those files from home, that would also be an example of shadow IT. Now, for those who don't know about Okta's incident in 2023, that's a perfect example of shadow IT risk, because the cyber criminals were able to get access to HTTP archive (HAR) files, which you normally provide to a technical support team so they can troubleshoot whether the problem is on the

customer's network or on their side. So the attackers were able to get access to Okta's customer support system, and they were able to get HAR files containing sensitive information like session tokens and cookies, which they could leverage to impersonate users. When Okta conducted their investigation, they found out that one of their employees used a Google account to check his email on his work computer. Now, there is no harm in that by itself. Every organization has a different policy and a different approach to it; I've seen many organizations that allow you to check

your email on a work computer. But what happened here is that the Okta employee signed into his Google account, and a little after that he signed into Okta's customer support system. When he entered the credentials, they got synchronized to his Google account, and all the defenses that Okta had put in to protect its systems and network became useless, because the attacker only had to compromise the user's personal account. That's how they got into the customer support system. Now, if I go back before the pandemic, this was a very common scenario. You must have heard these stories as well, of employees

emailing work-related files to their personal inbox so they could work from home, because remote connectivity wasn't an option at that time, or if remote work was available, it was quickly limited to a few executives. When employees use a personal email to send work-related files, that's considered an example of shadow IT. And if I take my own example: I have a child who is two and a half years old, and

his mom has set a fixed time for screen time. He can only watch videos during dinner, because that makes it easy to feed him and finish dinner early. One day he came back from daycare, and I was resting on the couch in the living room. He might have asked his mom for screen time, and the obvious answer was no. So he came to me with his cute little babble and asked for my phone, and how could I deny that? I said, yes, you can have my phone. After some time, when my wife arrived in the living room, she saw what was going on, and guess who faced the consequences.

So if a child can think up this workaround, look at his creativity, if he can figure out a way around the rules, then imagine what an adult can do at work. Now you could argue that you cannot compare a two-and-a-half-year-old child with a grown adult who knows what's good and what's bad. But

here's a question somebody asked in an online forum: what was the worst case of shadow IT, or the biggest problem caused by shadow IT, you have encountered during your work? There were a number of responses, but I have picked out some interesting ones. "Had a developer build a bunch of cloud infrastructure in their personal Azure account, which billed to their corporate credit card. Developer leaves, credit card is cancelled, and 30 days later so is the infrastructure." The next example: "In the earlier days of Microsoft's cloud offerings, a user took it upon herself to set up her own tenant under the company domain name, then left the company. That was a pain to get fixed." Now, the employee didn't mean any harm

there, but what she didn't know is that the day she leaves the organization, or moves to another role, that could be a problem. The next one is my personal favorite: "A public-facing website sitting under an employee's desk with a public IP address." Wow. "A group working in the lab decided it was easier to spin up a local Windows domain than follow corporate standards and services. The domain controller was compromised via BlueKeep, and attackers could use the domain credentials to move laterally." "Years ago, an acquaintance who worked at a medical facility said that she and her colleagues shared patient records via Dropbox," and that's an example like the one I gave you earlier about

Dropbox. She used Dropbox because it was much easier than the official patient record software. She also added that it was good because it also worked on their private home computers. So shadow IT is around; it's human behavior that if people have a job to do and there is an easier alternative way of doing it, they will go for it. Shadow AI is no different, except it is far worse than shadow IT, because there are countless AI applications being built every single day. You have AI applications that can summarize your conversations and your key reports for you. They can do sentiment

analysis. There are multiple use cases for AI, and these are being adopted by users rapidly, which again is making its way into the workplace. That obviously boosts productivity, but it is equally risky. When ChatGPT was launched, it took only five days to reach 1 million users; that was reported in an article published on Forbes. Imagine that: no other application on the internet had ever seen organic growth like that, reaching 1 million users in just five days. So what's driving shadow AI? The

first driver is accessibility of tools. You have a bunch of tools that are available for free or at a low subscription cost, requiring no setup or minimal setup. If you think back to shadow IT, if you wanted to use an application you liked, there were barriers your employees had to pass. For example, an application might require admin permission, and not every user gets admin permission. Or the application might require communication on a non-standard port, and those ports are not generally open on the firewall, so they would have to submit a request and provide

justification for why they need the port opened, and that's how it can be caught. But with shadow AI, that's not the case. It's communicating over HTTPS, and that's allowed in the organization. The other driver is AI for everyone. You don't need to be a technical geek to deploy an AI tool; with a little technical knowledge, or even without it, you can still use AI tools. For example, a marketing analyst can take customer purchase records to ChatGPT and get back insights on high-value customers. I saw a video two weeks ago, posted by someone in Alberta, and

he was talking about a ChatGPT bot he created using two years of historical data. Once the bot was trained, all he had to do was ask a question, say, to plan a building of over 60,000 square feet, with 23 units, three stories high, and ChatGPT was able to provide everything: the project schedule, the development plan, everything. He mentioned that it used to take him two to three days, and obviously some money was also involved to pay the experts, but with ChatGPT he was able to do it within two to three hours. The next driver is pressure to innovate. The drive to innovate often leads

employees to bypass IT governance. If I think about an example: there was a presentation yesterday on responsible AI use, where Seth mentioned walking up to a booth to understand how the product is powered by AI, but the sales representative didn't know what the use of AI in that tool was, and the response was, "Everybody is talking about AI, so we are also putting the word AI on our product to sell it. That's about it. But if you need more, you can leave your contact and I can follow up." And I witnessed that firsthand when I was at the Upper Bound conference this year, which happens in Edmonton and is

the AI conference for the region. There were many AI experts and leaders there, sharing their knowledge and their stories on how their organizations are leveraging AI to automate tasks. But during the Q&A sessions, it really highlighted the challenge: people didn't know where to start, but they still wanted to follow the trend. They still wanted to deploy AI and make use of it, regardless of what use case they were trying to solve or what risks they would be exposed to. The next driver is gaps in organizational AI strategy. If you don't have an organizational AI strategy, this is what shadow AI thrives on, because without one, your

employees wouldn't know where to start: what AI tools are approved by the organization, what kind of AI tools they can use. So these are some of the drivers of shadow AI. Now, what's the harm if an employee is just trying to boost productivity and yield better output in a short amount of time? The problem is data leaks. It's the amount of data that's being uploaded to AI platforms; a lot of people are using AI tools. But it's not just the amount of data being uploaded, it's also the number of users uploading the data, because if I

go back to my previous example of ChatGPT: within five days, 1 million users started using it. I don't know what the number is right now, but it's a huge volume as well. And to make it worse, the AI's self-learning capability is reading your data; it's training itself on your prompts. That's the problem here. The next risk is regulatory compliance risk. All these frameworks you have heard about, PCI DSS, SOC 2, HIPAA, GDPR, these are not frameworks that were built for AI. AI wasn't there when these frameworks were written. As a matter of fact, AI sidesteps these frameworks.

So if there is a breach due to shadow AI involving the data of a European citizen, then you could get fined up to 4% of your global revenue. If there is a breach of PCI DSS, it could cause reputational damage, financial damage, and so on. Then there's unauthorized AI influencing business decisions. You know about AI hallucination, and we saw a couple of examples last year. There was a story on CTV News where a British Columbia lawyer cited two cases that were generated by ChatGPT, and when it was found out that it was AI hallucination, he was ordered to pay the

fine to compensate for the time the defense counsel took to review those cases. A similar case happened in Manhattan. [clears throat] And how many of you have heard about the Air Canada chatbot incident? That's another example of AI hallucination. Another risk is security vulnerabilities. You don't know who is uploading the model; you're just downloading the model and running it in your environment. But there is a possibility an attacker might insert a malicious piece of code, and your environment can be compromised. Now, I know that Hugging Face has since introduced ClamAV scanning. I don't know how effective that is, but they have introduced ClamAV scanning, and

they also introduced pickle scanning. But there are other platforms out there; you can also download models from GitHub, and you don't know from where your users are downloading their LLM or machine learning models. And last but not least, operational risk. Think about agentic AI. If you're using agentic AI to automate a high-risk task such as loan approval, one single hallucination could cause severe business impact.
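To make the model-scanning idea concrete, here is a minimal sketch of what pickle scanning looks for. This is not Hugging Face's actual scanner; the module allowlist is an illustrative assumption, and it only handles the classic GLOBAL opcode.

```python
import pickle
import pickletools

# Modules a model file is allowed to reference; anything else is suspicious.
# This allowlist is an illustrative assumption, not a vetted policy.
SAFE_MODULES = {"collections", "numpy", "torch"}

def suspicious_globals(data: bytes):
    """Return GLOBAL references outside the allowlist, without loading the pickle."""
    flagged = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":  # arg looks like 'os system'
            module = arg.split(" ", 1)[0]
            if module.split(".")[0] not in SAFE_MODULES:
                flagged.append(arg)
    # Note: newer pickle protocols use STACK_GLOBAL instead; a real scanner
    # tracks the stack to resolve those references as well.
    return flagged

# A benign pickle of plain data contains no GLOBAL opcodes at all.
benign = pickle.dumps({"weights": [0.1, 0.2]})
# A hand-crafted pickle that would call os.system("id") if it were loaded.
malicious = b"cos\nsystem\n(S'id'\ntR."
print(suspicious_globals(benign), suspicious_globals(malicious))
```

The key point is that disassembling with `pickletools.genops` never executes the payload, whereas `pickle.load` on the malicious bytes would run `os.system`.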

So with shadow AI, what can you do about it? Now, I was thinking about going through the points one by one, but this morning when I woke up and was reviewing the presentation, I thought about making it a little more exciting. So I'll be doing an exercise here, for which I'm going to need three volunteers. Anybody? I need somebody from the front row. So, I already know this guy, because we work at the same company. What do you see in front of you? >> It's a loonie. But what you don't know about it is that it has a misprint

caused by a printing error at the Royal Canadian Mint, and that's why it has a value in the thousands. Yeah. So, what I'm going to do is, I've got four envelopes here. Shuffle them a little bit.

Now, you don't know which envelope has your coin. What I want you to do is take two envelopes and give them to the person on your right, and two to the person on your left.

>> No. Uh, sorry. What's your name? >> Evan. >> Evan. So, does anybody want to take a guess who has the envelope with the coin? Evan or Bob? Just take a guess; there's no penalty. Evan, do you want to check your envelope?

>> I hope you don't have the coin. >> Because I have the script in my mind; I don't want to mess it up. So, it's all part of it. Now, what if I had put up a camera that was recording everything, and you were able to see everything on the screen here? Then you would have probably figured out who has the coin by now. So this is what you need to do first: you need to get visibility into your environment. You need to start identifying what AI tools are being used in your organization. And I won't recommend you buy some fancy tool to figure out

who is using what. You can monitor DNS queries. If you have a secure web gateway like Zscaler, Cloudflare, or Netskope, that can also give you insight: you just have to put a filter on the URL category and ask for all the traffic that went to GenAI applications. Or you could monitor API calls, or use SaaS security posture management; if you have Defender for Cloud Apps, that can also give you visibility into what kind of AI tools are being used. Now, after you have identified which AI applications are

being used. Coming back to the real-world example, we know that Evan and Bob are the only two people who could have the coin. Similarly, with shadow AI, you need to start building an AI inventory. The reason it's important is that it helps you identify the risks your organization could be exposed to. For example, if you are a software company and you see a lot of traffic going to Claude, then the likelihood is that your developers are using Claude to help them write code, and there is a possibility they might be uploading proprietary code

there. So Bob, since you have the coin here, I would like to move on to the next step: give the coin to any person on your right-hand side, but that person should not be wearing glasses.
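The visibility step described a moment ago, filtering DNS queries or gateway logs for GenAI destinations, can be sketched in a few lines. The domain list and the log format here are illustrative assumptions; a real secure web gateway's "Generative AI" URL category replaces the hand-rolled list.

```python
from collections import Counter

# Known GenAI destinations -- an illustrative, hand-maintained list.
GENAI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "gemini.google.com", "huggingface.co",
}

def flag_genai_queries(log_lines):
    """Count GenAI lookups per (client, domain).

    Each line is assumed to look like '<timestamp> <client_ip> <domain>';
    adapt the parsing to your resolver's actual export format.
    """
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _, client_ip, domain = parts
        # match the exact domain or any subdomain of a known GenAI service
        if any(domain == d or domain.endswith("." + d) for d in GENAI_DOMAINS):
            hits[(client_ip, domain)] += 1
    return hits

log = [
    "2025-10-01T09:00:01 10.0.0.5 chat.openai.com",
    "2025-10-01T09:00:02 10.0.0.5 example.com",
    "2025-10-01T09:01:10 10.0.0.9 claude.ai",
]
print(flag_genai_queries(log))
```

The per-client counts are exactly what feeds the AI inventory: which teams are touching which tools, and how often.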

Thank you. Um, can I get your name, please? >> Nicholas. >> So Nicholas has the coin, but does anybody know what I did here? I gave an instruction to Bob that he can give the coin to anybody, but that person should not be wearing glasses. So with shadow AI, you need to define your AI usage policy. You need to define, if employees want to use AI tools, what kind of AI tools they can use, what kind of sensitive information they can upload, and the list of approved AI tools. And if you don't happen to have

one, you can use ChatGPT or Copilot to build a template for you, or there is a website that has a GenAI usage policy generator. It asks you a couple of basic questions. For example: where is your organization in its AI implementation journey? How strict would you like your policy to be? What use cases do you want to support, such as code generation and review, customer support, content creation, or data analysis? What kind of tone would you like to set in your policy? And there is one more question: do you have any of the following resources, such as a formal process for AI application intake, a prompt library, or an approved application list? And that's

about it. It would generate a template which you can use to start building your AI usage policy.
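As a rough illustration of what such a generator produces, here is a minimal sketch that turns a few intake answers into a starting template. The section names and ground rules are assumptions modeled on the questions described above, not the actual tool's output.

```python
def policy_template(org, approved_tools, use_cases, tone="pragmatic"):
    """Render a starting draft of a GenAI usage policy as Markdown."""
    lines = [f"# {org} Generative AI Usage Policy ({tone} draft)", "", "## Approved tools"]
    lines += [f"- {tool}" for tool in approved_tools]
    lines += ["", "## Supported use cases"]
    lines += [f"- {uc}" for uc in use_cases]
    lines += [
        "",
        "## Ground rules",
        "- Do not paste customer data, credentials, or source code into unapproved tools.",
        "- Route new AI tool requests through the application intake process.",
    ]
    return "\n".join(lines)

draft = policy_template("Acme", ["Copilot"], ["code generation and review", "content creation"])
print(draft)
```

Even a skeleton like this gives employees the two things shadow AI feeds on when they're missing: a named list of approved tools and a clear intake path.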

All right. So, Nicholas, I'm going to need my coin back. Okay, before you hand the coin back to me, I want to tell you the truth. I just used a social engineering technique to trick those guys into believing, and not just those guys, but the rest of you as well. The coin doesn't have a misprint; its value is not in the thousands. It's just a dollar. Do you still want to keep the coin? >> Okay. And before you give it back to me, I want to tell you how important this coin is for me. This is the only coin that I've got in my wallet, which I use to

unlock the grocery cart. I don't keep cash. And if you don't return it to me, then what would happen is that I might have to lift heavy hand carts, and it might sprain my arm muscles. So I give you two options. Either I walk up there and you throw the coin at me; the chances are that I might not be able to catch it, it drops on the floor, rolls under the table, and you know things do happen. Mysterious powers come into play when small objects go under the sofa; they are nowhere to be found. So that could be a possibility here. Or you can

keep the coin in the envelope for me and give it back to me safely. >> I guess I'll put... >> Thank you. So, does anybody want to take a guess what security control that translates to?

>> Close, closer. All right. So, it's security education and awareness training. I told Nicholas how important this coin is for me, and you can do the same in your organization. You have to let the employees know; don't just run the simple training of "you should not be clicking on anything you don't recognize." Let your employees know about the impact: what the importance of this data is for the organization, and what would happen if your organization is breached and your data is exposed.

So that equates to security education and awareness training. And believe me, when you let your employees know about the importance of the data and how they can safely handle it, just like Nicholas did for me when he safely put it in the envelope and returned it, most of your employees will follow that. The next step: I know this coin is very important to me, so I can put an imaginary cape around it and put it safely back in my pocket so that I don't lose it. And this equates to a cloud-native application protection

platform (CNAPP), if you already happen to have one. If you are deploying an AI model in your environment, use Defender, or if you're using the security posture management offered by AWS, look at the recommendations they provide and start working on securing it. A simple example would be an S3 bucket: if it's exposed on the internet, make sure that it isn't. If there is no role-based access control, then you need to enforce role-based access control. Now, the last thing that you can do: I've got four envelopes here. It would be

easier for me to identify which envelope has the coin if I mark it. And that translates to data classification. You need to start classifying your data and use data security posture management, because that helps you identify what the sensitive information is and what you need to focus on.
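The data classification step can be sketched as a simple rule-based labeler. The patterns and label names below are illustrative assumptions; a real DSPM product uses far richer detection than a few regexes.

```python
import re

# Ordered from most to least sensitive; the first matching rule wins.
RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),        # SSN-like
    ("confidential", re.compile(r"\b\d{4}(?:[ -]\d{4}){3}\b")),  # card-like
    ("internal", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),        # email address
]

def classify(text):
    """Label text by the most sensitive pattern it contains."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "public"

print(classify("Card on file: 4111 1111 1111 1111"))
```

Marking the data this way is exactly the "mark the envelope" move: once records carry labels, you know which uploads to a GenAI tool actually matter.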

So I'd like to categorize AI security risk into two categories. One is AI infrastructure risk, which I just talked about: you need to make sure that the infrastructure hosting AI workloads is protected. If you don't have a SaaS security posture management tool, refer to the vendor documentation and follow their security best practices. I won't focus too much on AI infrastructure risk, because those risks are already known. I would like to focus more on native AI risks such as prompt injection, system prompt leakage, topic abuse, or toxic output. And if you're the security architect or security person who has been tasked with securing the AI system, you need to start

looking into AI security platforms. Now I must say, I've looked into a couple of vendors who provide AI security platforms, but they are still in their infancy. There is not one single solution that can protect you against all AI security risks. But these are the things you need to evaluate, depending on what kind of AI risk is most applicable to your organization and your data. So some of the things you can look for: whether the AI security platform can enforce an organization-wide acceptable use policy. Say your organization doesn't want your employees to use ChatGPT, and they have

Copilot as the approved tool. What you can do, using Microsoft Defender for Cloud Apps in conjunction with Defender for Endpoint, is set a policy that says anybody who visits ChatGPT or any other GenAI application gets a prompt saying that Copilot is the approved AI tool. So when you're evaluating an AI security platform, look at your acceptable use policy and see whether that tool can help you enforce it. The second thing you need to evaluate is the guardrails. Now, if you host your model in Azure, then Azure has Azure AI Content Safety, which provides guardrails that you can

implement to prevent prompt injection and the other AI-native risks. Or if you have a GenAI application deployed in Databricks, then you can use Mosaic (sorry, I forget the full name, but it's called Mosaic). On top of that, the LLM models also provide some guards. For example, if you're using a Llama model, then it has Llama Guard, which you can implement to protect against prompt injection. But depending on your security requirements, it's possible that Azure AI Content Safety or Mosaic guardrails may not meet your needs. So when you're evaluating your AI security platform, check it against your

requirements, for example whether it can enforce topic moderation. Say you don't want your users asking questions about financial performance or PII data: is there a way to impose topic moderation in the AI security platform? The third thing you might need to consider is whether the AI security platform can enforce vector database security. This feature is still in the early phase as far as I know; you cannot really enforce authorization at the vector database level yet. I know that Pinecone allows you to enforce authorization, but it's not very mature right now. So

there are two approaches offered by AI security platform vendors. One is that you enrich your embeddings with metadata. When the application is pulling the data, it looks at the metadata, at the data owner information, and based on that it decides whether the user is allowed to retrieve that content or not. The other approach is a RAG security gateway. I was just reading about it; there is one vendor who offers a security gateway, I think it was AWS Bedrock Guardrails. I've got it in my next slide. So the RAG security gateway sits between

your LLM model and the front-end interface exposed to the user, and whenever a user submits a prompt, it checks whether the user is authorized to access the content or not. You also need to consider whether the AI security platform can perform model scanning, and whether it has AI usage reporting that you can present to your executives, with numbers on how many AI tools are being used and what kind of data is being uploaded. These are some of the vendors I've been able to find that offer AI security platform capabilities. Now, I would like to reiterate that none of

these vendors offers a single solution that has all the capabilities. So it totally depends on your organization's needs, on what you are trying to prevent, on the use case. I'd like to wrap up my presentation here by saying that shadow AI is already happening. Start with visibility: see what applications are being used, because that helps you understand what kind of security tools you need and what level of tone you are going to set in your acceptable use policy. If you don't have the budget for securing AI at this moment, you can leverage the existing tools and policies

you already have in place, and build controls that support safe innovation. And I'm a big fan of this meme here: you don't want to be a roadblock in the path of innovation. If you get a request from the business about AI tools they want to use, understand the use case, and if you happen to find a risk, explain the risk and see what you can offer to protect against it. And align your shadow AI response, which I just mentioned, with the enterprise risk posture. You don't have to be too strict, saying hey, you cannot upload your PII data at

all. First understand the risk involved and how your organization can be impacted. Based on that, you make a recommendation: whether it can be mitigated by implementing a security control, or you just don't feel comfortable enough with the usage of that particular AI application. So, I'm going to open up the floor for questions and answers, if you have any,

and if not, then I'm around for the rest of the event, and here is my LinkedIn profile if you would like to connect with me on LinkedIn. Thank you. [applause]