Bring Your Own Breach? Managing BYOAI Risk in the Cloud - Meletius Igbokwe

BSides Bournemouth · 14:52 · 20 views · Published 2025-09 · Watch on YouTube ↗
About this talk
🎀 Talk Title: Bring Your Own Breach? Managing BYOAI Risk in the Cloud

👤 Speaker: Meletius Igbokwe

📝 Abstract: As generative AI tools become more accessible, employees and developers are increasingly bringing their own models, plugins, and AI assistants into the workplace, often without security approval. This rising trend, known as Bring Your Own AI (BYOAI), poses a rapidly growing threat to cloud environments. From unauthorized model access to data leakage, prompt injection, and compliance violations, BYOAI is introducing a new layer of shadow risk that traditional security controls are not prepared to handle.

In this session, we will unpack the unique security, privacy, and governance challenges posed by BYOAI in cloud-based systems. Attendees will learn how well-meaning users can unintentionally expose critical assets, how malicious actors can abuse home-grown or third-party AI tools, and why cloud-native environments are especially vulnerable. More importantly, we will explore proactive strategies for identifying rogue AI usage, implementing Zero Trust controls, and building a governance framework that allows innovation without sacrificing security. If your organisation is embracing AI, or suspects your workforce already is, this session will help you stay in control before creativity becomes a breach.

⚓ This talk was recorded live at BSides Bournemouth 2025 on 16th August 2025, a community-driven cybersecurity conference bringing together researchers, practitioners, and enthusiasts to share knowledge, skills, and ideas.

🌐 Learn more: https://bsides-bournemouth.org/

💼 Connect with us: https://www.linkedin.com/company/bsid...

📺 Stay tuned for more talks from the event, and don't forget to subscribe for updates!
Transcript [en]

Can you guys hear me? Good morning, everyone. My name is Meletius, and I'll be speaking on Bring Your Own Breach: managing bring-your-own artificial intelligence risk in the cloud. Before I proceed, I'd like to find out how many of us are using AI, either for personal use or within our workplace; I use it myself as well. All right. And how many of us know the type of AI we are using? Can anyone give me any names at all which they use in their place of work? Okay. >> Which one? >> OpenAI. >> OpenAI. Anyone else? Any other person? Okay. All right. Thank you so much.

So before I go on, I would like to explain what BYOAI means. It simply means bring your own artificial intelligence, just as we traditionally have bring your own device when it comes to managing our devices within our enterprise environment. What this talk is about is how well-meaning employees use artificial intelligence to boost their productivity and streamline processes within the workplace, without knowing that there are so many risks involved in using these tools without the right compliance, risk assessments and governance, putting their organisation at risk. So I'll be talking about how we

can manage these tools to get our work done, boost productivity, ensure compliance and safeguard our data from being compromised.

Before I proceed, I would like to talk about the popular AI tools we use today. For the past two years or so, almost everyone has been developing AI tools, and the ones we are most aware of are these: ChatGPT, which everyone knows and which I use myself; Copilot, which Microsoft developed and integrated with Microsoft 365 to boost productivity in the workplace; and Claude, for the software developer guys who are here. Claude is mostly used for code reviews, technical evaluations, or anything that has to do with technical requirements, so Claude is your guy. Then we have Grok. I don't know if we are all aware of Grok, but if you use X, formerly known as Twitter, you should know it, because it integrates so deeply with the platform that it analyses people's data within Twitter, or X as you might want to call it now. So Grok helps to analyse and pull information from within the platform, and

what I want to say about these tools is that, as much as they help us streamline our work and make us more productive in our environment, there are risks and concerns in using them, especially if we are integrating them into our technology stacks. We might want to use them to evaluate our documents, to check code we are going to run, to help develop our systems, or for anything else workplace-related. There is governance that a lot of organisations are not really following, which is very important, and which leaves a lot of organisations open to being compromised. This is where it is really important that before we use these tools there is some user education and awareness about how they should be used. We also need to make sure we are covering data access, retention policies and integration scope: how are we integrating all of this with our environment and our technology stacks? Now, another thing I want to speak about is ChatGPT, because a lot of us use ChatGPT, but what most of us don't understand is that

the free version of ChatGPT uses the information we put in to improve the model, and this is where it gets a little bit scary. Imagine you are using ChatGPT to, say, improve a business process or a vendor requirement, and you are taking everything about your organisation and putting it in, on the free version. You are just helping them improve their models by giving them your business ideas to use as a knowledge base. They do say there are some differences with the paid version, but then we also need to be very careful of what we give,

even if they say the paid version is different. This is where we need to be very, very careful and know the type of information we are putting in. The key concern about all of this is that a recent study shows around 75% adoption of AI tools across enterprises, small and medium businesses and individuals, but only 30% have formal policies on how to use them: policies to make sure data is not compromised, that they are not being fined by the ICO, and that they are following all the security

standards required for the business to work as expected. So on this slide I'll be talking about real breach cases, starting with the recent one almost all of us are aware of: the DeepSeek incident that happened back in 2024. The problem wasn't the tool itself; it was how the data was being collected. Because the tool was good, and a little bit cheaper compared with the other AI tools around the western region, a lot of people thought it was

good, and a lot of organisations integrated it into their work processes, connecting the APIs to all their systems without appropriate governance. The problem was that, after some time, security researchers noticed that the data being fed into the tool was actually being used to understand how organisations build their environments, their ideas and their technology, which was a huge concern. This is where appropriate compliance, risk assessments and access control come into play, because if all of these had been in place, it could easily have been avoided. Another one is about

the incident that happened at Samsung, where it was noticed that some Samsung employees were uploading meeting recordings and some of the code used to develop their systems. I think this happened using ChatGPT, and it was very concerning, because for them it was about boosting productivity; they wanted to help the business get ahead. What they didn't understand is the risk of putting the business in so much trouble that it could be fined, and of causing customers to lose

trust. This is where we need to make sure users are trained and made aware of the type of data they feed in, because I know everyone wants to be seen as working hard, improving business processes, maybe automating the whole thing; but there are a few things that get missed and that can put everyone in trouble. This is why Amazon, when it noticed that some of its employees were also using AI assistant tools to write up meetings and

financial reports, even in banking, had to put policies in place to stop that from happening. This is what we are talking about, and this is why it is important that everyone makes sure these tools are compliant and meet all the security standards required for the industry before integrating them into the workplace. So, while using these tools, there are important things we need to ensure they meet before we proceed to adopt them. Now, the big challenge here is that at the moment there is no standardised

compliance or security standard for AI, I would say, because I don't think there is currently any standardised security standard that could check the type of model these AI tools are using, their prompt injection vulnerabilities, or where the data is being stored and processed. This is why we need to ensure that the current standards which are actually in force are utilised, so that we are safeguarded from being compromised, for our data and for our users as well. And this is where, for financial

institutions, it is very important that you check these tools to ensure they are at least certified against SOC 2 or ISO 27001. You also need to understand where the data is being processed. You need to ask questions: where is it processed? If I request that my data be deleted, can that happen upon my request? You also need to understand what type of models they are using to analyse this data. And for the health sector, this is where HIPAA needs to come into play as

well, to ensure that all the security frameworks that have been laid down are being met. With GDPR, you need to know what security framework, governing which country, they are using to analyse people's data. All of these things need to come into place, and this is where we need to understand our data handling: how is our data handled, what is the processing location, where is our data stored, and what is the retention, how long is the data kept once we make requests, put in our prompts and feed the AI our information? Because most of them use

what we give them to train their models. So we need to ensure all of these things are covered before we onboard them into our environment. Another thing that could be a concern is geography: the locations and the legal jurisdictions as well. And lastly, the most important thing again is ensuring that these tools have good access controls that are auditable, and that there is single sign-on or multi-factor authentication we can use to make sure we are actually protected. Now, I'm going to be a

little bit brief here about implementing best practices. First of all, we need to carry out a comprehensive risk assessment of the tools we bring in. There are some good tools, like Microsoft Defender for Cloud Apps, that can give you an overview of the security posture of any tool you want to integrate within your environment; that's the example I'm going to use. So you can confirm whether a tool meets the cybersecurity requirements and standards, and also check its compliance scores and so on. All of this will help you to manage these tools effectively and also control users' access when using them.
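The due-diligence questions from the talk (SOC 2 / ISO 27001 certification, data residency, deletion on request, whether prompts train the vendor's model, SSO/MFA) can be sketched as a simple checklist script. This is a minimal illustration only, assuming a hypothetical tool record; the field names, example tool and approved regions are not any real vendor's API.

```python
# Hypothetical sketch of an AI-tool onboarding checklist; all field names
# and thresholds are illustrative assumptions, not a real product's schema.

REQUIRED_CERTS = {"SOC 2", "ISO 27001"}
APPROVED_REGIONS = {"UK", "EU"}

def assess_ai_tool(tool):
    """Return a list of governance gaps for a candidate AI tool record."""
    gaps = []
    if not REQUIRED_CERTS & set(tool.get("certifications", [])):
        gaps.append("no SOC 2 / ISO 27001 certification")
    if tool.get("trains_on_customer_data", True):
        gaps.append("prompts may be used to train the vendor's models")
    if not tool.get("supports_deletion_requests", False):
        gaps.append("no mechanism to delete data on request")
    if tool.get("data_region") not in APPROVED_REGIONS:
        gaps.append("data processed outside approved region: "
                    + str(tool.get("data_region")))
    if not tool.get("sso_mfa", False):
        gaps.append("no SSO / MFA support")
    return gaps

# Hypothetical candidate tool: passes every check except data residency.
example = {
    "name": "ExampleAssistant",
    "certifications": ["SOC 2"],
    "trains_on_customer_data": False,
    "supports_deletion_requests": True,
    "data_region": "US",
    "sso_mfa": True,
}
print(assess_ai_tool(example))  # ['data processed outside approved region: US']
```

A real assessment would of course go deeper (contracts, DPAs, pen-test reports), but encoding the checks keeps them repeatable for every new tool a team wants to onboard.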

So lastly, I'm going to conclude: with all of these solutions in place, we make sure we have visibility of the tools brought into the environment; we have good policies; and we have user education, not just by word of mouth but with an actual demo, using sandboxes to show people how they can use AI effectively in the environment. We also use good controls such as just-in-time access and just-enough-access, to ensure users' access is revoked exactly at the time you have set, while still making sure people have enough access to do what they need to do. Thank you.
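The "visibility" point above, spotting unsanctioned AI tools before they become shadow risk, could be sketched roughly as a scan of outbound traffic logs. The AI domain list, the sanctioned list and the log format below are illustrative assumptions for the example, not the output of any real discovery product such as Defender for Cloud Apps.

```python
# Illustrative sketch only: flagging unsanctioned ("shadow") AI usage from
# simple web-proxy logs. Domain lists and the "<user> <domain>" log format
# are assumptions made for this example.

AI_DOMAINS = {"chat.openai.com", "claude.ai", "grok.com"}
SANCTIONED = {"copilot.microsoft.com"}  # tools that passed risk assessment

def find_shadow_ai(log_lines):
    """Return the set of AI domains seen in the logs but not sanctioned."""
    hits = set()
    for line in log_lines:
        _user, domain = line.split()  # assumed format: "<user> <domain>"
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            hits.add(domain)
    return hits

logs = [
    "alice copilot.microsoft.com",   # sanctioned, ignored
    "bob chat.openai.com",           # shadow AI, flagged
    "carol claude.ai",               # shadow AI, flagged
]
print(sorted(find_shadow_ai(logs)))  # ['chat.openai.com', 'claude.ai']
```

In practice this discovery step is what products like Defender for Cloud Apps automate at scale; the sketch just shows the shape of the idea: compare observed AI usage against an approved list and surface the difference.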

>> Any question?

All right, cool. Thank you very much. >> Thank you so much. >> All right, cool. >> Thank you. >> Thank you very much for coming. >> Thanks.