
All right. Hello everybody. Welcome back.
So we will make the next step now that uh Aner Aniler Bhan will join us and he will present AI security challenges and opportunities for innovation. So looks interesting. So let's wait for him and uh be curious about how AI is affecting uh innovation with respect to cyber security or security. So we'll see what happen.
>> Thank you.
>> I'm ready when you are. >> Perfect. Let's start. Thank you. H all right so good morning audience I'm calling you from uh a remote place in India right now it's called Orurangabad and I'm I'm going to be talking about AI security can you guys hear me well anybody give me a >> yeah thumbs up >> all right cool awesome so oh thank you thank you for the thumbs up uh there's a small uh cyber security company called mini orange. I founded that about 14 years ago and uh it deals with a lot of uh identity and access issues around the world and we also founded another uh uh conference called identity shield. So that's a brief
background about me. Let's dive into the subject which is AI security. Now um yeah AI being very the popular buzzword uh these days the how it affects us um AI is reshaping search uh you would all agree with that we ask a lot of questions on chat GBT these days and uh um it's a I think a lot of u um earlier on a few years ago until last year I U majority of the uh search was done by Google and now AI has taken all the different chats strategy and Gemini they have taken over a major portion of that and uh that is uh something that that has an effect on AI security. I'll
explain in a bit. Um and every company in the world that I see and I also in my own company um we are trying to automate all kinds of workflows uh the major workflow that's in question is uh how to write code better how to write code faster how to generate code using agents and all of that and but even in other departments like sales HR a lot of the workflows are being uh automated now what's happening is a lot of uh LLMs in the world they are uh being written for a very generic use case. And then um things like uh there are other tools which are are written as rappers and claude just came up with cloud code
which seems to be a really good uh agent for writing code but there are others as well. Um so those are happening. So I don't want to give you too much detail about the the AI landscape but I know these are the terms that you see uh outside LLM is the brain um which has orchestrating complex workflows but there are other two terms which are be equally popular which is MCP and uh A2A which is agent to agent and uh a lot of people realize that as soon as uh a different version of the LLM comes in uh a lot of the context goes away so So the concept of MCP came in and a lot of
context for the business that that wants to not start from scratch every time a new version of LM comes in. Uh they started using an MCP to store all their context and uh uh obviously LLMs are explaining a wider problem uh exp are are able to able to take care of a wider problem but uh we don't care about wider problems. We we want to we want our current problem to be executed much faster and that's where agent agent uh agent tick AI has come into being. Um rest of it I'll explain as we go along. Um but every now and then what do we hear uh when we talk about uh AI? What kind of attacks are we hearing? So
there's some MCP vulnerability. that's um hackers execute code uh remotely and uh that has come up. I recently heard there was a dating application in New York which was written using uh AI and uh which which was getting very very popular with uh some of the females there and suddenly but it has no it had no um uh security built in and uh since it did not have security built in a lot of the personal information that their that the consumers of that app shared with the app it got leaked major major issue um some AI powered tool coding tool wiped out a software company's database um Google has accepted these kind of mistakes that you see um
and every now and then you know these kind of prompt injection attacks they they're getting very very popular. I'm assuming you guys are already familiar with some of the uh some of these terms. So I'm not going into too much detail here. But I just want to tell you that what AI has done is it has increased the attack surface quite a bit. Um there is an LLM that we have to protect. There is agents that we write and we are trying to automate workflows. There is MCP servers where we store context of our business and we hope that they are um they are safe. There are agents like cursor like cloud code uh that we use to
write uh code and we expect it to write safe and secure code but is that all is all of that happening? So um when we talk about AI security um it's not one problem. These are the four that's the crux of this talk. What all we mean by AI security. So what the first thing is security from AI. Like I was saying um AI has accelerated attacks quite a bit. there are too many defects uh that have that are being created using AI or there are a lot of fishing attacks hyperpersonalized fishing which is happening and it is automated and it's all possible because AI tools are being used not just by the good guys but also
by hackers. Uh any recon or exploitation or fraud has been accelerated. So that's what I mean when I say security from AI. You have to defend yourself more because there are too many attacks going on. Then security for AI uh protecting the AI system itself. Um obviously we are uh uh all the elements of the world they are they are being uh uh created with the right mindset but uh you know hackers find out find out find out a way to induce prompts which are not uh what they were supposed to be used for and uh a lot of misuse is happening uh through that. So that is security for AI. Then there is a question about security of AI
which is you know it hallucinates sometimes we have all heard that term hallucination uh there is uh no input validation no output validation sometimes there's bias uh the model can be trained in one part of the world but is used in another part of the world. They could be bias, there could be policy. Sometimes kids start uh asking questions and sometimes some bad actors start asking questions and we have to safeguard the AI against such things. We cannot answer every I mean it's a very powerful tool that we are making available to uh um our kids and uh we hope that it's asking the right questions. I was in a open air summit yesterday in uh New Delhi in India,
largest AI summit in the world and on one of the one of the uh uh vendors they were demonstrating how he can protect uh kids from uh uh like if the kids start asking wrong questions if probably say self harm then how it can prevent that and I was furious you can just deny it and they were like they saying no if you deny it then it won't be right they will try to find out the information some other way and I I was also saying that they can just uh let the parents know and they were saying no we can't do that it's not uh fair to the kids that way and they were trying to figure out how to make
responsible AI work so that is a term that you will you'll be hearing uh very often responsible AI so that is the third type the fourth type is security using AI now if uh uh we have this huge huge superb brain at our disposal. Why can't we do triages uh quickly? Uh why can't we correlate intel very quickly and alert the right people? So in my company and in hundreds of other companies more and more AI is being used for example for animal detection uh and uh be little stuff. So I'll talk about that uh in a bit. But majorly what I want to tell you is that when when when we talk about AI security these are the
four areas that I can think of uh currently which is security from AI because the attack surface is more security for AI because uh AI needs to be protected from the bad actors security off AI don't let it hallucinate and security using AI which is uh uh enhancing our security capabilities uh using the AI tools. All right. So, so let let's get into a little more detail about security from AI. Um, a non-expert can generate a a very convincing uh call script and I I I just heard a so there are so many voice agents. I'm sure uh you've uh seen some voice agents which can speak in your dialect even and when you I spoke to one
AI agent and I come from a place in uh India where this particular language is important because it's called Bangla and I could uh just ask that AI voice agent to speak in my language and it did it flipped in no time. It made me very comfortable and uh it's very um I mean so that can that thing can be used in a in a very uh positive manner but it can also be used by hackers uh because now uh see I mean now there's so much of the world is accessible uh if you're talking the the local language with with local person and there are so many scams that are happening in India. Um one of the
major scams that are happening in India is digital arrest. You'll not believe some people pretending to be uh police and uh central bureau of investigation and arresting people on video calls and uh somehow people are giving into that even though there's a lot of marketing going on from the government saying it's not uh there's nothing of this sort that can happen but people give in and pay a lot of money before they realized that uh that this is fake and uh obviously we have all heard of uh uh fake videos and audios. Uh nowadays it's so easy to create uh uh those videos and with any anybody and uh some of them are vulgar in nature. Some of them are used to uh
exploit uh the young men and women. Um then AI helps find vulnerabilities in code and app. And uh just like I was in Nikl's uh session a while ago, he was saying that he was able to find zero days and all of that. But guess what? With the help of AI, a hacker can find all those vulnerabilities much faster. And a vulnerability found in one part of the world uh let's say it belongs to a particular library which is popular. He can then figure out what all applications around the world uh is using it uh and just go try to exploit uh uh it some some place else. Um a lot of this is available a lot of
our information is available online. So um and we we just don't fail to post anything do we? I'm going to post about this this particular besides uh Gottenberg that I I spoken. So guess what all the hackers are also listening into that and they can put two and two together and get all lot of details. um some whenever I travel I'll tell you an interesting story when I travel to US my staff in India starts getting messages from a WhatsApp uh in which has my display pick and they say you know I'm busy pretending to be me they say I'm busy can you can you I'm with a client and I want some Amazon uh cards to be
purchased right away and guess what the staff thinks oh my my god this is my moment to shine with the CEO So I'll uh I'll buy it. And uh after he buys a couple then he realizes that oh my god uh looks like this uh request is not coming to an end. May maybe it is not the CEO and then he realizes that a lot that he has been uh scammed. These are all real life incidents and a lot of them use uh AI. Um so social engineering at scale that is exactly what is happening. So what are the solutions for this? Uh solutions for this uh this slide is a little upside down. I would want to
begin with uh the first one which is training and awareness. You guys made an excellent choice coming to coming and attending this uh online uh conference today. Um fantastic stuff. you will know what all is happening in the world and then you can at least save yourself if something is uh too good to be true or something is being done to you and you can go train or educate five of your family members or neighbors uh yourself. So training and awareness huge huge uh solution. I'll also like to point out that a lot of the applications that we use online, they already have spent millions of dollars into building cyber security features and what you can do is
uh just use those features and twostep of verification or or multiffactor authentication or two-factor authentication it's called and it's it's just an option that you want to go and search for where where it is in the online app that you use and click it and enable it and just use it. that will make you insanely more immensely more uh safer than without it. And uh there are there are other things that are listed here as well. Um we'll jump on to security for AI. uh lot of uh the a lot of companies are are in the race to create the best and the best AI models and we all know OpenAI came up with chat GBT and soon sooner uh
soon uh Gemini came out from Google and now Microsoft is Microsoft and Facebook they they're kind of pushing not Microsoft sorry Facebook is pushing uh uh its own try trying to push its own models so they are in in a rush to uh kind of push their own models and guess what when the models comes comes out it can do a lot of things uh but there's not a lot of things that they have done for security yet and prompt injection is one thing one such thing which we're going to talk about um in a bit um so u we have heard of uh agentic AI a lot of agents are being written I recently heard there's a uh
there's a social media platform guess what the social media platform for agents where in one day 1 million agents uh joined and uh they chit chatted around very very different topics. So I I wonder that in a few years they may be billions of agents who will be available and uh what are their what are they calling themselves? what are their identities and uh they're designed for something and uh if they befriend another agent on our social media platform like it's happening with our children then what stops them from doing something that they were not designed to like disclosing some sensitive information. Um right now the focus is completely on making use of AI to gain
productivity but are we doing it in a secure way? Not at all. U so common issues and and stuff that is happening when agents are being designed they are given uh a too much access. They are given system level access because the focus is to get productivity. We are trying to figure out whether we are productive or not. A lot of agents that are being written the results are not that great right now. uh there are some uh u algorithms that are being discussed which is called uh RLM um uh automatic learning and u and they're trying to use agentic AI with uh with u reinforcement learning models and uh trying to improve the efficiency of
agents but we are not quite there yet. So again like I said the focus is not on securing AI right now. The focus is on making use of AI to gain productivity to do stuff fast to cut cost. Uh that's why you hear all these things and in order to do stuff like this what agent developers are doing they're just uh uh not worried about how much access they have given to the agent and because of that uh a lot of bad stuff is happening. Um so how do how do you secure such agentic AI systems? Not any different from uh how you would use how would you you would secure non-agentic AI systems which is human
systems. We all start with authentication and authorization and uh I'll go over all of these in a bit but I think uh later slides um in the authentication and authorization world there are uh some uh protocols that are being written. So in general the authentication protocols for uh and single sign on protocols for humans SAML oath um all of these uh protocols had been written many many years ago and identity providers like mini orange and other other providers have implemented this uh these protocols. Now similar protocols are being written for um agents as well and we are supposed to uh not just give them complete access but use these protocols and have limited uh use uh basically
have the agents authenticate to an identity provider just like a human would and get a token which are also short-lived which tokens expire just like human tokens expire and give them only those permissions that are required to do bare minimum work. The problem doesn't end there. You have taken care of agent authentication authorization but uh the agents have uh u access to MCP servers and uh different servers and those servers have intern access to other applications which you are actually you're you're writing the agent to do some accomplish a goal. For example, I want I get a thousand emails and a day and then what I want to do is I want to quickly figure out which ones
are the things that require uh quick turnaround time. I want to focus on them. I'm right now integrating an email agent with my email so that uh the 10 that need my attention jump out quickly so I don't miss a deadline and uh so that for that I need to give him that access the agent that access and uh so the agent has access to my entire Gmail right now there's no fine grain access because that's not been built in so that's just one example uh you may in order to automate more workflows. Maybe you're in some automat um uh accounting department uh where uh you're trying to automate some stuff some uh regulatory compliances that has to happen every
month. So you have to give access to a lot of your data and uh the other problem is where where all do you install your LN huge huge uh uh question going on in the world currently. I'm a big fan of uh uh onremise LLMs. Um and there are some LLMs like Llama and uh some others which are available um to be downloaded but the accuracy is in question there. Um but I'm installing something on my office premises and writing agents which are in my promises. at least that the data that the workflow that I'm trying to work on any of the data doesn't go outside but rest of the world I don't think we are when I'm a
when I'm a consumer I'm not doing that I I use chat GBT and I ask a question I use Gemini to ask health questions where is the data going there l belongs somewhere else it's a major uh question that uh it'll come up again and again I just want you to know that uh onremise offline LLMs are available uh it's a little more cost but and it's a little bit more uh work to keep updating them but it's available and you should be thinking about that if you are worried about the privacy of your data this is just another uh I was telling about the uh authentication authorization protocols that are being written uh ID JAG is what it's called
you should look I mean if you are a developer uh you can look at look into that it's one of the I mean I one of the titles was entrepreneurship in uh titles of my my this this session was entrepreneurship. So that's one area. It's a very new area. You can start writing uh implementations of id it'll come in very handy a useful thing. There are various uh various things you can do uh verification and integrity to make sure that uh all the data that comes in uh uh is trusted and you only get access to you only give access to what uh people need. All these code signing and all these things are very uh the very old
concepts. I'm not going to go into the details of that. Network isolation is again another thing. There's one picture that I want you to look at uh which is u this one. Don't try to create a super agent. No mega III agent because then you'll have to give it access to everything. Do a little more work and create an agent like this one. What are you doing? You're doing sandboxing meaning you're creating multiple smaller agents and you're giving access to each uh very limited access to each. That will be a little more work initially if you are into agent development but it'll be a much much more secure way to automate uh any agent workflows.
All right, there are claude has given something bad happens. How do you figure out uh Claude has given an API to figure out uh any historical stuff any of your employees may have put into uh Claude and it can help you prove uh something bad that has happened in the past. Make use of that. It's a very good piece of information and audit logging and monitoring is just exactly used for that. Um I'm running a little short on time so I'm trying to see if uh okay security of AI um has hallucinations is a major major problem. uh I'll give I don't know whether most of the audience is in software development background or not
but I have a software development background and in my company I was trying to uh implement uh claude and making sure that um the productivity I have about 500 engineers who are writing code and uh they I wanted to make them act like 5,000 and I thought AI is going to help me do that but guess what a lot of uh we are finding out that uh sometimes it has a limited capacity to hold uh uh stuff the old code that you have written. It depends on the number of tokens and the token support size support is is growing with every version but it's not quite enough. So my old code which has about say a million lines of code if I don't
make the uh LLM understand all of that then it can't write a small uh feature for me because in if it does if if it can't hold everything in memory then it just starts giving me some hallucinated code wrong answers. So we are still a little far from uh and it's a a little far both in terms of technology and also in terms of uh the cost involved when it comes out you know that feature comes out I'm assuming it'll be very expensive to use because it'll be a very powerful thing but for newer applications fantastic no hallucinations quick good answers it still needs good guard rails my senior engineers have been able to improve
their productivity but my junior engineers uh they are still a little shaky there. Um, how to uh make sure there's no hallucination and all of that. You can read about it offline, but you just I just told you there's a this is a topic of interest. Um, security using AI is a very interesting topic to me. Uh, me being an identity provider, I'm implementing identity protocols. Uh, I'm implementing anomaly detection using AI. Uh I'm uh also figuring out that if some agent goes rogue uh the best way to figure out uh that rogue agent would be to analyze its behavior and I'm trying to think in that direction where I can write a write a
product uh which can see if I have sandboxed some agents in a certain areas and I ask them to do a certain task are they doing just that or are they doing a little more and uh although I've taken care of a lot of their access control and they shouldn't be allowed but you know agents being agents uh you know they are supposed to be able to think by themselves and if they're doing that then maybe they are uh doing something which is uh which I I didn't want them to do. So I'm trying to develop something on the behavior front as well. So this is what I mean by security using AI. Um and
all right so key takeaways from uh this talk much because of AI there's the attack surface is increased when whenever you hear about AI security there are four angles to it one is security from AI because social engineering attacks at scale are happening because AI tools are available to attackers as well security for AI which is protect the AI I system itself it's like a nent baby you have to teach it using guardrails if it's taught the wrong thing it'll do the wrong thing then security of AI um because of different aspects a hallucination may happen and uh you have to know how to use it correctly and security using AI which is creating more security features
uh uh using AI so and developers uh miss the obvious stuff and uh there are guardrails that you should be and and all that stuff. So one of the more popular things that are happening is uh human in the loop. Very very important thing happening. A lot of automation is happening but at the end a human has to go and approve it. That's a very popular concept that is being discussed and is getting uh getting adopted worldwide. So with that I would like to come to an end of my u session and uh >> thanks that was very interesting and very innovative part of how to take AI into the next step. We have run out of
time. If anybody has questions >> you can connect me offline as well. I my email is anban@minarange.com. >> Perfect. Oh thanks a lot. Thank you. Then have a nice day. >> And move to the next session. >> Thank you. Thank you.