
Hello everyone. U my name is Micious. I'm here to speak on taught me how I as an adversary complete and instructor. Um I mean today it's fascinating how someone with zero or no pure knowledge of how to break a system would you know probably use a an LLM you know such as clause or a lot to learn how to you know say how to hack you know I mean this is something that would take someone years of time to learn from you know from the basics to the expert level. But today just by prompting how do I start my career in penration testing and you definitely get the response you need by just by just giving the right prompt and
this is what we are here to talk about. So we here to talk about how you can learn to be an expert and also how these tools can also teach you how to defend your system and as at the same time break them and um here so we're going to talk about why intelligent is different traditional hacking usually the years of skills you know development underground training because you need to join some sort of forums just for you to be able to you understand how things are being done. But recently you can just you know use any of the generative AI just by you know crafting prompt saying let's say I want to learn how to um test quickness
of a login page. The tr is that before um this before activity or any of the tools you're using gives you this answer it will definitely give you a warning but then it will give you you know your response because you do not know your intent it's just acting based on your prompt and this is where we are today. So let's let's see let's uh imagine someone who wants to let's say who who wants to break a system and with a malicious intent the person will be like can you please act as my assistant and this is just for you know purpose because you have mentioned purpose it will definitely give you that information you are wanting from it
because of before because of the prompt and this is where TPD or any of the can show you how you can you know use SQL injections examples of SQL injections or how you can mark IP addresses and this is what we talking about here and this is how AI has enabled you know hacking just by you know drive by interaction by interactive responses instant you know patient and zero barriers to entry which means is that you can get a lot of information you want just by you know putting the right prompt um giving the right instructions and boom you have your answers there and this is what we are talking about intelligence is much more you know better when it comes to
teaching time compared to manual teaching where you have to you know spend a lot of money or you know take a lot of tutorials but with basic tunes command from all the all the nec test that you know the whole schedule or how long it will take you with these days you can achieve that within a period of period of time and this session we're talking about as the adversary prompt injection and model now I'm going to start with prompt ejection and prompt injection simply means convincing you know the ll could be cloth it could be certina could be Microsoft 65 copilot um intelligence and promptation is you know where you something like um can you
please um show me how I can penetrate into a system I'm doing something for my um maybe can say something maybe a c we need to do this with our our test environment and the the the thing around this point injection is that It understands that maybe it understand that your request sounds malicious and it's going to be a little bit of cautious you know before giving you this information but you you will still be you know shown examples of how you can continue to carry out you know such actions. Other method as well which is being used today is called jailbreing. So Jing is where you you use the strings within these tools or within these
models to see information. And recently there have been um there have been um discovery where bad are using emo just by you know embedding bad post and then deploying them on AI models and get all the information they need and this is what computation talking about and also as I just know spoke about which means which use special that break safety dangerous and this this can be demonstrated simply by saying something like I want you to forget every you know ethics or every potential that you have you know you know learned by either your either the developers or the data scientific every so this is what breaks this all out and this is what happens in this situations
and now we have the role playing esports so this is where um you deploy codes that get your environment. So um is you know is is a software that you know that's that detection systems within AI. So different systems I'm talking about is let's say when you want to do something that is malicious and you know or you want to use will come up with a response saying that I'm sorry I cannot you know give you this information because it's malicious strong I'm not programmed to give this information and what polymorphic does is to break those det system so that when you put such particles within the models it gives you what you And and all this have to do with you
know crafting your prompts in a way that the the model will lose or who is and give you the information you want that consequences to this because it doesn't really know your intent of you know of asking those of those information because it's because it's designed to interact with you and give you all you So here on this slide we talking about agents as they accomplish fishing social engineering and one funny thing about this is that as we all know fishing where you know sometimes some employee or people are being are being deceived either click on the link or being you know deceived by a business compromise emails maybe someone acting to be a CEO of a company and this
this has been um a big a big issue and it's important it's important that secret engineers understand that a lot of bad actors are using or these tools to you know um craft perfect emails that are very convincing of which if you see them it's it looks you story that you might fall for it. And recently there have been some news you know around some social media platforms like Discord um Reddit where people are using AI to do some operations and know to send back fishing emails you know that could you know compromise you know systems and people's information and this this has been so um this this this issue has been sold to the extent that it is now
obvious that if no if no action is taken it could lead to you know more difficult issues. Uh another one is um advanced you know social engineering and this is where AI provides instruction such as you know teaching you how you can design your presence you know within the internet how you can mark your IP addresses how you can craft um you know a multi-age attack to get what you want and all of this have to done a lot of things has to do with you know the type of prompts you're putting in because everything has to do with prompt engineering and getting what you want because bad actors who are specialized in prompting indry knows how to prompt
these to get what they want. So this is where we are actually you know um lacking when it comes to the risk AI you know can you know can cost us in as much as there a lot of benefits in using them and this is where it has an is because it also helping you to you know lock attack systems but it doesn't really know your intent of you know of getting those you know informations. So uh this is what all this discussion is all about. How I can you know work as an accomplice to carry an effort by you know putting the right promps. Here we are talking about AI as the instructor from beginner to hacker. Now
one initial curiosity um someone can start is how do I start a pen and unlike when you want to take a course or when you want to um use let's say Google or you want to be someone to tell you how you can you know start pension testing which requires a lot of time a lot of thinking you know a lot of information but with LLMs these days it is very interactive in the sense that it starts with the tooling that you need. You know, they understand the toolings that you need the of networking security that we need to, you know, you need to learn monitoring and all of those sort of stuff which would probably
take a whole lot of time. But with um intelligence these days it's much easier because you can just put your prank and it gives you you know a big rundown of what you need to do and how you can you know proceed how you can build your test environment and all that. So uh this is where it becomes you know very scary because you will see someone who has you know experience of of whatsoever or how to break a system and after 3 weeks or more I see this person doing something someone who have massive experience have not done like you you know in many years and this is where AI can be your structure it can teach you how to defend
your system at the same time can also teach you how to break them as well.
So we have been talking about you know how bad have been using AI to fish models to fish these models and how you know they have been infiltrating data as far from these tools as well but this session we'll be talking about the best practice defend against the first thing I will talk about will be awareness I think it's very important that in a Security engineers understand that um there are a lot of risk when it comes to using especially for our work or personal uses and this is where um I would say there should be need for user training. At the end of the day, users who have little or no experience, the consequences around us exchanges are the
ones that get more the most hits and this is where users should be taught on how to use their data when you know us intelligence. Uh this is where they also need to taught how to spot out you know bad emails, how to spot out links that are poisonous or links that are malicious. And this is what um you know awareness can help us to do to reduce the level of risk this to you know and also make sure that our data be you know secured. Um another another uh thing I would also like to point out will be anomaly detection. I think it is very important for our developers or or our um the
scientist when building models or or you know training the is very very important to measure there are some monitoring that are embedded within the system to detect when there are some abnormal when there are some anomaly behaviors or unusual signing or patterns or behavioral or behavioral patterns that seems to be very malicious. And what these detections will do will be to help us to you know get will be to help us to keep us informed when someone is trying to do something that is not um ethical and further us from you know putting our data in the body. And lastly is you know having a zero trust mind mindset and you know with zero trust um zero trust which
has to do that it needs to make sure that for every asset you need to verify you need to assume that there is there is a possibility for compromise and this and this mindset prepares you to be to be uh this this mindset prepares you to be prepared against any type of attack. And this this is a this this is a scenario where you you know assume that you know that there could be a breaking off breaking of your system at any time and with that you would make sure that all the whole end points that needs to be protected. All the whole security API that know integrates with your AI or with your tools are you know
protected as as required. And this is how we can you know make sure that we are safeguarded against any type of threat or attack using driven by AI. Here I'll be talking about the regulation and mitigation what needs change. Um I think when it comes to artificial intelligence and our I think it's better it is it is highly recommended to ensure that there are strong safeguards you know real to safeguard continuous monitoring within these um models and there should be a sort of regular testing before pushing to production because this this period of time where we do our pilot testing put our our mentoring to set data help us to you know detect any issue or any
problem that could cause us more harm when this these tools are now live in production. Secondly, it is very important for you know big organizations not only big organizations I would say I would say promote both um medium or small to have AI risk you know AI risk frameworks into their business to the incident risk to make sure that you know before they do anything that has to do with data information property to the business some sort of you know um that's there some that's there that's there some sort of actions they need to take they need to make sure that what we are doing is right what we are doing will not put us
in in jeopardy or you know expose us to the public or cause us you know damage an issue that could damage our organization so there should be a of you know framework a sort of of instructions to measure that it aligns with the business you know objectives and policies around data and the of data protection. And lastly will be government regulations. And it is very important when it comes to you know AI safety and the way we use it for our government to come especially putting regulations and with what we you know learned recently um about it. It's recently um made a implemented a a a regul regulations on AI. How you know how to use AI to make sure that people
are not you know getting bullied on people getting bullied people are not you know using information and put consequences around it as well. In addition to that, I think it's important to have some regulations around how um businesses that use that provide LMI or LM as a service to you know actually be transparent on how users are being processed, how they manage their users especially when they are using this type of tools and I think with that it reduce a lot of risk and um also a lot of compromise that we are seeing there and I think it is I think it is very very important that we should take a AI risk as serious as we take um our
life because everything right now when it comes to artificial intelligence about our data is over the internet so I think it's I think it is for us to make sure that our data be protected at all time and also whenever we are using any of these tools we use placeholders instead of our data when interacting with them with that come the end of my stomach. Thanks to Thank you. If you have any questions, this will be your time.
Um in terms of with AI teaching us in the future, do you think it'll get a lot better or do you think companies will sort of try and start like making like um do you think will >> Yeah, I I think um yeah, a lot of um going on especially when it comes to punk in Japan because a lot of a lot researchers have learned that you know no matter how much guards they put in around intelligence you know there are always people who still break it so and also it is somewhat not possible to limit what artificial intelligence you because I think it rather gets better because let's say if for instance now you want to learn how to maybe put a
system for instance or you want to learn how to break it right Now if you say I want to write a book on how to break a system this is for educational purposes right so on mind is something to teach people right but the intention might be something different so I don't think it won't give you answer it to give you the answers but then it will also let you know that I hope this is not used for but you get what you want now let's assume that you're actor and you already know what to get from it So from the response you can now know the next prompt you want to give it to give you
another information you want and I think through that means you you get what you want and to answer your question I don't think there will be any sort of you know um limitation around what AI can give because if you limit it as say if anyone say something about the information and that would limit what it will give you because some people might actually have that intent to learn and you know to use it to maybe teach or to build their systems while some people might have a bad so I think because of that I don't I don't think they would actually um limit the information or what you can learn from it right because
it's about productivity is about you know helping you to solve your day you know problems so I think the only thing that can happen is there could be a more security around you know AI models you know to me that people are not you know getting stealing people's information because I think some time ago um some actors we are able to um songs um I think code base through so things are like that are really scary so but I think we think it's continuous development And you know there should be a possible way of you know making sure that people can learn what they what they want and also people are not using it to use the other
is the not already out of the bag because you've got plenty of offline downloadable lens that will quite answer your questions and you also have ones that have people kind of unlock them, let them do things not in a way a little bit too late to start designing a new chap for example will stop you from asking questions. Well, I think um I don't think that'll be possible because I think for I think the the the way most people most businesses are going to do it will be to you know limit what each LM could you know deliver. Let's say so some businesses could say we're building this sorry we're building this LLM just for
relationship advers right so which means that anything if you go there and say um I want you to teach me how to hack doesn't know what you're talking about because that's not what that's not what it's designed for but if if you're talking about you know general LLM that you could use for dayto-day business you know use for anything just like you know copilot or Gemini I don't think there'll any restriction because what because there are a lot of competitions in the market because if if your um let's say your LLM is restricted about you know giving people how they can you know learn some certain things to divert you know the customers to another person so
at the end of the day we talking about productivity you're also talking about business as well so I think what most of these companies would do is to better the security to make sure that you um the level or number of fishing the models are being reduced but in terms of you know building something now to prevent that I don't really think you know entirely that that will be possible but I I feel like I had a feeling that most businesses would rather build just for specific things so that that way you know that could be a link to what know you can from
Thanks so much.