
Five simple rules for securing the public cloud, and how some failed at it

BSides Warsaw · 56:37 · 2.1K views · Published 2017-10
About this talk
Author: Piotr Berliński
Transcript [en]

Hello everyone, my name is Piotr Berliński. I work at Prevenity, as does one of the other speakers here, Mariusz Piątek. The company deals with audits, pentests and security training. I've been working in security for 4 years, and in IT for 10. I also use the cloud privately; I've been on Amazon since 2007 with varying intensity, so it's been a while. I won't be promoting any particular cloud provider in this presentation, because the biggest ones offer similar things. Let's get to the meat. What will we cover? At the beginning I'll say a few words about the general concept of security in the public cloud and the basic definitions of what it is, and then I'll give examples of failures, or rather security incidents, related to the public cloud, ranging from individual developers doing something as a hobby to big companies and big millions of dollars. So, first a basic definition that will help us understand what this is all about. When I talk about cloud computing in this presentation, I mean the public cloud, which is what is most often considered "the cloud": not a hybrid or private cloud, but what is offered by Amazon, Microsoft and Google, and also by our local e24cloud or Oktawave. Here the most widely accepted definition of the public cloud, from the American NIST, may be useful. A cloud must meet five conditions to qualify. The first is on-demand self-service. It means

that a client who wants computing resources or some other service provisions them on their own, with no interaction with support required. What the client clicks, the client gets; there's no philosophy here. Second, broad network access: since the cloud is public, everyone must be able to reach it. That's fairly obvious; if it's not available to everyone, it isn't public. Third, a pooled, shared set of resources. This is one of the core concepts: everyone runs on shared infrastructure, and that has consequences for security as well. Fourth, elasticity and scalability: if I suddenly want 100 servers, I click and after a shorter or longer while I have them, and we're talking minutes rather than hours. And when they're no longer needed, I click delete and that's it, they're gone; everything happens automatically. Fifth, the payment model, which is also key: as quickly as resources appear, they start costing money, with the details depending on the specific implementation. If instances are suspended or removed, I stop paying for them. If I suddenly spin up four very powerful machines, I pay while they run and stop paying when I cancel them. That's also quite crucial. When it comes to security, the concept of shared responsibility is very important. It means that responsibility for the security of the entire solution

is shared between the client, whose share is larger or smaller depending on the model, and the provider. This is a practice, not a law; at most it's written into a specific provider's contract. If we have classic infrastructure, our own server, we are responsible for everything ourselves: the physical security of the server room, whether someone can walk in uninvited, whether there is fire suppression and air conditioning, how the network is organized, whether the hardware, i.e. the servers, is of appropriate quality, and then the security of all the software, the entire solution. If we move to the cloud, say to the most popular model, Infrastructure as a Service, then at this point we rent the server from the cloud provider and have no influence over that whole bottom layer. The only leverage we have is the contract: we can request an audit or commission an external one. Whether the server room is secure or air-conditioned we can only verify by audit, which with smaller providers might actually be possible. I assume that if a small company said it would like to audit Amazon, it would rather be laughed at, and that would be it,

but at this point we have to trust that external firms have already done those audits, and we are able to obtain such documents from the provider. In fact, at the level of the hypervisor, i.e. the virtualization layer, that is where Amazon's responsibility ends. If we put an operating system on top, whether it's Linux, Windows, whatever, then its security is our duty. We can't complain to Amazon or Microsoft that we didn't patch the latest bugs in Windows and someone broke in, because they have no influence over that. The same goes for all the layers above: the database, the application stack, and our application that actually runs there. If we move on to the next model, Platform as a Service, then the operating system is no longer our concern; it too is provided by the cloud provider. Depending on the platform we buy, whether it's a database or an application server, its security is something we can demand from the cloud provider, and as we'll see, they are the ones responsible for it. Our duty is to secure the application itself: make sure it's well written, that access to it is safe, that login credentials and similar data don't leak. And if we use a Software as a Service model, meaning we rent only a specific application, a CRM or anything else, for example Gmail or Microsoft Office 365, we don't

care where the server is, whether it runs Linux or Windows, or who patches the latest bugs; that is simply the provider's duty. What remains our duty is the safety of that last layer. Understanding this model is quite important if someone plans an implementation: if they want to buy infrastructure or a platform, they must have specialists for all those lower levels and take care of them; if they buy higher-level services, they don't need a Windows or Linux administrator, because the provider will do it for them, and usually does it quite well. Is this concept clear now?
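The shared-responsibility split just described can be sketched as a small lookup table. This is only an illustration of the idea from the talk: the layer names and the exact split per model are a simplification, not an official matrix from any provider.

```python
# Illustrative sketch of the shared-responsibility model. The layer names
# and the per-model split are simplified, not an official provider matrix.
LAYERS = ["facilities", "hardware", "hypervisor",
          "operating system", "platform/runtime", "application",
          "data & access"]

# Index of the first layer the *customer* is responsible for securing.
FIRST_CUSTOMER_LAYER = {"classic": 0, "IaaS": 3, "PaaS": 5, "SaaS": 6}

def customer_layers(model: str) -> list[str]:
    """Return the layers the customer must secure under a given model."""
    return LAYERS[FIRST_CUSTOMER_LAYER[model]:]

print(customer_layers("IaaS"))  # the OS and everything above it
print(customer_layers("SaaS"))  # only the customer's own data and access
```

The point of the sketch is exactly what the talk says: with classic infrastructure everything is yours, and with each "higher" model the provider takes over more of the bottom of the stack, while the top layer never stops being yours.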

Now for specific cases of what has happened; in recent months this topic has been all over the security media. The first story is from December 2013. It's a hot Sunday morning, because it's happening in Australia. Luke is a hobbyist developer who also publishes various open source projects. He wakes up to an email from Amazon saying his account has been compromised. He logs in and sees a bill of some 3,000 dollars, which for a young programmer is impressive. He checks what's going on and finds out that during the night someone created 20 of the biggest, strongest Linux virtual machines, and they had been running like that for two days. He immediately stops and deletes them, keeping one snapshot to check what was happening there. It turns out that whoever got into his account had been mining Litecoins for two days, apparently without any great success, because he even gives the address of the specific mining pool, which probably doesn't work anymore. He changes his credentials and writes to support. I have to admit that support handles these incidents quite leniently; at least from what I found on the internet, in most such cases of unauthorized use of someone else's cloud account, Amazon has cancelled the charges, at least in the cases people wrote about. What turned out later was that two days earlier Luke had decided to open

one of his projects on GitHub, one he felt was no longer particularly important to him and could be useful to the community, and he didn't really check what was in it. It turned out that some months earlier he had committed the access keys to his Amazon account in the app's code, and they had simply stayed there. GitHub and similar sites have for some time been scanned quite intensively by people looking for exactly such things, and the providers are also trying to do something about it. So, in a few steps, what could be done better? Not necessarily in this particular case, but things that I assume are quite obvious to most people in this

room. First, enable multi-factor authentication. With most providers this is very simple: we get an SMS or use a virtual token like Google Authenticator; one click in the console and it's basically done. Interestingly, as far as I know one of the big providers, IBM, doesn't make it available yet. What's next? The obvious one: change the password to something stronger, regardless of whether it's a serious corporate account or an account we created just to click around and see what this cloud thing is all about. It also matters that in most large clouds, creating even a test account requires submitting a credit card, and they check that the card actually exists and is not a virtual one. I know, because I tried a few times to set up a test account on Microsoft Azure that way; they check not only that the card is current, they actually charge one dollar or one euro to verify the account exists, so it really has to be a working card. Even if someone doesn't intend to use any paid services and limits themselves to the free tier, and most providers have a period, a year or half a year, during which the smallest instances, for example, can be used for free, we may not care much because it's a test account with nothing important on it. But if someone breaks in, the bill will grow regardless

of whether it's our test account or not. What's interesting is Amazon's password requirements: a few days ago I wanted to change my password, and it turns out nothing is required at all; Amazon was perfectly willing to accept the password 123456. So, yes. I do have multi-factor authentication, so that alone won't get anyone in, and I hope the login doesn't appear somewhere during the presentation. The next important thing is the root account. In cloud services, the root account is the main one created when the account is set up, and it can do everything, uncontrolled. Just like in operating systems, it's a bad idea to use it on a daily basis. First, it should be well secured, and secondly, it should not be used: create accounts with more limited permissions for everyday administration. And if we have an application, it should use access keys, which are meant for applications and scripts. We don't use a username and password; or rather we do, but the username is a long random string and the password is an even longer random string. We use such keys, but not in the code: in separate files that we do not put on GitHub. Next, about users. If we don't use root, we create accounts. Amazon, for example, right at login lists the basics: remove the root account's access keys and don't use

it: enable multi-factor authentication on the account and create users. And when we create users, we can define very precisely what they can do. If we run an application, or use some external application, for example for backup, it should not be able to create virtual machines (why would it?), nor should it have the right to read the logs of our account (again, why?). So here we can manage these users very granularly, and it works similarly in every major cloud. It is also worth rotating these credentials; if they ever leak somewhere, there's a chance they will no longer work. As for access policies, we can define them down to the level of IP addresses. If a script will run on one specific host with one specific IP address, why should the whole internet be allowed? Limit the exposure, so that in case of some failure or other compromise the leaked credentials are of little use. We can also restrict access to specific services, which is fairly obvious, and even to a specific time window: if we create a test account or an account for some project and hand it to someone not necessarily 100% trusted, we can limit how long the account can be used. The next thing is monitoring. Monitoring is actually done by the attackers too, because they scan the internet in search of exactly this kind of keys. You can say it's bad, but that's life. And this has been going on for some time: it was a

case from 2013, and new ones keep appearing. Amazon, for its part, monitors leaks of such keys, and you can get an email from Amazon that your credentials have leaked, as happened in this case. I didn't manage to reproduce that effect: I put test keys on GitHub and on Pastebin, and nothing. They're still there; even the deleted Pastebin paste is still in Google's cache, and I never got any email. But what we should do as users is, first of all, review our code before it becomes available to anyone. If we paste our scripts on Stack Overflow to get help, those scripts definitely should not contain login keys. The same goes for sharing a repository on GitHub, and I would suggest checking even if it is not a public repository, because it's easy to miss a button and share something that shouldn't be shared, and such data is harvested really quickly. If we work in a large corporation, we can monitor our own users just as well to catch key leaks early; at least in Amazon's case, quite simple regular expressions are able to match such keys. Here is an example email from Amazon sent when someone slipped up and their keys leaked. As I said, many people write that something like this happened to them; I didn't manage to get such an email, although I tried. What I think is also quite important: some people write on

the internet: "Haha, I've attached a prepaid credit card with only $1 on it to Amazon, they can't do anything to me." Unfortunately, it doesn't work like that. Payment for the services is collected once a month, and the fact that there is nothing on the card doesn't help when the bill arrives at the end, because Amazon does not forget about payment. As I mentioned before, a working card is required to open the account in the first place, so there is a real card there and they are able to identify the user who actually created the account. Another story, definitely fresher, because it's from April this year, and definitely similar to the one from 3-4 years earlier. All in Data is a company from Amsterdam, I believe. Its boss, Walter, is also a programmer, and he recruits people who are supposed to know about cloud technologies. As part of the recruitment process he gives candidates tasks in the Amazon cloud; they have to build something and send him the results, and his engineers check how it's done, what the quality is and whether the candidate is sensible. It looks like a pretty cool recruitment idea. Walter created a separate account for this, so that nothing would happen to their main corporate account. Then, during a review of a project submitted by one of the candidates, it turns out the candidate had put it on public GitHub, what a surprise, and forgot to remove the keys. The engineer who checks the app tells Walter to be careful,

because something like that has happened before, but Walter ignores it completely. It's Friday, he still has a lot to do, there's a weekend with his wife ahead, why bother with such trifles? After the weekend, or actually a bit longer, after a few nice days off, he logs into the Amazon account and sees a bill of almost 100,000 dollars, which compared to the 3,000 dollars from the previous story makes a much bigger impression. Checking further, he of course removes everything and changes the credentials, and it turns out that in every region the intruder had created 20 of the largest machines, and something was running on them. He didn't write in his blog what it was; he probably didn't keep a copy to dig into it. But with high probability we can say that here too someone was mining bitcoins, litecoins or some other currency. It's a very simple business: even in a short time you can make something out of it. He also writes to Amazon; he had gone through various channels, and although he had this separate account, he started writing to Amazon from the main one. It took a while before he removed the instances and got a response from Amazon. There was no follow-up on whether the debt was forgiven or not, but I think in most companies the head of IT would make quite an impression on the board if something like this happened within a few days. Here too there are a few simple things that could be

done, and really everyone who uses the cloud should do them. The first is billing alarms, which can be set in every cloud. If we don't intend to use any paid services, we can set such an alarm at one dollar or even at zero. Then, if it turns out that someone else is using our infrastructure, or that we made a mistake ourselves, because even experienced engineers sometimes forget some instance from a test project, or a VPN nobody uses at the moment, or one of the other hundred services the provider offers, something theoretically unused that isn't expensive per hour but left running for a month becomes a bit expensive, and it's a pity. So it's enough to set an alarm: if we normally pay 10 dollars, and it looks like we're about to pay 20, it means something is wrong, some mistake or break-in, and we find out early. It really works on estimates projected a few hours ahead: if the infrastructure currently running keeps running for the next hours, this is the bill that will be generated. So why pay for something we never wanted? Another thing not enabled by default is CloudTrail, as it's called in Amazon; in Google Cloud Platform or Azure it's called quite

similarly: these are the audit and administrative logs. Who logged in, through which channel, was it the console or the API, with or without multi-factor authentication, the things I was experimenting with earlier. Such logs are very useful later for analyzing who did what, or who created the things we don't want to pay for. What's important is that the users we work with daily should not have the ability to modify them. A very simple scenario: if the attacker gets the main root account, the first thing he does after logging in is delete those logs, and that's it, we can then go whistle for whoever got in and what they did. So this separation of accounts is quite important. What else can we do with the logs? First, we can ship them somewhere else, whether to some storage, some analysis system or another solution. We can also generate alarms: if there is a user that should never log in, let's set an alarm so an email arrives if it ever does. Another case, recently very widely discussed in the media because it makes an impression, although from the security point of view, as I'll show in a moment, the mistake is very basic. The topic has been known since 2013 and not much changes, and I'll tell you why: because on the provider's side there is nothing more to fix. One of the researchers who deals with the topic of storage in the cloud,

in the case of Amazon this means one of the flagship offerings, probably the second service they launched, back in 2006: the Simple Storage Service, Amazon S3 for short. You can't quite call it a virtual disk, but it's something like that: you can upload various objects and files, you can also serve a simple website from it, or keep backups there, which is quite popular. One of the researchers from UpGuard, Chris Vickery, who hunts for resources exposed on the internet, found a bucket that interested him, containing data he thought should not be publicly available: data related to the National Geospatial-Intelligence Agency, the organization that supports US intelligence with satellite and aerial imagery analysis, so the Pentagon, the Department of Defense, those circles. Apart from data he wasn't able to assess, there were, for example, SSH login keys and credentials to other Amazon accounts, which he did not log into and did not inspect. At first he identified one of the cooperating contractors, Booz Allen Hamilton, a company that already has security incidents on its record, as the probable source of the leak, and wrote them a message that something was wrong, that data which should be available only to people with Top Secret clearance was lying on the internet for anyone to view. He got no answer. He wrote to the government agency, and after 9 minutes it was fixed. And he got some kind

of answer, but the next day the company called him and said that according to them everything was fine, nothing was exposed. Which also shows that the process doesn't work as it should. Later they explained that it was some kind of test environment and there was no real data, but nobody really verified that. Chris is presumably a US citizen and was afraid to dig into this data, because he could get too curious and end up in some odd places. The mechanism is simple, and there have been a lot of leaks like this lately. Alliance Direct: data on the creditworthiness of American citizens. Deep Root Analytics: data on US voters, quite a big leak. World Wrestling Entertainment, the organization with the guys who fight in the ring and draw a lot of attention: personal data of customers who shopped there, including from Europe. Dow Jones: subscriber data, also on a public storage bucket, subscribers of the Wall Street Journal among others. Verizon: at least two incidents, one involving logs and data from a call center system, the other data from production systems. TigerSwan, a company working with the US security apparatus: personal data of people holding Top Secret clearance. There's also a case from outside the US: credit score data from India, several million records. The fattest one is credit card data, which from what I

remember was full data including the CVV code, so you could make purchases with it. Then there's tracking data of cars from monitoring devices, and a leak from September, announced only 2-3 days ago: Accenture, a consulting firm, exposed customer data, certificates, everything an attacker could wish for. There is a page where you can follow these kinds of incidents, and you'd be surprised how big the affected companies are; these are not small shops with no money for IT, and yet they suffer such embarrassing losses. The security model of this service hasn't changed for years: by default, a storage container is not accessible from outside at all. We can put data into it ourselves, or from our applications. If we want to make it available externally, we have to set that option explicitly, so that someone actually, specifically clicks "yes, this data is to be available outside". The only way to find the data is to come up with the HTTP address of the bucket, and the name can be random, but it is often simply the company name, or company-backup, company-test. Scripts that do this kind of guessing have existed for a long time; still, someone had to make the bucket available, and then someone guessed the container's name and started browsing. Why did it happen?
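As an aside, the name-guessing just described can be sketched in a few lines. The URL format below is the public S3 addressing scheme; the company name and the candidate suffix list are invented for illustration, and real scanners use far larger word lists. The sketch only builds candidate addresses and deliberately leaves out the network check.

```python
# Sketch of the bucket-name guessing described in the talk. The suffixes
# and company name are made up for illustration; real scanners use much
# larger word lists. This only builds candidate URLs, it does not fetch them.
SUFFIXES = ["", "backup", "backups", "test", "dev", "data", "logs"]

def candidate_urls(company: str) -> list[str]:
    """Build public S3-style URLs for likely bucket names of a company."""
    names = [company] + [f"{company}-{s}" for s in SUFFIXES if s]
    return [f"https://{name}.s3.amazonaws.com/" for name in names]

for url in candidate_urls("examplecorp"):
    print(url)
```

A real scanner would issue an HTTP request per URL: a 404 means the bucket doesn't exist, a 403 that it exists but is private, and a 200 with a listing that it is public, which matches S3's documented error behavior.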

First of all, Amazon changed the interface in recent days. Until a few months ago there was an option to share a resource with the "authenticated users" group. The first thing that comes to mind is that these are the users of my organization: I click, and they'll be able to view it among themselves. It turns out Amazon understands this a little differently: authenticated users are all users of this cloud. All the corporations, but also every student who has just set up a free account and is checking out what the public cloud is all about. I don't know why such a grant was ever made available, but there it was. And this is one of the most common reasons these resources leak: someone assumed that "authenticated users" means their own users, while it actually means every cloud user in the world, so in practice public access. There is also the "All Users" group, which in the plain sense is simply everyone. The recent interface change is that it's no longer so easy to pick those groups: you have to explicitly spell out what should be granted, or use a resource policy. And with that you could practically close the subject of AWS S3, or storage-service security in general, because the whole secret is this: if you don't want the data to be publicly visible, then simply don't share

it. As you can see, many companies didn't manage that. This is a notification from Amazon, which, on the wave of these incidents involving ever bigger fish, now sends out warnings that something may be wrong with your account, that you are sharing some resources, and asks you to check whether that's intended. This one concerns my own account, because I hosted a website on it for some time, which is very easy to do and one of the supported features, so in this case everything was fine. But you can see that in the wake of the big leaks Amazon is scanning and writing to users: "Hey, did you actually mean to share this?" What more can be done? If someone plays with this as a hobby, they can simply look around their account for silly mistakes, and if someone works in an organization, there are a lot of tools to help: Detective, CloudSploit and others. Some of them are also used by the attackers to hunt for such buckets, but let's say they can be used for good purposes too: for auditing a bucket, checking whether it is exposed and whether anything looks suspicious. Amazon itself currently provides as many as three different tools that approach the subject slightly differently and check whether data is publicly available. And the latest Amazon hit is of course artificial intelligence, which goes through our storage resources and checks

whether the data is sensitive or not, based on AI analysis of what is there, whether there are credit card numbers and so on. It's not particularly cheap, but it exists, and I guess the organizations mentioned on the previous slides could easily afford it. Another case, for a change on Microsoft Azure. One researcher discovered, while scanning Shodan, because Shodan very elegantly indexes resources in the cloud as well, that the NFL players' association of the American Football League stores logs from its application in an Elasticsearch instance in Microsoft Azure: a place where you can put your logs and analyze them later. Interestingly, it apparently didn't consider them worth protecting, because the instance was wide open. Here you can see an index called "Please read this", an obvious sign that someone had found this resource before, probably with a ransom demand: pay, or the data gets leaked or deleted. For some time one of the most popular pieces of malware has been scanning for exactly such instances; it can encrypt or delete publicly reachable resources and then demands payment to undo it. In this case it's obvious what happened. If you are interested in American football: the data included Colin Kaepernick, with his private phone number, and apparently since the disclosure he has received a few prank calls, because there were some political issues in this league. There is definitely a large amount of money involved. If someone thinks that in the NBA

or in the American hockey league the players earn a lot of money, it's modest compared to this league; the scale here is much bigger. A similar case, also an exposed Elasticsearch instance: Chris Vickery also found the data of voters from Mexico, this time hosted on Amazon. The number of records was ninety-something million, and as I checked, the population of Mexico is around a hundred and twenty-something million, so probably everyone eligible to vote was in there. Interestingly, he tried to contact the Mexican embassy in Washington and was completely ignored; he tried some government agency and was ignored as well. So he went to a conference and decided to talk about it, and then someone finally got interested. Apparently in Mexico every party has access to the database of all voters, and to make analyzing the voters easier, they put it into the cloud, in a way that let everyone else analyze the voters' data too. And here again not much can be done on the provider's side, because if someone made a resource available, they presumably did it on purpose. The simplest things available in the infrastructure are plain firewalls: static rules for ports, for where connections may come from and to where. That is quite helpful, and you don't need any amazing knowledge. And one rule: if we don't want to expose something to everyone, why give it a public IP address at all? You have to pay for it, so if

someone exposed something to the outside, they did it deliberately. And in the case of services like Elasticsearch it's simply a bad idea: it is not a service designed to be put on the internet, especially not without a password. In many cases it would be enough to connect to such infrastructure over a VPN, and every cloud lets you do that, either by a more home-grown method, simply setting up a low-powered virtual server running OpenVPN, or by buying a ready-made VPN service from the provider. Something like that makes life much easier, so if we don't really need to expose something to the outside, let's not do it. Another helpful step is to use one of the many tools for scanning the network. Anyone who deals with security certainly knows them, and for those who don't, they are really not difficult tools and the basics take a few minutes: simply scan the address, or addresses, that we bought from the provider and that are public, and check whether only the things that should be there actually are. And the last case, I think the most spectacular one. "Murder in the Amazon cloud" surely sounds terrible, but here's the story. Around 2012 or 2013 there was a service known among developers called Code Spaces. You could call it a competitor to GitHub; generally it was a company that helps developers to

host repositories, whether Git or SVN. They boasted, I don't think I have it on the slide, that they were super resilient, with cloud-based infrastructure that feared nothing. What happened? One day they were hit by a fairly large DDoS attack. They could have survived that, it wasn't that serious, but at the same time someone wrote to them with a ransom demand: if they didn't pay a certain amount, probably not yet in bitcoin, as it wasn't that popular for extortion in that period, they would have problems. They decided it was some amateur, that they would handle it themselves and wouldn't ask any outside company for help. So they did it their way. They logged in to their machines in Amazon, and it turned out everything was fine: nobody had logged in, the data was safe, great. They also logged in to their Amazon account to check that everything was fine there. It turned out it wasn't: someone was already inside. They changed the password and started locking things down, but the attacker had left himself a backdoor and watched what they were doing. He saw they weren't going to negotiate; the ransom he demanded wasn't coming and they were trying to restore order, so he went through the instances one by one and just deleted them. First he deleted the virtual machines, then

backups on Amazon S3. From what I remember, there was very little left. So within a dozen hours, the company realized that there was no infrastructure. It was impossible to reverse it. The cloud works automatically, if someone wants to delete a machine, they delete it. Backups were only in the S3 cloud and within one day the company collapsed. They decided that they don't have any resources to restore it, they don't have backups anywhere else. So they closed down. I guess that now few developers are dealing with this. I remember that there was a company called Spaces which had a good reputation back then, that it was selling cool services. Now even this domain is unsold and the company has been completely forgotten. There is absolutely no trace of
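As an aside on the quick self-scan recommended at the start of this section: even without a dedicated tool, a basic TCP check of your own public addresses can be done with the standard library alone. A minimal sketch (the host and port range are placeholders; scan only addresses you are authorized to test):

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        # connect_ex returns 0 when the TCP handshake succeeds
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Example: check a small range on localhost; substitute your own
    # public address (e.g. the elastic IP bought from the provider).
    print(open_ports("127.0.0.1", range(20, 30)))
```

A real audit would use something like nmap, which also covers UDP and service banners; the point is simply to compare what is reachable from the outside against what you intended to expose.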

What are the conclusions? Well, don't keep all your eggs in one basket; that rule applies in many fields, and it applied here. What good were great resources and many servers hardened against everything, when it was enough to log in to their account and simply delete it all? The solution would have been to keep backups somewhere else: on a local disk, or in a competing cloud. It doesn't have to be anything complicated, just a copy in a second place. This is, in fact, a rule backup administrators have known for decades: a backup that sits on the same server is not a backup.
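A sketch of that rule, assuming nothing about any particular cloud: write each backup to two independent destinations and verify a checksum before trusting either copy. The paths and function names here are illustrative, not from the talk:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hex digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate_backup(source: Path, destinations: list[Path]) -> str:
    """Copy `source` into every destination directory and verify each copy.

    Raises if any copy's checksum differs from the original, so silent
    corruption in one location is caught immediately.
    """
    digest = sha256(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = dest_dir / source.name
        shutil.copy2(source, copy)
        if sha256(copy) != digest:
            raise IOError(f"checksum mismatch for {copy}")
    return digest
```

In practice the second destination would be a different provider or an offline disk, so that one compromised account cannot erase both copies, which is exactly what happened to Codespaces.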

It also helps to understand how these kinds of services work. In the last few months there were cases where services hosted in Amazon stopped working, and it turned out they simply had not been designed or deployed in line with the recommended practices. With most providers, and Amazon is the example here, it looks like this: services are sold in so-called regions. In Amazon that is, for example, Frankfurt or Dublin, or regions in the US; it works the same way in Azure or Google. And within every region there is something called an Availability Zone, which you can compare, more or less, to a separate data center. And if we buy services in a region,

so if we want fast service for Europe, we choose Frankfurt or Dublin, or, in the case of Azure, one of the Nordic regions. The availability zones within a region are, theoretically at least (incidents confirm it is not always so), independent: a failure in one should not take down another. Good practice says that if you want to host a service, do it in at least two availability zones; it's quite easy to click together in the interface. As I said, there have been cases where someone made a mistake and an entire storage service in one region was lost, but those are much rarer. Or, in the case of Azure, several regions went down at once, because they had decided to roll out a fix to the whole world in one go; they later changed those rules.

So: if you want reasonably resilient infrastructure, run virtual machines in two availability zones within one region; in the IT world that is fairly obvious. If you want to be even more resilient, use two regions. That is more complicated, because you need more advanced replication, but you need to know it is an option. You also need to know how availability is counted. Amazon, for example (in some other clouds it is a little different), only counts an outage against its availability commitment, meaning it will only refund money or accept that it did something wrong, if more than one availability zone in the region is affected. If a single availability zone is down, then as far as Amazon is concerned everything is fine; one data center can be burning and the SLA is intact. So if you have high-availability requirements, two zones are the minimum.
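The two-zone minimum described above is mostly a matter of configuration. A hedged sketch in CloudFormation-style YAML (the resource names, AMI ID, and subnet IDs are placeholders, not from the talk): an auto-scaling group spread over subnets in two availability zones, so that one zone failing still leaves instances running.

```yaml
# Hypothetical fragment: a small web tier across two AZs in one region.
Resources:
  WebLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-00000000          # placeholder AMI
        InstanceType: t3.micro
  WebGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      VPCZoneIdentifier:               # one subnet per availability zone
        - subnet-aaaa1111              # e.g. eu-central-1a
        - subnet-bbbb2222              # e.g. eu-central-1b
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
```

With a minimum of two instances and subnets in two zones, the scheduler keeps instances balanced across zones; going multi-region is the same idea one level up, plus the replication the talk mentions.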

And now, heading toward the end. All these stories show errors on the user's side: people who lacked the knowledge, or the specialists, to handle the technology. So can I conclude that if I have a team of excellent administrators, and I also bring in security teams to cover the higher layers, application penetration tests and that sort of thing, then I am perfectly safe and my part is done? It turns out: not entirely. There are also vulnerabilities in the infrastructure itself. This is a newer issue, and suppliers notice these things quickly. There was the story of the bug in virtualization software a few years ago: Amazon forced people to restart their servers, because they simply had to reboot for a patch to take effect. Apparently the bigger clients were warned: "careful, there is a hole here, you will definitely want the fix." But smaller clients, I suppose, were not consulted; their instances were just restarted.

And that's the end of my presentation. I would like to finish it in a way that's not typical: usually you ask questions and the speaker answers them, but here I would like to ask a question. Preparing this presentation and following various security incidents related to the cloud, I was looking for cases where the supplier was at fault, where the provider did something wrong, like those vulnerabilities, but actually exploited in practice, and I couldn't find anything like that. Maybe I was looking in the wrong places, or maybe the data simply isn't public. If someone would like to tell such a story, I would be very interested. Thank you. Who would like to take the floor?

[Audience] I have a question. Unfortunately, I don't know of such a case; I would like to. My question is: do you have any experience with Lambda functions and any security problems in their context?

[Speaker] No, to be honest, no. It's quite high up in the stack; it's not something I know about. Anyone else? OK, so thank you. Thank you.