
Hello, thank you, and hello everyone. Great to be here again, and in my hometown no less, so that's a special privilege for me, and a difficult task of waking everyone up after lunch. But I'll do my best. To quickly introduce myself: my name is Bojan Zdrnja. I'm CTO of a company called Infigo, which does security in the region. I'm also a SANS certified instructor. I teach SEC542, which is web application penetration testing, and I'm also the co-author of that particular course. I started teaching SEC565 last year, which is red teaming. It's a cool course, but enough about that, let's jump into interesting things
because I have quite a few slides, I don't want to run over time, and of course I want to wake you up. So, supply chain attacks. This is something that worries me a lot, and I think we are unfortunately yet to see some truly large-scale supply chain events. There have been plenty of supply chain incidents in history, and today I'll talk about two very complex supply chain attacks. One is very, very fresh: it happened exactly two weeks ago, so I think it will be interesting for everyone here. Maybe you heard about that particular case, where $1.4 billion walked away from a company called Bybit. The second one will be about XZ,
which is probably the scariest backdoor I've ever seen in my life. These attacks have been around for many, many years, and all the regulators recognize this. Today we have NIS2, right? Everyone in NIS2 talks about supply chain: you have to take care of supply chain attacks. Easier said than done, as we will see. Then we have DORA for banks: oh yes, you have to take care of supply chain attacks. Well, of course we do. As I said, they have been around for many, many years, and probably the most well-known is Stuxnet, which is 15 years old now. Right? I think everyone here knows about Stuxnet, where a specific
piece of malware was planted on USB drives and then carried into air-gapped environments. But that's an old story; let's see what we have to deal with today. The ecosystem of things we use daily has become unbelievably complex. If you use Docker, this is the dependency hell you have in Docker: when you do apt install docker or something like that, this is what you pull into your environment. And this is the small version; there are even more dependencies that we trust every single day, and there are many, many such cases, as we will see in the next 30-plus minutes. Another problem with today's software, illustrated by this particular meme that a lot of you have probably already seen, and which is unfortunately very, very true, is that a lot of the things we rely on started out as someone's hobby project. Probably the best example is the vulnerability called Heartbleed. I think a lot of you have heard about Heartbleed, and the funny thing about OpenSSL, where Heartbleed existed, is that at that point in time OpenSSL had exactly one full-time employee, a single developer, and we know that almost every piece of software today uses OpenSSL libraries, right? So this meme is unfortunately very, very true. Let's take a look at the three most commonly seen attack
vectors for supply chain attacks. First, we have deception with fake packages, or impersonation of other people. I will show you one interesting example with GitHub that is not that new, but not a lot of people know about it. Then we have the typical hijacking of developer accounts. So if you are a developer here, and you work for an interesting company, then you are probably an interesting target, especially for nation-state attackers, as we'll see in a couple of minutes. And finally, speaking of nation-state attackers, we have those long-term attacks that can take years to develop, because nation-state attackers have unlimited budgets and unlimited time. They wait for that perfect moment when something will actually work. So, let's see what can be done with deception, impersonation, and fake packages. Today it's quite normal that when you install a package, you pull in all of its dependencies as well. Is there anyone in the room who ever installed something, with pip for Python or npm for Node, and then went and manually checked every single dependency? No one? No one does that, right? The usual defense is: hey, it's open source, anyone can audit it. Of course they can. But who does? Who audits every single piece of code that they install?
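To get a feeling for how fast that trust fans out, here is a small self-contained sketch. The package names and the dependency graph are invented for illustration; a real graph would come from your package manager's metadata.

```python
# Toy illustration: how many packages you implicitly trust when you
# install "one" package. The graph below is made up for the example.
from collections import deque

DEPS = {
    "myapp":    ["requests", "flask"],
    "requests": ["urllib3", "idna", "certifi", "charset-normalizer"],
    "flask":    ["werkzeug", "jinja2", "click", "itsdangerous"],
    "jinja2":   ["markupsafe"],
    "werkzeug": ["markupsafe"],
}

def transitive_deps(package: str) -> set[str]:
    """Return every package pulled in, directly or indirectly."""
    seen, queue = set(), deque(DEPS.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPS.get(dep, []))
    return seen

if __name__ == "__main__":
    pulled = transitive_deps("myapp")
    print(f"Installing 1 package means trusting {len(pulled)} others:")
    print(sorted(pulled))
```

Even this toy app pulls in eleven packages; real projects easily reach hundreds, and every one of them is code somebody could have tampered with.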
Sure, there are environments where you need to do it, and I can see a colleague over there who actually has to deal with that, figuring out how to do it. But most developers, the vast majority, will just blindly install the packages they need. I mean, it's there, it's in a repository, I trust that repository because someone else probably checked the code. It's open source, someone probably checked the code, right? So, a couple of common attacks that we've been seeing for the last several years usually start with typosquatting. This is easy to do and can be very powerful. You simply find a popular package, which could be the Python requests library, and you create another package with a similar name. Let's see if this works. Yes. No, it doesn't. Oh, it does. So we get something like this: a typosquatting attack. If we want to make it as stealthy as possible, we can take the original requests library and put a backdoor in it, so for the majority of users who make the typo, everything works fine until our backdoor gets activated. Of course, repository owners are doing their best to prevent this from happening. And if you publish packages yourself, in your company for example, one suggestion I have is to register the likely look-alike names yourself. You can keep them empty, but you own the name, so no one else can take it. Then we have dependency confusion attacks. These happen when you use internal and external repositories at the same time, and most package managers, unless configured differently (and guess what the default configuration is), will always try to fetch the latest version of a package. So what attackers do, and for example my team does this during penetration tests: when we see that a certain package is being used, but it's your internal package, we try to register that same name in a public repository, unless it already exists. Then, if your developer does something like npm install, the package manager, again unless configured differently, will check the local repository: good, I have version 1.0. It will check the public repository: oh, look at this, there's a version 1.1, let me fetch that one. That's dependency confusion. It's actually easy to fix, but you need to modify the configuration. And finally, we could do repackaging with small changes: take popular software, maybe add a feature, and publish it as a new, perhaps also popular, package. Now let's take a look at GitHub, and one thing I find particularly weird about GitHub; you can test it later if you want. Again, not something that is new, but
not a lot of people know about this. Everyone here knows GitHub; I can skip the facts about it being the most popular repository hosting service, with a lot of visitors and a lot of hosted packages. So, say I want to perform a supply chain attack against a certain company. I'll do my intelligence gathering to prepare for the attack, but let's say I also want to make that target believe I am a real developer. How do you do that? You create a repository and you start pushing changes into it, just to appear as if you are developing something. Do I need to do that for a couple of years? Not really. The repository at the top is mine, and it's actually not too active, right? If you looked at it you'd say: yeah, right, some developer, you pushed what, seven things in one year? How about the one at the bottom? What do you think about that person? Is this a real developer? Someone with a lot of time, maybe. Now, the problem with GitHub is that it will blindly trust whatever you put as the date when the change supposedly happened. So if you simply (let's see if this will work) modify the date and push that to GitHub, GitHub will say: oh cool, this is a change that happened immediately after New Year's Eve. Yeah, right, one second after New Year's Eve. And that's where it gets marked on the heat map. So if you want to create the profile at the bottom, all it takes is a shell script that runs for a couple of minutes, and you get this. It's very, very trivial to do. And what's even better: if you want to hide what's going on, you create a private repository. You push things into the private repository, but they are still reflected on the publicly readable heat map.
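The trick works because git simply takes the commit dates from the `GIT_AUTHOR_DATE` and `GIT_COMMITTER_DATE` environment variables, and GitHub trusts git. A minimal sketch of a back-dated commit; it builds a throwaway repository in a temp directory and assumes the `git` CLI is on your PATH:

```python
# Sketch: back-date a commit so a heat map shows activity on a day
# of your choosing. git reads the dates from environment variables
# without any verification.
import os
import subprocess
import tempfile

def backdated_commit(repo: str, date: str, message: str) -> str:
    """Create an empty commit dated `date`; return the logged date."""
    env = dict(os.environ,
               GIT_AUTHOR_DATE=date,      # what the heat map displays
               GIT_COMMITTER_DATE=date)   # keep both dates consistent
    subprocess.run(["git", "-C", repo, "commit", "--allow-empty",
                    "-m", message], env=env, check=True,
                   capture_output=True)
    out = subprocess.run(["git", "-C", repo, "log", "-1",
                          "--format=%ad", "--date=iso"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    repo = tempfile.mkdtemp()
    subprocess.run(["git", "init", "-q", repo], check=True)
    subprocess.run(["git", "-C", repo, "config", "user.email",
                    "x@example.com"], check=True)
    subprocess.run(["git", "-C", repo, "config", "user.name", "X"],
                   check=True)
    # One second after New Year's Eve, as in the talk:
    print(backdated_commit(repo, "2024-01-01T00:00:01", "hello 2024"))
```

Loop that function over a few hundred random dates and you get the "very active developer" heat map from the slide.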
So if you're applying for a developer job, you know how recruiters sometimes check your GitHub profile to see how active you are. That's how we do it, right? But they can't check whether you actually did anything. And with GitHub, what really scares me is that it gets even worse. You can push changes that will appear to come from absolutely anyone in the world. When you fetch something from GitHub, if you want to be sure a commit was really made by a particular author, that author would need to upload their PGP key and sign their pushes. Now, who does that? Let's see: is there anyone in the room with a GitHub account who uses PGP to
sign their commits? No one? Me neither. Or maybe a couple of people over there. So why is this a problem? See this particular commit over there, apparently made by Linus Torvalds. I wanted my repository to appear as if Linus was doing me the honor of contributing code to it. All you need to do is change the commit author to match one of the email addresses registered with GitHub. When you push something to your own repository, GitHub will happily match the addresses with existing accounts and populate their information in your repository. So this is my repository, where on the right side you can see a commit "done by" Microsoft, their official GitHub account, and then Linus right after Microsoft. Of course, they didn't do it. I just changed the author's address to match theirs. The problem is that these addresses are publicly available: all you need is one single commit by your target anywhere on GitHub. You take that address, put it in as the author of your commit, and it will appear as if they made it. GitHub does this happily, and it still works today. The proper way to prevent this is to sign your commits with PGP, but as I said, people don't do it, and even when they do, we very rarely check whether those commits are actually signed. So this is one example of how people can be attacked. But now let's take a look at a supply chain attack that happened exactly two weeks ago, so this is very, very fresh, and I think it will be interesting. You probably saw in the news that this was the biggest crypto heist ever. A company called Bybit, which I think was the second-biggest cryptocurrency exchange, lost something like 400,000 Ether, worth about $1.4 billion, two weeks ago. Today it's worth a little less because of the exchange rate. So what happened? Ah, $1.5 billion, right?
So how do you steal this amount of Ether? Now, Bybit actually did a lot of things correctly. They split their cryptocurrency into cold and hot wallets. A cold wallet is something you keep offline, right? It's not even connected to the internet, and you keep it perhaps on a special piece of hardware. A hot wallet is something you juggle all the time, because you have to, running a crypto exchange. So they actually did everything fine. Not only did they store the cold wallet offline, they also required three signatures for anything to happen with it. So you would need to attack three people simultaneously to do something with the cold wallet. How do you do that? How do you attack three signers simultaneously? You could try to hack their workstations, but that's not easy, right? You'd probably have to gain persistence somehow, stay there, and wait for the signing to happen. Or you go through the supply chain. Besides requiring three signatures, Bybit was using a product called Safe Wallet, created by a company called Safe. You can see the URL over there; they still exist, and they provide an online wallet that also lets you work with smart contracts and all sorts of different on-chain activities you might want to perform. The attackers probably first did some intelligence gathering to figure out what's going on, and they found out that the Bybit signers use this particular application to sign their transactions. So they started investigating the Safe company, and at some point they hacked into it. As they were analyzing how this works, they noticed that Safe Wallet, the application Bybit was using to sign transactions when moving funds from cold wallets to hot wallets or elsewhere, is actually a web application. It's something the signers use in their browsers, right? And the Safe Wallet application is hosted on AWS, in S3 buckets.
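As a side note, tampering with a script served from a bucket or CDN is exactly the scenario Subresource Integrity (SRI) is designed to catch: the embedding page pins a hash of the script, and the browser refuses to run a file that no longer matches. A small sketch of how an SRI value is computed; the script contents are made up:

```python
# Compute a Subresource Integrity (SRI) value for a JavaScript file.
# A browser given <script src="app.js" integrity="sha384-..."> will
# refuse to execute the file if its hash no longer matches.
import base64
import hashlib

def sri_sha384(data: bytes) -> str:
    """Return the integrity attribute value for `data`."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode()

if __name__ == "__main__":
    original = b"console.log('sign transaction');"
    tampered = b"console.log('sign ATTACKER transaction');"
    print(sri_sha384(original))
    # Any modification changes the hash, so a pinned page breaks loudly:
    print(sri_sha384(original) == sri_sha384(tampered))  # False
```

Note the caveat: SRI only helps when the page embedding the script is served from somewhere the attacker doesn't control. If the whole application lives in the compromised bucket, SRI alone buys you little.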
Hmm. How about if I get control over those S3 buckets? Can I do something bad over there? Of course. There is one JavaScript file you can see here. They removed it; it's not there anymore because they repacked the application, so a different file ID is being used now. But this JavaScript file was responsible for signing: it's what runs in our browsers when we want to sign something using the Safe Wallet application. So the attackers took that JavaScript file, analyzed it, backdoored it, and deployed it sometime before the 19th of February this year. Now, this is a huge JavaScript file, 3.6 megabytes, and it's minimized. Good luck reversing that, right? This is what it looks like on the right side. Anyone want to reverse this? Didn't think so. It's pure horror. But they probably got the source somewhere, they knew what they needed to do, and they injected backdoor code into that JavaScript file. On the previous slide I had both versions, the backdoored one and the legitimate one, and when you diff the two, this is what you see. This is where they actually put the backdoor. So let's take a look at what they did with the backdoor.
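Diffing the two bundles is the practical trick here: you don't reverse 3.6 MB of minified JavaScript, you just look at what changed between the known-good copy and the suspicious one. A tiny sketch of that idea; the snippets stand in for the real bundles:

```python
# Find injected code by diffing a known-good bundle against a
# suspicious one, instead of reverse engineering the whole file.
import difflib

def injected_lines(good: str, suspect: str) -> list[str]:
    """Return lines present in `suspect` but not in `good`."""
    diff = difflib.unified_diff(good.splitlines(), suspect.splitlines(),
                                lineterm="")
    return [l[1:] for l in diff
            if l.startswith("+") and not l.startswith("+++")]

if __name__ == "__main__":
    good = "function sign(tx){\nreturn wallet.sign(tx);\n}"
    bad = ("function sign(tx){\nif(TARGETS.includes(tx.from))"
           "{tx=evil(tx);}\nreturn wallet.sign(tx);\n}")
    print(injected_lines(good, bad))
```

On real minified bundles you would pretty-print both files first (so the diff is line-oriented), but the principle is the same: the injected block stands out immediately.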
What I find interesting about the attack is that the attackers didn't try to immediately move $1.5 billion somewhere, right? Instead, their backdoor got the signers to sign a smart contract that the attackers had deployed on the Ethereum blockchain. The backdoor is on the right side, and it's actually not too difficult to analyze; it's just slightly minimized JavaScript. At the top we can see the contract addresses the attacker wants to target, because they found what Bybit was using: these are the Safe contract addresses, and these are the addresses of the signers. Then there is the smart contract the attacker put on the blockchain, and the payload that gets injected into the signing transaction, referencing that contract. We can guess they had enough money to deploy it; they have all the money in the world. Then the logic: check whether we are in the context of the signers. For anyone else using the Safe Wallet application, everything worked fine: no backdoor, right? The code only triggers in the context of the targets, the victims. If we are, modify the transaction to include the attacker's smart contract and execute it. Once that happened, there was a delegate call to the smart contract you saw on the previous slide, and the result is on this slide. These are just names people assigned as they analyzed the different wallets: "Bybit Exploiter" is the attacker, and the cold wallet is the cold wallet. You can see that the cold wallet had 400,000 ETH, and after this was executed it had what? Zero. While the attacker, who was very poor and had zero before, became unbelievably rich and had $1.5 billion. So, a question to think about later: how do you get $1.5 billion out of Ethereum, right? They probably got a lot of it out through a zillion transactions, through various mixing services, and through converters that turn Ether into Bitcoin. It's very, very difficult, if not impossible, to track all of this. So this is a scary attack. But the one I will talk about now, and I have just enough time to cover it, is even scarier, at least for me. The last supply chain attack I want to share happened a little less than a year ago, at the end of March; it was actually Good Friday, before Easter 2024. And this email kind of saved the world. And
you'll see why in a second. This was an email to the oss-security mailing list by a person called Andres Freund, who is a PostgreSQL developer working at Microsoft; that's fun, right, because he found something in open source software. He wrote: I was observing a few odd symptoms around one particular library, I noticed that something was behaving weirdly, so I started analyzing it, and I found that the upstream XZ repository was backdoored. XZ is a compression library, like zip but more powerful, with better compression ratios, and most if not all Linux distributions use XZ as one of the default compression mechanisms for packages. Makes sense, we compress things better, right? What I found very interesting is how he actually got to this email. He's a PostgreSQL developer, and he had a relatively large farm of Linux servers with a lot of processes using SSH to connect to each other. As he's testing things (he's a developer, right, he's always pushing the very latest unstable Debian build just to test PostgreSQL on it), after a build he had pulled a few days earlier, he noticed that all of his servers started using more CPU than before, something like 10%. Now, a question for everyone in the room: if you installed a new version of a package and saw it using more CPU, what would you do? Would you go and investigate, or just revert? Undo, undo, undo, right? I think 99% of us would undo; I mean, someone else will take a look at this, right? Luckily for us, in this particular case, that someone else was Andres Freund. Now, to explain this story, and this really is movie material, we have to go back a couple of years in time, to 2021. As I explain how the backdoor works, what we need to remember is that there is one actor here, known as Jia Tan, with this particular
handle, and this is the first commit that people analyzing this account found Jia Tan made, back in 2021. This is a diff: red is what Jia Tan took out, green is what they put in. Can anyone tell me what's wrong here? Can you spot the difference? Check how difficult this is to see; you're looking at three lines of code. Just say it out loud once you see it. So: red is removed, green is added. Look at lines 375 and 376: in the second call, they simply replaced safe_fprintf with fprintf. Notice how long it took all of us to find this one simple change. And what did it do? It swapped the function: safe_fprintf cannot be attacked with format string attacks, while fprintf can be. So they were obviously setting something up here. This was their first commit. Now, nobody knows who Jia Tan is; even today we have no idea. A lot of effort has gone into trying to figure out exactly who this is. This is a nation-state attacker, and there is more than one person behind the account, because once we go through all the details of the backdoor, you'll see there's no way a single person could have done this. I guarantee it. Absolutely no way. The name looks Singaporean or Malaysian, but as people checked how they wrote their commit messages, what time zones they used, and lots of other small details, they are probably somewhere in Asia; it doesn't necessarily have to be so, of course, this is just intelligence gathering. Okay, let's say we don't care who Jia Tan is. Let's see what else they did. Their goal was the following, and this is important to
explain the story of the XZ backdoor. They wanted to backdoor every single SSH server in the world running on Linux. If this were your task, how would you do it? You could try to hack the OpenSSH maintainers, but the people over there are really, really strict about security. Breaking into OpenSSH and injecting a backdoor there? Good luck; that's really difficult, because those guys are paranoid about security, and that's good for all of us. So we can't hack OpenSSH. What an attacker does instead is figure out what else SSH depends on: supply chain, remember. As these attackers, Jia Tan and the people around that account, analyzed SSH, they figured out that on most Linux distributions today, sshd is linked against systemd. Anyone here like systemd? Okay, good. That has nothing to do with this, I'm just asking. So sshd is linked to libsystemd for logging, and libsystemd is in turn linked against another library, which in the end is XZ, for compression. So if I could push something into the XZ library, I could mount a supply chain attack and reach all the way into SSH. Okay, let's see who maintains the XZ library; perhaps it's easier to hack XZ than OpenSSH. Well, fortunately for the attackers, the XZ library was maintained by one single person, and that one person had some health issues, so they were not constantly online; they were battling some sickness. So the attackers' plan was the following: let's somehow move the original maintainer away from the XZ library and take it over ourselves, because if we are the owners, we can push a backdoor in much, much more easily. And that's what they started to do. This is why I said this is movie material. Back in 2022, two years before the actual attack, they created a number of fake accounts and started pushing legitimate commits to the XZ
library, perfectly legitimate code, actually new features, but the maintainer was not merging it into the main repository. Why? They were sick. So on the mailing list they started bullying him: you know, there haven't been any updates to the XZ library, and there are all these new features people are contributing; someone else should take over the library; clearly the original maintainer cannot maintain it anymore. They kept bullying this poor person for several months to add another maintainer. Clearly you don't have enough time to maintain the XZ library, but hey, my name is Jia Tan and I'm willing to take it over from you. Just give me access to the main repository and I'll take over; I will merge the new commits; I'm already a contributor; it will all be great and nice for the world, right? Over this period they made over 700 commits, all legitimate. Imagine: all the time in the world, so they actually contributed to open source software; maybe we should thank them or something, I don't know. And they kept pushing this person again and again, until January 2023, when we see the first merge by Jia Tan into the master branch. This is where the original maintainer gave up and said: okay, you know what, I don't have enough time to look after this; clearly this person is very, very interested in XZ (well, yeah, of course they are, right, but they really are contributing), so I will allow them to become a maintainer as well. And now things start moving quickly. The original repository was hosted on the website of the original maintainer. So they changed the DNS pointer to GitHub, and on the mailing list they said: oh, you know what, we're going to move the repository to GitHub, because it's safer over there, right? It's not on the private site of this particular maintainer anymore; it's on GitHub. Cool, sounds good, right? What they achieved with this is that the project is now fully under their control, because they own the GitHub repository. Then they prepared the backdoor, and I'll talk about that at the end.
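One detail worth adding here, because it shaped how the backdoor stayed hidden: the key malicious build script reportedly shipped only in the release tarballs Jia Tan produced, not in the git tree itself. Comparing a release tarball against a checkout of the matching tag is therefore a cheap, automatable sanity check. A rough sketch of the idea; the directory paths in the example are placeholders:

```python
# Sketch: compare an unpacked release tarball against a git checkout
# of the same tag. Files that exist only in the tarball, or whose
# hashes differ, deserve a close look. (Legitimate release tarballs
# do add generated files like ./configure, so expect some noise.)
import hashlib
from pathlib import Path

def tree_hashes(root: str) -> dict[str, str]:
    """Map relative file paths to SHA-256 hex digests."""
    base = Path(root)
    return {str(p.relative_to(base)):
            hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(base.rglob("*")) if p.is_file()}

def tarball_only_or_changed(tarball_dir: str, git_dir: str) -> list[str]:
    """Paths whose content differs from, or is absent in, the git tree."""
    tar, git = tree_hashes(tarball_dir), tree_hashes(git_dir)
    return [path for path, digest in tar.items() if git.get(path) != digest]

if __name__ == "__main__":
    # Placeholder paths: point these at a real unpacked release
    # and a checkout of the matching tag.
    print(tarball_only_or_changed("xz-release", "xz-git-checkout"))
```

Several distributions have started doing exactly this kind of tarball-versus-tag verification since the XZ incident.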
I have about seven minutes left. Once everything was prepared, on the 25th of March last year, so almost exactly a year ago, they started pushing Linux distribution maintainers to include the latest version of the XZ library in their distributions. Hey, look at this: you haven't updated the XZ library for many, many months; I am the new maintainer; we added a lot of bug fixes and a lot of new features, so you should add it to your repository. And Debian did. So if you are using Debian somewhere (luckily it only made it into unstable) and you were regularly updating, at some point you actually downloaded the backdoor onto your machine as well. So Debian did it. They tried to push it into Fedora as well, and they succeeded: it went into Fedora Rawhide, which is the unstable version of Fedora. They tried to make Ubuntu take it too, but the Ubuntu maintainers said: oh, you know, we're waiting for Debian to go through their cycle, then we'll put it into Ubuntu. So it didn't make it into Ubuntu. Now, do you know which very popular security distribution is based on Debian's rolling branch? Anyone? Kali. So if you are a security expert, you use Kali, and every day you do your apt update, right? You got the backdoor onto your Kali machine. When I saw this, I went to ask my team: hey, did we install the backdoor? They were like: oh, don't worry, we haven't updated in a year. Oh, good. Okay, saved this time, right? So they made it into Debian. And before we go into the details of the backdoor, I just wanted to challenge you with one more fun commit by Jia Tan. Does anyone know what they did here? This one is the opposite, just to explain: green is what was there before, red is what Jia Tan added; I only captured the revert, so this is the revert commit. Can anyone tell me what they did? Something with sandboxing, yeah. But
first of all, can you see what they did with the change? They added what? A dot. They added one single dot to the code. And you know what that did? It broke the code, so the sandbox didn't run in this particular case, and it didn't break the build process. Amazing, right? They found exactly where they needed to put one single dot so that it broke the sandboxing part of this particular file. Pretty cool, if you ask me. What else do we have here? Now, as they created the backdoor, they wanted it to be as stealthy as possible. Remember: we don't want people to find out about something we spent three, four, maybe five
years working on. So they modified the build files that the XZ library uses when packages are created, so the backdoor would only be built in when Linux distribution maintainers actually built it in their CI/CD pipelines. In other words, if you downloaded the code from GitHub yourself and did ./configure && make, the backdoor didn't activate: during the make process it checks whether it's running in a distribution build pipeline or not. They do a couple of tests there. And now the question: how do you inject the backdoor itself? I mean, you could put it in the source code, but I wouldn't be happy with that. It's source code; we asked earlier whether anyone checks source code (someone else does, right?), but it's still much easier to check source code than binary files. And this is a compression library. When you download the package, it contains binary files that are used for testing: a good, properly compressed binary file, and a corrupt binary file. Isn't that perfect for a backdoor? I have a corrupted binary file that I can legitimately distribute with my library. Perfect. So that's where they put it. Now, in the build process, which should fire only when distribution maintainers are building the library, we want our backdoor to fire up. How did they hide it? Well, there is a simple shell script; let me beautify it a little. This is what it does with that corrupted file: the file simply consists of blocks of 2 kilobytes containing the binary code, then 2 kilobytes of garbage, then 2 kilobytes of binary code, then 2 kilobytes of garbage, and so on. This is what it looks like. With the shell script you saw on the previous slide, they just concatenate the code blocks into one binary file. Now, this is the backdoor, but it's not readable yet: it is still compressed and encrypted. So to extract it (again, let me beautify this for everyone), they created an AWK script.
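The carving step that shell script performs is conceptually tiny: walk the "corrupt" test file with a 2 KB stride and keep every other block. A sketch, with the block size taken from the talk; the real script also decompresses and decrypts the result afterwards:

```python
# Sketch: reassemble a payload hidden as alternating 2 KB blocks of
# real data and garbage inside a "corrupt" test file.
BLOCK = 2048  # 2 KB stride, as described in the talk

def carve(blob: bytes, start_with_payload: bool = True) -> bytes:
    """Keep every other BLOCK-sized chunk of `blob`."""
    chunks = [blob[i:i + BLOCK] for i in range(0, len(blob), BLOCK)]
    keep = 0 if start_with_payload else 1
    return b"".join(c for i, c in enumerate(chunks) if i % 2 == keep)

if __name__ == "__main__":
    payload = b"P" * BLOCK + b"Q" * BLOCK            # two payload blocks
    garbage = b"\xff" * BLOCK
    blob = payload[:BLOCK] + garbage + payload[BLOCK:] + garbage
    print(carve(blob) == payload)  # True
```

To a casual reader, a file like `blob` is exactly what it claims to be: a corrupt compressed archive, indistinguishable from a legitimate bad-input test case.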
Who here knows how to write an AWK script that does more than sum up columns? Those of you who are as old as me, that's about as much as we ever did, right? You calculate something over columns, and that's about it. Well, they implemented RC4 decryption in AWK. How did they find a person who knows how to do that? It puzzles me completely. So the script decrypts the binary, and now, if everything was built into the package and you installed it on your Linux distribution, once SSH starts it links with libsystemd, which loads the compression library, and the payload activates. Now, if we go a little
bit under the hood, I'll just take a few more minutes, but I'm I'm fine with timing. Um, at this point in time, as this is part of the analysis I did, I had to actually go into the binary file to figure out what's going on. And they did all sort of crazy obfiscation things here. Again, they wanted to be as stealthy as possible. So they modified a function called underline CPU ID that the compiler is using or linker sorry loader when it loads the file to figure out what's the best architecture the most efficient architecture code for that architecture. So they call it very early. Why? So they don't have to remap or change permissions on memory later. This could
be caught by an EDR. So they do it very very early so they don't need to do that. What else? They stored every single string as a radics tree. Does anyone know what is a radics tree? People some people not nod their head. If you ask me I I will nod my head but then if you ask me how it works um can I go to Wikipedia? Right. I remember I heard about it. I had to go and check. So this is a very efficient tree but it stores strings as binary basically. So when you do strings on this zero visible strings nothing is visible it's all stored as a red x tree. What else do we
have? Ah, anti-debugging. This is my favorite. So they actually check whether you are debugging the code, and they are looking for this instruction, endbr64. This is a relatively new instruction that Intel added to prevent return-oriented programming attacks, ROP attacks. When you do a ROP attack, you basically do an indirect jump through a register, and it has to land on this instruction. If it doesn't, the processor kills the process. That's how Intel tries to stop ROP attacks. They do the same thing: they check whether endbr64 exists. If it doesn't, then you're perhaps debugging, and the program kills itself; it just doesn't work. They mess up logs. So if they see
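The trick can be shown in a few lines. A function compiled for Intel's indirect-branch tracking starts with the endbr64 marker bytes `f3 0f 1e fa`; a software breakpoint planted by a debugger typically overwrites the first byte with `0xcc` (int3), so checking for the marker doubles as a breakpoint detector. A toy Python sketch (the byte patterns are real x86-64 encodings; the check itself is my illustration of the idea):

```python
# endbr64 instruction encoding on x86-64
ENDBR64 = bytes.fromhex("f30f1efa")

def looks_tampered(code: bytes) -> bool:
    """True if a function prologue no longer starts with endbr64."""
    return not code.startswith(ENDBR64)

clean = ENDBR64 + b"\x55\x48\x89\xe5"   # endbr64; push rbp; mov rbp, rsp
patched = b"\xcc" + clean[1:]           # int3 breakpoint overwrote byte 0
assert not looks_tampered(clean)
assert looks_tampered(patched)
```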
that there is a log entry which will say that someone tried to log in, they'll mess it up; they'll change the log. So they really went through all the possible steps to make the code as invisible as possible. And they did all of this to hook these three functions. These are the functions that SSH uses to allow certificate-based public key authentication. Question for the room: who here has ever used SSH with certificates? Not with keys; keys are another thing. With certificates, anyone? No one with certificates. Me neither. But it exists. It's a feature. You can have your certificates, you can sign them with a CA, your SSH server needs to trust the CA, and then you can
log in without a password, but this time with a certificate, not an SSH key. That's what they hooked. So finally, in just a few more slides, I'll be done. They do a couple of other checks. They check if you're running under sshd, whether debuggers are running. They parse the environment variables to see if there is a variable that tells the backdoor to kill itself automatically. Right? And then we come to the backdoor running on the server. But now the question is: how does an actor consume it? How do they actually execute code over there without letting anyone else in the world do it? What they decided to do is send the
payload in the first SSH packet when it's using certificate-based authentication. In this packet, among other things, you need to send your public key, which is signed by the CA that the server trusts; there is some exchange later, but you need to send something signed by that CA. So there is plenty of space to put backdoor commands. There are a couple of bytes that they put over here. Then they use the ChaCha20 symmetric cipher to encrypt everything. They encrypt it with a key which is actually hardcoded into the backdoor. So when people saw this initially, they were like: oh, we got the key, right? We can
decrypt all the commands. Yes, you can, but you can't send them, because they need to be signed by another private key that only the attacker knows, and the matching public key is built into the backdoor. So that's how they prevented anyone else from abusing the same backdoor. What's even better for them is that there is another signature over there, which is your SSH server's signature. So what they could do is build a front end and then tell you: oh, look at this publicly available SSH server, it's backdoored, and we can give you access only to that server, right? To no one else, because it will be tied to the public
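That decrypt-but-can't-forge design can be sketched schematically. This toy uses standard-library stand-ins (HMAC-SHA256 in place of the attacker's asymmetric signature, a SHA-256 keystream XOR in place of ChaCha20), so only the structure matches the talk: anyone holding the hardcoded symmetric key can decrypt the traffic, but only the holder of the signing secret can produce commands the backdoor accepts. All key material and names are invented.

```python
import hashlib
import hmac

SYM_KEY = b"hardcoded-symmetric-key"   # extractable from the binary
SIGNING_KEY = b"attacker-only-secret"  # never leaves the attacker

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Deterministic keystream XOR: a stand-in for ChaCha20."""
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def build_command(cmd: bytes) -> bytes:
    """Attacker side: sign the command, then encrypt signature + command."""
    sig = hmac.new(SIGNING_KEY, cmd, hashlib.sha256).digest()
    return keystream_xor(SYM_KEY, sig + cmd)

def backdoor_accepts(blob: bytes):
    """Backdoor side: decrypt, then execute only if the signature checks."""
    plain = keystream_xor(SYM_KEY, blob)
    sig, cmd = plain[:32], plain[32:]
    expected = hmac.new(SIGNING_KEY, cmd, hashlib.sha256).digest()
    return cmd if hmac.compare_digest(sig, expected) else None

blob = build_command(b"id")
assert backdoor_accepts(blob) == b"id"
# Knowing SYM_KEY lets an analyst decrypt traffic, but forgery fails:
forged = keystream_xor(SYM_KEY, b"\x00" * 32 + b"id")
assert backdoor_accepts(forged) is None
```

In the real backdoor the verification is asymmetric, so even a full copy of the binary, symmetric key included, does not let anyone but the attacker issue commands; the HMAC stand-in here only mimics that property because the "secret" lives in the script.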
signature of that particular server. So once you run it, this is what it looks like. There is a package you can get on GitHub that will allow you to simulate this. Lessons learned, for the end, and I will just finish on time; we'll see if we have time for questions. We got really, really, really lucky this time. If that one Postgres developer who works for Microsoft hadn't spent time figuring out why his servers were using more CPU, this would have made it into every single Linux distribution in the world, every single SSH server would be backdoored, and it would be a question of how long it would take
for anyone to find this. So what can we do here? Well, supply chains are difficult things to defend against. There are some general recommendations here. Of course, I can always throw big words around: adopt a zero trust approach, check your software secure development life cycle, check your supply chain. Yeah, right. Easier said than done, all of these things. But if we do all of these things, or at least try to, then hopefully we can actually catch it when something like this happens. In any case, start slowly. We can't do this overnight. Start slowly verifying your suppliers. Start verifying your suppliers' security, because as we saw in the XZ
case and with Bybit, the attackers will go after our suppliers, perhaps not directly after us.