
Well well well, if it isn't the consequences of my own actions

BSides Canberra · 54:50 · 638 views · Published 2025-12 · Watch on YouTube ↗
Category: Technical
Style: Talk
About this talk
"Well well well, if it isn’t the consequences of my own actions" - the time I got in the middle of 100,000 Linux machines and their fwupd/LVFS firmware updates 🙈
Transcript [en]

We have a great talk now by Justin Steven. Well, well, well, if it isn't the consequences of my own actions. Please, let's welcome Justin to the stage. [applause]

Thank you everybody. This is a talk about actions and their consequences. My name's Justin. I'm the head of research at Tanto Security, and I like to make computers weird. And good news: if you enjoy this talk, I'll be here same time, same place tomorrow, because my hubris knows no bounds. I'll be joined by my good friend Mario, our technical director and co-founder, to talk about a template injection bug that we think is quite cute. If you don't enjoy my talk today, don't hold it against Mario. We're really excited to share that with you. But this is one of my stories from the vaults, and it's a story about luck. I really felt like I was in the right

place at the right time doing the right weird stuff to squeeze myself between 100,000 new friends and their firmware updates. And I'm really proud of the work I did. I'm really proud of the root cause analysis of the bug, which we'll get into in the latter half of this. But I definitely felt like I got very lucky. But I also think you can make your own luck in some sense, and we'll talk about that at the end. But as I said, it's one from the vaults. This was like late 2019, early 2020, and there was a Thunderbolt or UEFI bug or something like that. All I knew is I wanted to update my laptop and my laptop ran Debian. I use Linux on

the desktop to do a lot of my productive security research work. Got a MacBook for, you know, faffing about on the couch and making slides and stuff like that. But my Debian machine is my research workhorse. And I'm an insufferable person. I don't use a traditional desktop environment like GNOME. I use i3. It's a tiling window manager. And I'm one of those lunatics who does everything from a terminal that he can. I don't know if I think it makes me look cool, but I don't even have a file manager. I've got mv and cp. What more could I want? And this isn't just a weird flex. It becomes relevant shortly. But my machine had firmware that needed

an update. And I found something called LVFS, the Linux Vendor Firmware Service. It's a place where vendors can go to upload firmware for things like wireless mice and docking stations. Vendors can upload firmware blobs to this cloud service. And then there's a reference client. I think it's a reference client. It's probably the only client, really. There's a client called fwupd. And that's something that users can run to pull down firmware from this cloud service and flash it to their hardware. And if we zoom in on how fwupd works on your machine, it's this client-server model. There's a server, fwupd, that's running as root. Then there's this client utility it

comes with, called fwupdmgr. And they communicate over D-Bus, so the client can ask the root service to download updates and flash them and whatnot. And the reason for this separation is because that root service has to run as root. It's doing things like flashing things onto hardware, whereas that other utility can run as just a regular user. Also details that will become important shortly. But me, late 2019, wanting to update this firmware: I installed fwupd and did a fwupdmgr refresh. This is to pull down the catalog of available updates from that LVFS service so I can pick what I want to install. And I got an error message in my

terminal when I did this. It said that it failed to download a file from S3. And because I'm a curious kitten, I curled that to see what was going on. And I got an error message that said that the bucket could not be found, which was concerning. Why is my firmware updater accessing some non-existent S3 bucket? Why would anyone's firmware updater access a non-existent bucket? As for why mine was, it turns out it came down to the fact that I was running Debian oldstable. And if you're not familiar with the Debian release model, it centers around these two editions called unstable and stable. When I used to hear these two terms, I thought of stability in terms of

crashiness. If you run Debian unstable and it crashes, it's your own fault. You shouldn't have run unstable. It kind of says on the tin that it's not stable. But that's not what Debian means when they say the word stable. They're talking about volatility. The point of stable is it's kind of a snapshot in time. It's frozen in time. And if you've got stable installed, or oldstable, like a prior stable version: let's say you've got nginx installed with a config, and the config is finely tuned, and you do a system update. You're not going to get a new version of nginx. You'll get critical bug fixes and security fixes, but you're not going to get a change

that breaks compatibility with your config for nginx. You're not going to get new functionality that might not fit your use case or expose you to additional attack surface. It's kind of the point of Debian that it's frozen in time. So new packages, new shiny tools for Debian flow into the unstable release. Then they cascade down into testing, which is kind of like the next stable in development. But then there's a bit of a wall. It's a dotted line, though, because the Debian package maintainers will kind of cherry-pick little bits and pieces out of these new versions, and this is to fix critical bugs and security issues. So if

there's a CVE and nginx has a new version and they change some bit of code, someone from Debian will isolate that change and pull it back into stable. But there's no new features, there's no new functionality, and your version numbers won't increment. You'll stay stuck on these old versions intentionally. It's a feature, not a bug, but you still get these security fixes. They're frozen in time, as I suggested. And what's curious, and this is an aside, I think it's cool though: testing and unstable have no security guarantees. There are security teams that focus on making sure if there's a CVE that it gets fixed in these old versions quite readily, but

the new packages flow into these new volatile versions anyway, so they're going to get the update pretty quickly. That's just an aside. But I was here. I was running a version of stable that was one version behind. So my packages were getting quite long in the tooth at this point. And as far as fwupd's relationship with the Debian packaging team: there was a version in 2016 that came out, 0.7.4. And then when Debian 9, which was my version at the time, kind of froze itself in time, that was the version that got baked in and pinned as the version that Debian 9 would always have, unless little fixes flowed down. But then about a year later, fwupd

released 1.0.6, and there was a novel change in this version. They changed the default config that ships with the product

and they changed the URL from which fwupd would pull updates. And the reason they did this was they wanted an interstitial hostname in the cloud they control, so that in case LVFS wanted to change where it was storing all of its stuff in the future, there was that pivot point where they could just flip a switch in the cloud, repoint the hostname, and everyone who's using a config that points at that interstitial hostname would follow along. So these old versions were pointing straight at S3 with their default config. This new version pointed at this interstitial hostname, and it was probably CNAMEd off as just a pass-through to S3 for now.
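For context, a fwupd remote is configured by a small keyfile, and the relevant bit is the metadata URL it pulls from. A sketch of what the new-style config roughly looks like (the key names follow fwupd's remotes.d format, but the exact path, values, and URL here are from memory, so treat this as illustrative, not the shipped file):

```ini
# /etc/fwupd/remotes.d/lvfs.conf (illustrative sketch)
[fwupd Remote]
Enabled=true
Title=Linux Vendor Firmware Service
# New-style config: an interstitial hostname the project controls,
# rather than a direct s3.amazonaws.com bucket URL as in old versions.
MetadataURI=https://cdn.fwupd.org/downloads/firmware.xml.gz
```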

But at some point, LVFS did what they were planning and they switched CDNs.

I was editing slides until about 5 minutes before this. I may have got something out of order. LVFS changed CDN and they shifted all their packages across to some other host somewhere. And as part of doing this, fwupd repointed that interstitial hostname. And then at some point, I'm told it's November 2019, in my communications during the disclosure of this, I'm told it was November 2019 that LVFS actually deleted that old bucket. They were thinking everyone should be on a version of fwupd that had that new config. Everyone should be flowing through this interstitial hostname. We don't need this old bucket. No one should be talking to it anymore. Should

be. And I was here running Debian 9, which was still pointed at it. And I still had vulnerable firmware and I wanted to fix it. I think at this point I gave up, booted into a Windows live environment, flashed my firmware there, and happy days. But it wasn't happy days. I was still nervous about this whole thing with my version of fwupd pointing at a non-existent S3 bucket. Because the thing about S3 buckets, and Amazon's not shy about this: if someone deletes an S3 bucket, anyone else can grab the name. It's a global namespace. If someone deletes a bucket, Amazon goes, "Cool, anyone else want it?" And I could have grabbed it. Anyone could have grabbed

it. And I struggled with this for some time. Is it appropriate for me to go and pick up an S3 bucket that someone decided I don't want anymore, but that some people are still clearly communicating with? It's not like I was going to be doing actual hacking to someone's computer. I was going to grab a bucket and if you turned up to me, it's your own fault. But I struggled with this for some time. I didn't know if that was responsible to do. Then I did it anyway. I thought this only affects old configs. It only affects people with old versions. It only affects people who are doing firmware updates. How many people could that possibly be? What's the worst that

could happen? This is foreshadowing. So I grabbed it. I was now the proud owner of the LVFS bucket. Turned on the logs and went to bed, because that's a smart thing to do when you do something rash. And I woke up to 41,000 requests overnight, which was a few more than I thought I was going to get. The reason why I was off in my estimation of how much traffic I'd get, by orders of magnitude: remember how I said I'm insufferable and don't use GNOME? I thought this was how people worked with and exercised this firmware update process. They dropped to a terminal and did it manually. Turns out there's stuff like

GNOME Software and KDE's equivalent and Canonical's equivalent and System76's. These all act as clients to that system service if it's installed. And at least in the case of GNOME Software, it does it every 24 hours. Turns out in the end I had about 100,000 hosts phoning home to me, as the lower bound of my estimation. And I felt like I'd bitten off a bit more than I could chew, or more than I was expecting to at least. Once I calmed down, I thought about it. What could I even do with this if I wanted to do something? And this is the point where I say I didn't do anything with it. I ended

up controlling the bucket for about 30 days and then I gave it back to LVFS for safekeeping. During the stuff I'm going to talk about, I never put anything in the S3 bucket. But to know what we could have done, I had to understand what the LVFS data model even is. We know there's a bucket. We know there's files in it. We know the clients are pulling files. What are they pulling and what are they doing? There's two files in the bucket. The first one is an XML file. It's a gzipped XML file, but we'll just call it the XML file for now. And then there's a digital signature on it. It's a PGP detached

signature produced by a private key that we can't possibly know. Now, you can pick up someone's dangling S3 bucket. You don't get their private key for free with it. They probably kept this in a vault somewhere or on their build server somewhere. So this is out of bounds for us. But the XML file is describing that catalog of updates, all the different devices that can be updated, and each has a URL, and that URL points at the actual firmware file. So the XML catalogs the updates. Each one points at a firmware file, and importantly, the XML has a cryptographic hash, just a plain old, probably SHA-family, hash of that firmware file. Which means if that firmware

file changes, becomes malicious or corrupted, then that hash isn't going to match anymore. But we control the bucket now, so we can actually populate the XML with URLs and hashes that actually match. But we can't, because there's a digital signature that would then break. So this gives us a chain of integrity. We can't know the private key, so we can't create the signature, which signs the XML, which describes the URLs with the hashes. You get this complete chain that a consumer of LVFS could use to verify what it's pulling down. So we can't know the key, but we can control what's in the bucket. That's what we've got. That's the primitive we've got on our hands. And what does

this chain of integrity situation mean for fwupd? If we consider them two distinct parts, LVFS offers this chain of integrity. Surely fwupd leverages it. Upon downloading the manifest, check that the signature is correct and valid. And then each time you download a firmware, check that the hash of the firmware matches what was in the XML, to kind of exercise the whole chain. So in terms of the first step of that, downloading the manifest: we control the stuff in the bucket, and then fwupd is going to do the signature check, and we can control what feeds into the signature check. We're probably going to end up down here on the sad

path. We'd love to end up here on the happy path, or be able to end up here on the happy path. And what we'd love is a vulnerability in this signature check such that we, controlling those two files, can end up with a valid signature check. And given that LVFS uses a PGP signature, we need to understand a brief difference between two different PGP signature types. There's what I'm going to call an attached signature. They're normally called normal signatures. I'm going to be a jerk and call them attached, so there's a clear distinction between the two. If I don't call them something that has a clear distinction between the two, I will get confused and I will say the

wrong thing. I will still say the wrong thing. I have said signature that many times the last few days, it doesn't sound like a real word anymore. But there's attached signatures and then there's detached signatures. These are two different signature types that PGP can produce. This is what they look like. First thing is that the attached signature is bigger. And that's because it's actually baking in the file that's being signed. It's one whole container that contains the data that was signed and its signature glued together. A detached signature is slim because it doesn't bake in that original document. You would need two files, the original file and the detached signature, to actually do anything with the signature.
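As a toy illustration of the two shapes (this is not real PGP: an HMAC stands in for the asymmetric signature math, and the container format is made up for the sketch):

```python
import hashlib
import hmac

KEY = b"toy-signing-key"  # stand-in for the PGP private key


def sign(data: bytes) -> bytes:
    # Stand-in for the real signature math.
    return hmac.new(KEY, data, hashlib.sha256).digest()


def make_detached(data: bytes) -> bytes:
    # Detached: just the signature; the verifier needs the data separately.
    return sign(data)


def make_attached(data: bytes) -> bytes:
    # Attached ("normal"): one container gluing the data and its signature.
    return len(data).to_bytes(4, "big") + data + sign(data)


def verify_detached(data: bytes, sig: bytes) -> bool:
    # Needs both files: the original document and the slim signature.
    return hmac.compare_digest(sig, sign(data))


def verify_attached(container: bytes) -> bytes:
    # The original document can be extracted straight out of the container.
    n = int.from_bytes(container[:4], "big")
    data, sig = container[4:4 + n], container[4 + n:]
    if not hmac.compare_digest(sig, sign(data)):
        raise ValueError("bad signature")
    return data
```

The point of the sketch is just the shape difference: one verification function takes two inputs, the other takes one self-contained blob and can hand the signed document back.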

But with an attached signature, you can just have the one file. You can do a cryptographic validation. You can even extract that original document straight out of that container. And LVFS uses the detached signature pattern. That's why there were two files in the bucket. So then I wanted to know: how does fwupd behave when given different types of inputs, different XML, different signatures? What are the ways in which it behaves under different circumstances? And I wanted to start with the happy path. What does fwupd do when the signature check passes? The easiest way for me to do this was just to use genuine LVFS XML with a genuine LVFS signature. So I did a refresh

and it was really boring. I got a success message saying, "Yep, I downloaded the metadata and one of your devices is supported by me." Boring, but it's nice to see what happens on that happy path regardless. Then I wanted to try an unhappy path. The first thing I thought to try was a bad detached signature, one that fwupd shouldn't trust. And the easiest way I thought to trigger this was just to use my own PGP key to produce that detached signature. And fwupd should be like, cool, but I don't trust that key, and kick out a signature error. That was the theory. Again, I didn't put things in the S3

bucket. If I wanted to try this and had done it live with the real S3 bucket and then done an update on my computer, who knows how many of my new friends would have done it in the meantime. So I couldn't afford to do this out in the big cloud in the sky. Had to do it in my lab. And remember, what kicked this whole thing off is fwupd changing that config. We can change the config, too. This is a place at which we can point fwupd to a server we control the contents of, to do an offline, lab-based exploration of these different happy and unhappy paths. So I did that, pointed my

fwupd at my own server where I could put my own junk. So instead of phoning home to LVFS, fwupd would phone home to my server, where I could put nonsense and see how the product reacted.
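The lab setup can be sketched as a tiny local file server: anything that looks like the metadata and signature files gets dropped into FILES and served over HTTP. The filenames and contents here are placeholders, and the client-side config repointing isn't shown:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder contents: whatever XML and signature we're currently testing.
FILES = {
    "/firmware.xml.gz": b"junk metadata for this experiment",
    "/firmware.xml.gz.asc": b"junk signature for this experiment",
}


class LabHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = FILES.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the lab quiet


server = HTTPServer(("127.0.0.1", 0), LabHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sanity check that a client can fetch what we serve.
base = f"http://127.0.0.1:{server.server_address[1]}"
fetched = urllib.request.urlopen(base + "/firmware.xml.gz").read()
```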

So we're feeding in two files, XML and signature. In this case, it's a bad detached signature. If the signature check fails, we should expect to see some kind of signature error message pop out: bad signature, I don't trust this key. But if it passes, we won't see a signature error. We'll see something else. What I did, and this didn't end up being necessary, but just a bit of trivia: I put junk XML data in there so that I could expect to see either a signature error or an XML error, and use that as an oracle to figure out which path got taken. Just the way I chose to do the testing. So this is the XML catalog of firmware

updates I produced. If you know what an XXE, an XML external entities bug, is, this will look familiar, because you know what entities are. This is not XXE. It's just an illegal entity. It's an invalid, non-existent entity that should elicit an XML parsing error. So I produced a detached signature of that XML file with my own PGP key, hosted it, and did the refresh. And at this point, we'd expect to see a signature signature. See, I told you it doesn't sound like a real word to me anymore. Signature error, which we do. So we can know that fwupd is actually doing signature checks. It kicked out an error saying I don't know

this key, I can't validate this signature. Good. So the happy path is boring and the unhappy path is also pretty boring. We got what we expected. We got that error. Then I had to think: okay, what do I try next? What's the next weirdest thing to throw at this signature validation? And what I decided on was swapping out that detached signature for an attached signature. And this is strange. This is really strange, because there's two different documents involved in this. There's no reason why they have to match. The one on the left is going to be the XML data, but the one on the right could be a signed copy of the Blu-ray rip of the Bee Movie

for all we care. It could be arbitrary data. It doesn't have to match. And this is a clearly nonsensical thing to even think about, let alone the fact that the key is not trustworthy. It doesn't make sense using a file like that, that's unrelated, to validate some other file. It's patently absurd, which is why I call it the weird case. So again, we've got our XML that may elicit an error if it makes it through the signature check gauntlet. We produce an attached signature of arbitrary data to put alongside it. I didn't have my copy of the Bee Movie handy, so I just used the word LOL. Hosted it. Did the refresh.
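The oracle idea is easy to reproduce with any strict XML parser: an undefined entity is guaranteed to blow up parsing, so if you see an XML error rather than a signature error, you know parsing was reached. A sketch using Python's stdlib parser rather than fwupd's actual XML handling:

```python
import xml.etree.ElementTree as ET

# An undefined entity: not XXE, just guaranteed-invalid XML.
ORACLE_XML = "<components>&thisEntityDoesNotExist;</components>"

try:
    ET.fromstring(ORACLE_XML)
    outcome = "parsed cleanly"  # would mean the XML was actually valid
except ET.ParseError as exc:
    # Seeing this error means we got PAST the signature check and the
    # XML parser was reached; a signature error would have come first.
    outcome = f"xml error: {exc}"
```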

And once again, we should expect to see an error pop out. We did see an error, but it was an XML parsing error. For some reason, this attached signature of unrelated data by an untrustworthy key appears to have bypassed that signature check. And we've ended up down here in the happy path, with a way to fool the signature check of fwupd. And it doesn't matter what the arbitrary data we produce the attached signature over is. Its job is to be a validator for the XML. Once that's done, it gets discarded, and the XML is what drives that update process. So at this point, we have the bucket. We have a bypass. We can control the firmware

updates that are advertised to 100,000 Linux hosts. But why did this stupid idea even work? Figuring this out took longer than thinking of the stupid idea in the first place. But I was curious why this patently absurd situation seemed to skip over the signature check. So here's the code of fwupd's signature validation from early 2020, around the time this was going on. Don't freak out, don't squint. We're going to zoom in on this and work through it piece by piece. But roughly speaking, there are three steps to it. There's the mathematical, cryptographic validation of that signature. There's extraction of the result of that first step so it can be analyzed in the third step, which is: does this signature check

meet the criteria for a good signature. So we'll start by zooming in on those first two pieces. This is what that looks like. And all throughout this snippet of code there's references to GPGME, which stands for GnuPG Made Easy. It's basically a C library that a developer can use to access GnuPG primitives from their code. It's produced by the same group that makes GnuPG, which is the free PGP. So this cryptographic validation step takes the contents of those two files that we can control, the signature and the data, does the mathematical, cryptographic validation steps that GnuPG does for a signature against some data, and then fwupd checks to make sure

that the cryptographic library call succeeded. If not, we bail out with too bad, so sad, bad signature. But if it succeeded, fwupd then extracts the results of that cryptographic validation out of the context of the GPGME stream of operations, extracts it out as a structure so that it can be analyzed in the next step. And again, if this step fails, too bad, so sad, must be a bad signature, we bail out. So that's those first two steps. Then fwupd needs to look at the result of this GPG operation and see if it makes it happy enough to consider it a good signature. And this is what that looks like.
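That acceptance logic can be paraphrased like this (fwupd's real code is C against GPGME's result structures; this is just the shape of the check, with a made-up signature type):

```python
from collections import namedtuple

# Made-up stand-in for one entry of the signature result list.
Sig = namedtuple("Sig", ["valid"])


def accept_flawed(signatures):
    """Bail out if any signature is bad; otherwise 'looks good to me'."""
    for sig in signatures:
        if not sig.valid:
            return False  # too bad, so sad
    return True           # an EMPTY list sails straight through the loop


def accept_fixed(signatures):
    """Demand at least one signature, and that all of them are good."""
    return len(signatures) > 0 and all(sig.valid for sig in signatures)
```

The weird case hands the loop an empty list: `accept_flawed([])` returns True because the loop body never runs, while `accept_fixed([])` rejects it.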

It reaches into that result structure and then loops over each of the signatures of the cryptographic validation. If any of them are bad, fwupd will bail out with too bad, so sad. But if that loop gauntlet completes without bailing out, then by default it must be a good signature, and so fwupd says: looks good to me. So again: do the math, extract the result, consider the result, loop over it, bail out if any of them suck. Which is a weird way to approach this type of thing. I would have thought that you'd want to check that at least one of the signatures is good, instead of just checking that none of them are bad. It

was strange to me. For that boring happy path: everything's green, the math succeeds, the extraction succeeds, we do the loop, none of them are bad, we go through to looks good to me. For that middle case, that bad detached signature, where we get caught is in that loop, as the signatures are being looped over. That's where it's going to loop through the results and be like, "Ah, I don't know this key. I'm unhappy. Return bad signature." So the happy path: pretty simple. We survived the gauntlet. That middle path, that unhappy path: we get kicked out during that loop operation. But what's happening with our weird case? So if we step through what happens in

each of these phases with the weird case: I attached a debugger and did step, step, step, and tried to reason about the flow of the code. So we'll start off with the cryptographic, math-based validation, where fwupd is calling out to GPGME to say, "Hey, here's some signature, here's some data, please check." And this function is documented by GPGME thusly. It takes three parameters that we're going to care about for now. The context is just some context tracking across operations. We can forget about context for now. Signature is our attached signature. Remember, it's normally detached. We've made it be attached in the lab. Signed text is that gzipped XML that's being validated. And

plain, in the case of fwupd, is the null pointer. That's how this function gets called. The documentation for this function from the library starts by saying that this function's job is to verify that the signature in the data object SIG is a valid signature. Sounds like an awesome function to use for signature validation. Here's where it gets weird. This function can be used in two different ways, and the documentation starts talking about these two different paths. The first case is thus: if sig is a detached signature. Which normally it is, but we've caused it to not be. So that is not a matching condition for how we're able to cause this library function to be called. The other two

criteria match, because this is the path that fwupd normally walks. But we've broken that first condition. Two out of three is not good enough, and so this is not the way that this function is currently being called. The other way this function may be called is if sig is a normal signature, which an attached signature is. So that's a tick. Then signed text should be a null pointer. That's a miss. And plain should be a writable data object. It's the null pointer. That's a miss. So really, we're causing this function to be called in a way that the library doesn't describe as a normal way for it to be called, which is exciting.
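Putting the two documented calling conventions side by side with what we've caused here (parameter roles paraphrased from the GPGME manual; this little classifier is purely illustrative, not GPGME's API):

```python
# gpgme_op_verify's documented parameter combinations, paraphrased:
#   detached mode: sig = detached sig, signed_text = the data, plain = NULL
#   attached mode: sig = normal sig,   signed_text = NULL,     plain = writable
# The trick feeds it a third combination that matches neither.
def classify_call(sig_is_detached: bool, signed_text_is_null: bool,
                  plain_is_null: bool) -> str:
    if sig_is_detached and not signed_text_is_null and plain_is_null:
        return "documented: detached verification"
    if not sig_is_detached and signed_text_is_null and not plain_is_null:
        return "documented: attached (normal) verification"
    return "undocumented combination"


# An attached signature swapped into the detached-style call:
mode = classify_call(sig_is_detached=False, signed_text_is_null=False,
                     plain_is_null=True)
# mode == "undocumented combination"
```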

So what will happen next? The math validation gets done, and then the error check gets done to see that the math-based validation was done successfully. And we're causing this to be called in an out-of-bounds, out-of-scope kind of way, yet we don't get an error from this function call, and we carry on. So then we do that thing where we extract the result of the math out into a structure so we can analyze it, and that also doesn't raise an error. That succeeds, and we carry on, and then we get to the loop. We get to that gauntlet that's going through and looking for any bad signatures and kicking us out. And to know how this

loop's going to operate, we need to have a look at what comes back from this extraction step under our three circumstances. So the first two are here. The result structure that gets given up by that extract-the-results-please-and-thank-you function has a signatures field in it, which points to a linked list of signatures. And in the boring happy path, it's a good signature. In the middle path, with the bad detached signature, it's a bad signature, which that loop gets angry at. And in the weird case, signatures is null. It's an empty list of signatures, which means this loop doesn't even execute. There's no signatures to loop over, and we skip straight through. It

looks good to me. So fwupd definitely had some questionable logic with the way they designed this validation. They made what I think is a reasonable assumption, or at least not an unreasonable assumption, that if GPGME succeeds with those two steps, the cryptographic validation and the extraction, then there'd be at least one signature to look at. But then it was checking what came back to make sure there were zero bad signatures. It probably should have been checking for at least one good one. But we can cause there to be no signatures to be checked at all. So at this point I reached out to Richard from LVFS, told him what was going on, and from

memory it was really funny. I reached out and said, you do LVFS and fwupd? He went, "Yep." And I said, "I think I've got a problem." And he said, "Aha, I get mail a lot. People seem to misunderstand the whole client-server thing. I know you said you're writing an advisory. You'll send me the details once you've made them crisp and clear, but what's the nature of it?" I went, "Oh, I stole your old bucket. Not stole, registered your old bucket. And then found a signature bypass." And he went, "Oh, okay. That sounds real. Please send me details." Sent him details. The signature bug was actually a bug that

still existed even in the latest versions, even though I was on an older Debian version. So that got killed, so that all users of fwupd no longer have this issue if there's a man-in-the-middle kind of situation. And then I handed the S3 bucket back over to LVFS and said, do not delete it this time, because by definition everyone who's phoning home to this does not have that signature patch. And Richard went, yeah, we will not delete this again. The tactical fix that killed this bug in fwupd itself is basically just to bail out if it's that zero-signatures case. And I expressed that I think this is still awkward design,

but this at least killed my proof of concept, and would mean that during the loop there will be at least one signature to check, and if it's bad it will bail out. So it's kind of the weird way of doing the right thing. At this point I'm not sure if it's been updated to invert the logic since. But GPGME's weird API, and the fact that if you call it in a weird way it returns zero signatures, felt awkward to me. Felt like it should at least error out rather than being like, I successfully validated zero signatures. The function documentation says it has one job, in my mind: to verify that the signature is a

valid signature. And it's weird to say I did it successfully with zero signatures anyway. I shot mail to GnuPG at the time to go, I don't know if I was cooking here with this weird Bee Movie attached-signature-for-a-detached-signature pattern, but it causes what I think is a strange situation. It's hurt fwupd. I think fwupd should have had different logic, then they wouldn't have been hurt by this. But this is a cryptographic validation API. I think it's awkward, and I don't know if it's a bug. Maybe you're going to tell me it's normal, and there's actually multiple ways to get it, and the way you did it was the unnecessarily

complicated way. Which they kind of did. They kind of were like, "Yeah, this function returning zero signatures is not strange." I'm a big fan of safe APIs, ones that are very difficult to use incorrectly. Unrelatedly, I love that React calls it dangerouslySetInnerHTML, so that a developer, when they're typing it, is hinted at the fact that it's not safe to do. And unergonomic APIs like this, I think, can be bothersome. But GnuPG disagreed, and that's totally fine. However, last week I was preparing for this talk, and I wanted to see if this behavior had changed, and it seemed like it did. And this concerned and confused me for

hours, because I was struggling to figure out what had changed. All I knew is that in the olden days, GPGME, given this weird input, would return zero signatures successfully. In modern days, I was seeing an error message come up earlier on, which I'm like, cool. I think that API should do that. When did this happen? I must have compiled like 80 different versions of GnuPG to try to figure out where they changed it. Turns out it's a Debian thing. Debian has included a patch, which we'll talk about in a moment, that breaks this trick I was using, or breaks my way of getting to the weird situation. Turns out I also suck at

building GnuPG. I was running my system's version the entire time while trying to hunt down where it changed. That's a story for another day. So it was just last week that I realized that back in 2022, someone sent a patch in to GnuPG that was just some hardening, and this hardening happened to break my weird way of getting to that weird situation. GnuPG rejected the patch; they didn't take it on. But just this year, FreePG seems to have picked it up. FreePG is a project that collects patches that GnuPG rejected but which FreePG think are actually pretty cool. So it's kind of like a collection of misfit patches that they pull together. And then Debian pulled from FreePG, picked out a few of their patches, and included this one. So if you're running FreePG, or Debian's packaging of GnuPG, you're protected from this weird situation. Just trivia for you. More trivia: if we have a look back at 2022, when GnuPG rejected this hardening, this is what they said. They said: this is a draft and it has not even been discussed in the working group; we will not accept your patch because it may break compatibility and may break processing of existing data. Which is hilarious to me, because one example of existing data it breaks

compatibility with is this nonsense. But maybe GnuPG is not wrong. Maybe it does make sense for this cryptographic validation to succeed with a zero-signature result. And maybe there are less obtuse ways to trigger it. And maybe this rejected patch from 2022, which Debian has picked up and has now baked into stable, is indeed going to break compatibility with meaningful data other than my silly trick. Remember that Debian broke OpenSSL back in 2008 with this behavior of deviating from upstream. They've deviated from upstream with GnuPG. It's not going to be an OpenSSL horror show, I wouldn't think. But it could be a case where in a few months' time we're like: oh, on Debian, you can't even validate this legitimate PGP data anymore. Anyway, we'll see. I've got two asks for you. One: can you get that weird trick to work despite the patch? I had a quick go at it and I couldn't work it out, and I had to finish my slides, so it's on my list to look at. If you can figure it out, I'd love to hear from you. Hit me up at research.com. Two: if you could help me understand why this function, this library function, works as it does. If you've got a justification, if you know GnuPG better than me, and you likely do, and you can help me understand why this is

sensible, again, I'd love to hear from you, because it's been bugging me for like five years now. So, like I said, I gave the bucket back, but I controlled it first for about 30 days. Here's some pretty graphs. The reason I think there were about 100,000 hosts phoning home to it is that I was getting roughly 100,000 hits per day on weekdays. I don't know who turns their computers off on the weekend. Weirdos. But that's my estimate for how many hosts were probably phoning home to this bucket, or at least a sensible lower bound. Back in 2020, I took this all the way through to a proof of concept. I wrote the web server that would basically, in the lab environment, again, I promise, serve up arbitrary XML to the client, signed in a way that would exercise the bypass. And in GNOME Software, this was what I was seeing. I was successfully advertising to myself what seemed to be a malicious firmware update. If you squint, it says "device cannot be used during update". I didn't click the install button because I was scared my device couldn't be used after the update either. I'm not a hardware guy, not a firmware guy. But once I'd gotten to this point, I was satisfied that what I'd found was a real issue. I was

advertising the update to myself, even though I don't have the elite skills to go and actually write malicious firmware and see that it flashes correctly. I was on the right-hand side of the signature check. Like I said, it's a story from the vault. If you want to read more, my advisory is up on my GitHub from 2020, and on the Tanto Security blog. We've got a bunch of stuff we've published lately that we're super proud of, and a bunch of stuff in the pipeline we can't wait for you to see, so check out the Tanto Security blog as well. But like I said, this is a story about luck for me. You can just do things. You can just register buckets and see what happens. You can throw the wrong signature type at PGP and see what happens. And while I'm super proud of that root cause analysis work I did to understand what was going on, I definitely felt like I got lucky. But at the same time, I think there are things we can do to increase our luck surface area. You know, it's rare to get struck by lightning. But if you want to get struck by lightning, there are things you can do to make it more likely to happen, like fly a kite in a thunderstorm or stand on a mountaintop with a long metal rod. For me, the only things I

could think of were: I computer weird. Like, I did my firmware update from the shell. If I was a user of GNOME Software, I just would have had no updates popping up, and I don't think I would have noticed an error in a log somewhere. So by using fwupd through the terminal, I was exposed to this error message. I also know more than I'd like to about PGP. Not as much as many, but I knew enough to go: what do I try for a weird trick? I know, I'll swap out the detached signature for an attached one. I knew enough to try that, and it just worked. So, how could you be more lucky than other people? How do you computer in a way that's different to others? What software do you use that maybe not a lot of other people use on their desktop, but that might be used on servers somewhere? Or in what ways do you consume that software that maybe your colleagues or peers in this industry don't? Keep an eye out for weird error messages, because if you pull on strings, sometimes weird stuff can happen. Thank you very much.
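An editor's aside on the zero-signatures pitfall described above: the dangerous pattern is a verifier that only rejects when it sees a *bad* signature, because a loop over an empty signature list is vacuously "all good". A minimal sketch in Python (hypothetical names; fwupd itself is C code built on GPGME, so this only models the logic, not the real API):

```python
# Hypothetical sketch of the bug class, not fwupd's actual code.

def verify_naive(signatures, trusted_keys):
    """Fail on any bad signature -- but vacuously 'succeed' on none at all."""
    for sig in signatures:
        if sig not in trusted_keys:
            return False
    return True  # also reached when the library reported zero signatures!

def verify_safe(signatures, trusted_keys):
    """Hardened variant: demand at least one signature before trusting the loop."""
    if not signatures:
        return False
    return all(sig in trusted_keys for sig in signatures)
```

The hardened variant mirrors the fix described earlier: assert there is at least one signature to check before iterating, so a zero-signature result can never pass.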

Great talk. Are there any questions for Justin?

Wave your hands vigorously if you've got one. Yep, quick.

>> If you were to try to do something differently, what would you have done in that same situation?

>> I'm sorry, I couldn't hear that.

>> If you were to try to do that again, but do it a bit differently, what would you have done that would have, I don't know, streamlined it a bit?

>> Streamlined it. If I was to try to defeat the patch these days, how would I do it? Like, if I was to try to find a way around the GnuPG patch, or find a way around fwupd's patch these days? I'm not sure I understand, sorry.

>> I mean, yeah, I mean more like: if you were to find the same vulnerability today, would you have tried it the same way, or would you have looked at it a bit differently?

>> I think I would have started by doing what I call manual fuzzing, where I think: what is the next weirdest thing I could toss at this? I think I'd probably still start there, given I've got one case of weird things just working first time. If that fails, I think the next thing I'd try is a fuzzer, so that I could take a corpus of legitimate and illegitimate PGP signatures in a variety of different formats, set up a way to automatically exercise that code path, do mutations on them, and try to see if I can cause something weird, to get to the fun part of the right-hand side. So if the manual fuzzing, where a couple of weird ideas just pop off, didn't work, my next call would be: how can I fuzz this intelligently? And how can I detect if I'm on the fun side of the check, and wire up some sort of automation to try? That would possibly be what I'd do next. Thank you.

>> Are there any other questions? There's one over there in the back, just on the right.
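A minimal sketch of the corpus-plus-mutations idea from that answer, with a toy predicate standing in for "did we reach the fun side of the check" (all names here are hypothetical; a real harness would feed mutated signatures to GPGME and watch the verification result):

```python
import random

def mutate(data: bytes) -> bytes:
    """Flip all bits of one random byte -- the simplest possible mutation."""
    if not data:
        return data
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ 0xFF]) + data[i + 1:]

def fuzz(corpus, reached_fun_side, rounds=1000):
    """Mutate corpus entries, keeping any input the oracle flags as interesting."""
    hits = []
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        if reached_fun_side(candidate):  # oracle: did we land past the check?
            hits.append(candidate)
    return hits
```

A real campaign would use a coverage-guided fuzzer rather than blind byte flips, but the shape is the same: corpus in, mutation, oracle, keep the interesting cases.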

Oh, over here.

>> Stole it. Thank you so much, that was really awesome. Just a question of: is it strange that they took the bucket down, given there was obviously that many hits per day? I mean, obviously it's pretty old software you were running, and you could make the choice of "too bad, so sad". But that's a lot of traffic to then go, "ah, we're going to turn it off".

>> Sorry, I'm shocking with hearing. I heard the first part. The first part was: do I think it's strange that they deleted the bucket considering it's still getting hits. Was that correct?

>> Yeah. Yeah.

>> What was the second part, sorry?

>> Oh, just me extending that a little bit. Like, I know you're running older software, but that's still a lot of hits. I would have thought someone would go, "Oh, we should probably just leave this up for a while."

>> Right, right, right. I think that perhaps the person or the organization responsible for that bucket, which was getting hits that could have been costing them pennies or dollars per month, may not have realized the consequences of deleting a bucket, and may not have realized that it can then be scooped up by anyone else. So maybe that's the part they were missing. Maybe they wanted to ease their management burden, get rid of something they're not using anymore, and save some money per month on the 404s they were serving up. I've learned since then that if you ask Amazon nicely, I think they can sinkhole buckets these days. Maybe, I don't know. So I think the thing to do these days would be: if you've got a bucket you don't want anymore, ask Amazon nicely and maybe they'll sinkhole it so no one can ever have it again, and basically bin that name as a bad name. But to answer your question: they did see they were getting hits; they may not have realized the consequences of deleting it.

>> Question over here on the right.

>> Yeah, thanks for the talk. And yeah, I don't think anyone really understands PGP, like you say, or at least not from the technical end through to how people use it. So, have you now taken this strategy and audited all of the different PGP-signing package distribution systems, Debian itself, you know, and checked for this class of error?

>> No. And I had a slide in here that I cut for time, which I now wish I'd kept in, because I think I'm under time. Someone else did, which made me really, really happy. I love to write. Anyone who's read my writing knows I'm yappy. I'm verbose. I can write. And one of the reasons I do that is because when I get bored of looking at something, like PGP signature update mechanisms in package updaters, I don't want to keep doing it. I'm like: I did my bit, here is my recipe, here are my thoughts. I don't remember if I invited someone to continue my work, but I hoped they would. And someone did. I think it was someone from Synacktiv, maybe. They published a blog post where they said that they had seen this post I'd written, they had wondered that exact same thing, and then they had done the work to find a couple of instances of it. And that made me really happy, to see someone picking up where I'd left off. Thank you.

>> Question on the left over here.

>> Hi, great talk. Do you know why GPG, or PGP, would verify multiple signatures, or return multiple signatures? That doesn't really make sense to me.

>> Yes, I do, and I hope it makes sense once you hear why. There's nothing stopping multiple people from signing the same data, which can be very useful. Let's say you are a software update mechanism, and you use PGP signatures to distribute updates that are signed in a way people can validate. And let's say at some point in time you want to change your signing key, much like LVFS wanted to change where they put their stuff. What you could do, as an update provider, is start signing those updates with both your old, deprecated key and your new, shiny key, and embed them both in the same signature, so that when a consumer gets it, they can do that thing where they loop over each of the signatures. People who have the old version with the old key can go: "this first signature looks weird, I don't know that key; oh, this second one, I know this key, so I trust this." People with the new software will look at the first one and go: "yes, I know this key, so I will accept it." So it's a way to use multiple keys to sign the same data, which can be useful either for transitioning keys, or for when someone wants to publish something and have multiple people vouch for it, so that consumers can say: "30 people have signed this, and I know one or two of them and I trust their keys, and therefore I know this came from that group of people, one of whom I trust." So it can be a way for multiple parties to all attest to the provenance or integrity of some data. It does make sense. What doesn't make sense to me is successfully validating zero signatures. That's the one that's weird to me.

>> Was there a question at the front as

well?

>> It's still...

>> Over there is fine.

>> Thanks very much for the talk. So this is a supply chain opportunity, and so how far laterally across systems can we appreciate there may be others?

>> Sorry, I heard the first part: this is a supply chain opportunity. Indeed it is. What was the second part, sorry?

>> So how many others could be identified, laterally, across systems?

>> Yeah, there's a blog post, I think by Synacktiv. If you search "GPGME used confusion, it worked" or something like that; it's a Pokémon reference. There's a blog post that basically replayed this trick against various other things. I think there was one thing from VMware, but I can't remember what it was. And then there were historical instances: opkg and mutt both used to fall for this, but it's been patched there. So I haven't done the work to go laterally and find other instances, but I know that at least one other person has, and they've published their work on that.

>> There's a question at the front.

>> Yes, thank you for sharing. I wonder if the responsibility for the mistake has been attributed, and how do we trust open source projects for critical jobs, given it's pretty open for anybody to participate? Thank you.

>> I think the first question was: has this incident been handled or reported or

something like that. Is that correct?

>> More a matter of taking responsibility?

>> Responsibility. Yeah.

>> Sure. So I was working with this bloke named Richard, who's a top bloke. Loved chatting to him about this issue when I got in touch. He was at Red Hat at the time; whether he's at Red Hat still, I'm not sure. I think Red Hat may have become the sponsor of LVFS. And they took responsibility in the sense that they fixed the bug. They had a CVE allocated for the signature bypass vulnerability. There's no sense in getting a CVE for "we left a cloud bucket dangling, and then someone registered it, and now we have it back". You know, CVEs are meant for system users and system administrators to drive their vulnerability management program; that's at least my understanding of a CVE. If there's no patch that a sysadmin can install, there's no sense in getting a CVE. So they didn't get a CVE for the cloud situation. I don't know that they published a post saying that this was the situation; I can't remember. As for how we can trust open source: this experience for me, my communication with LVFS, with Richard, was phenomenal. People make mistakes, and the way this was handled was, to me, exemplary. It made me trust this project more. As for how we, as consumers of open source in general, can trust it, in the "anyone can edit Wikipedia" sense, how do you trust it? I don't have a good answer to that. It's not something I have expertise in. I look for vulnerabilities; that's kind of my beat. I don't have the mental model or the expertise to talk about things like supply chain attacks in terms of bad actors contributing bad code, like we had with, I think, the Jia Tan incident with xz and liblzma. That's out of my wheelhouse, so I don't have a good answer for that, I'm sorry. I focus more on vulnerabilities, and for me, the way

that this project handled it was phenomenal.

>> There's a question in the front as well.

>> If a super creative black hat hacker got hold of this, what would they potentially do with it? What's the broader scope of the vulnerability, of what they could do?

>> I left this part out for succinctness, but I will say this. It is entirely possible for a hardware vendor, let's say for example someone like ASUS, to bake into their hardware, this is my understanding, when you get firmware sent to you for flashing: do your own signature check and make sure it's signed by the ASUS key. So I think there is opportunity, and I think some or many hardware vendors can do that kind of end-to-end integrity. They control the hardware that's getting flashed; they can have it expect to only get flashed by stuff that's signed by them. What I was mucking around with was more the transport security, and the assurance that the firmware manifest file was signed by LVFS, which is not an end-to-end, ASUS-managed process. It's that middle mile where the data is handled by LVFS. So, as an attacker, first you'd need to figure out how many types of hardware, and which types of hardware, do not do that end-to-end integrity check. Or, for the ones that do, you'd need to find a way to bypass it through a vulnerability in that end-to-end integrity check. Let's put those aside. Let's just take the case in which you find out that, and I'm not going to say a vendor name because I don't know who does and does not do end-to-end integrity, vendor X is a wireless mouse vendor, and they've got little dongles you plug into your computer to do wireless mouse stuff. And as a bad actor, I've found out that the wireless mouse dongle from firmware vendor X does not do end-to-end integrity validation. Cool. I'm now going to bake a firmware that, if flashed to a wireless mouse dongle, which is a HID device,

is going to just emit malicious keystrokes once every hour or something like that, and act like a rubber-ducky-type attack. As a malicious actor who owns the bucket and who has the trick for bypassing fwupd's signature validation, I would offer that firmware to users of firmware vendor X's mouse. Whoever happens to have it pop up in their GNOME Software as an update for their mouse, and clicks install, I guess, would install a firmware on their mouse that, at the top of the next hour, would press, like, the Windows key, then type "terminal", then Enter, then type "curl something, pipe shell", Enter, and try to get malware out into the wild via people who use that wireless mouse. One hypothetical situation. Again, I didn't do that kind of stuff, because I don't know how to write wireless mouse dongle firmware. I hypothesize that it could be possible.

>> We might have time for one more question, if there are any left.

>> Thank you. I have one more question. How do we increase, or review, our project's resilience if we rely heavily on open source projects, by tolerating the potential risks and mistakes?

>> You could engage an offensive security firm that does pentests to review the safety of some open source component. tantosc.com. We'd love to hear from you. Thank you very much, everyone.
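An editor's aside on the multiple-signatures answer earlier: the reason "accept if any one signature checks out" works for key rotation, while zero signatures should never pass, can be sketched like this (a toy model with made-up key names, not GPGME's actual API):

```python
# Toy model of multi-signature validation, with hypothetical key names.

def verify_any(signatures, keyring):
    """Accept data carrying at least one signature from a key we trust."""
    return any(sig in keyring for sig in signatures)

# Key rotation: during the transition, the publisher signs with both keys,
# so consumers trusting either the old or the new key still validate.
update_sigs = ["old-key", "new-key"]
old_client = {"old-key"}
new_client = {"new-key"}
```

Note that `any([])` is False, so the empty-list case fails closed here, unlike the "fail only on a bad signature" loop, which passes vacuously when there are no signatures at all.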