
BSidesPDX 2025 - Saturday, Track 1

BSides PDX · 2025 · 7:08:41 · 821 views · Published 2025-10
About this talk
BSidesPDX 2025 - October 24-25 at Portland State University. BSides Portland (BSidesPDX) is a gathering of the most interesting infosec minds in Portland and the Pacific Northwest! Our passion about all things security has driven attendance from other parts of the country. Our goal is to provide an open environment for the InfoSec community to engage in conversations, learn from each other, and promote knowledge sharing and collaboration. The Portland and greater Northwest information security community spans a broad spectrum of participation, from CISOs, Fortune 100 company security experts, and small-business sysadmins to independent security researchers. bsidespdx.org
Transcript [en]

[music]

>> Okay, I guess we should start. They said so. They made me do it. How's everyone doing? [cheering] [applause] Um, welcome to BSides PDX again. Uh, how many of you were here yesterday? How many of you are here today? >> Some of you are not here today. I wonder what's going on there. Um, so yeah, I'm not going to say too much. I'm just going to actually try and get to the next slide. It's a CTF in itself. Uh, welcome. Um, so, we just got some thank-yous again, because there's a lot of people that do a lot of things to make this happen. Um, and I was wandering around

yesterday realizing that I didn't know half of what was going on, which is a very discomforting fact as an organizer of the event. Um, but it's also kind of the point you need to get to when an event reaches a certain size, because stuff has to happen and you have to be okay with other people running stuff, which is, you know, a growth thing for me. But, uh, it's also kind of amazing that we can delegate some of these things and it happens, and it happens wonderfully. And some of that is the volunteers who've been working all year. Some of that is the board of directors who silently

make sure that the organization has the funds and the paperwork done so that we can do what we need to do. Um, part of that is the review board members who go through and read all the submissions and pick out the good ones to present to you. And, you know, we had a lot of submissions this year, which is great. Um, every year we want more. Um, and the problem with that is we start getting more and more good submissions that we don't have room for. So, we've got to figure that out. Uh, how do we get more space for all the great

stuff that people are doing and working on? Um, in terms of actually hands-on and getting stuff done, we need to all stand up and applaud for Magneto, whose name was not on this list yesterday. So, give a big round of applause, because [applause] he's been managing speaker operations. And like I was saying, I am supposed to be running this event along with Malcolm, and I don't know everything that's going on, and so the things that I don't know about, because they just happen and work, I forget about, which is great, but also bad when I'm supposed to be thanking the people for doing that. So, thank you. Um, and also Sam

organized all the catering. Rebecca has been dealing with social media stuff as well as keeping us on task. Um, basically, because she wanted to make social media posts, she's like, "Okay, you guys, you need to get this done. You need to get this done so I can post about it." I'm like, "Oh, thanks." So, she was kind of the schedule driver in disguise. Um, Mickey did all our website updates. Brian dealt with the video and got the volunteers to do the live streaming, and we'll take care of posting all this stuff later. Um, Shady wrangled the volunteers. Evan organized the CTF. Ron and Brian made registration happen. Um, and thanks

Malcolm, who I think had to step out. Oh, there he is. Thank you, Malcolm, for doing all the things that I forgot to do. [laughter] Uh, and then the things that we both forgot to do, I don't know about, so they seem to be fine. No one's asked for their money back yet, so I think we're good. >> Um, thank you again, PSU. I think the food's been excellent this year. Um, and the coffee has been, you know, free-flowing, which is great. Um, I was actually chatting with the PSU staff earlier, and I think it was 20 gallons of coffee, or twenty three-gallon pitchers of coffee.

I don't know. You drank a lot of coffee yesterday, but you didn't come close to the price of the coffee we drank 10 years ago at the convention center, where you drank $8,000 of our $10,000 conference budget in coffee. So, >> we'll try harder. >> Uh, yeah, it's okay. Um, we have community rooms. Uh, we shuffled it around a little, so there are two rooms on the second floor. They have a little signup sheet on the door. I think there's a 2 p.m. chat in one of them about badges. Um, and if you want to talk about something, just write it on that signup sheet and show up, you know, take a peek if

it's a topic that's of interest to you. And if it's empty, feel free to go and sit down and start a conversation. Um, t-shirts. Um, in a weird change of events: typically, we finally get our t-shirts the day before the event, and we're missing something, usually the fitted women's shirts. Um, so multiple times we've had all the unisex shirts but not the fitted shirts. This year, yesterday at 1:00, the women's shirts were all delivered. So we have those. Um, so if you ordered a fitted shirt, we'll have them at registration to pick up in the afternoon. Okay, I would say at noon, but noon's a very

precise time. Afternoon is a very broad time. So, afternoon: go down to registration and pick up your shirt if you pre-ordered a fitted shirt, which makes me think right now I should probably print the list of people who have fitted shirts. Um, thank you again to our sponsors. Um, you know, from lots of dollars to fewer dollars, we appreciate all of them. Um, and we don't really do a lot of differentiation at the event about who's who in terms of how much they sponsor, but they all are making this event possible: BPM, Formal, Chainguard, ConductorOne, LMG Security, and Profit. And then the next tier we've

got Code Tool.AI, Eclypsium, Asaka, Palo Alto Networks, Securing Hardware, and SpecterOps. Um, and then our silver sponsors: Identity Technologies, ISSA, and No Starch. Um, what's great about a lot of these sponsors is they're recurring year after year after year. We also have community sponsors: EFF, which we're glad to have here, and Hacker Tracker, which, you know, if you're using the app, you can do that. >> We have a code from No Starch this year. >> Do we? I don't know. >> We do. >> We do? What's the code? >> I don't know. >> Okay. >> In the follow-up email, you will get the code from No Starch to get a discount,

which is kind of great. And if you don't get that, bug me or Malcolm or someone. Um, the CTF should resume at 11. And I believe the challenges are shut off at 4:00 p.m. Is that correct? Is Evan in here? Okay, assume it's turning off at 4:00, because that's when we're going to have closing and have to give out prizes, and it's easier to give out the prizes after the end of the event. Um, we also have Hackboat. I added the URL up there, hackboat.org. Um, Hackboat is going to be on Friday, June 5th. There's room for 85 people on one of the boats. We go out, we go up and down the Willamette River.

Um, we have lunch, we talk about hacking. Um, we do not hack the boat itself, just to clarify. Um, because the boat is what's keeping us afloat. Um, but we will be hacking physically aboard the boat. Um, and we don't have registration open yet, of course, but BSides will be Friday and Saturday, October 23rd and 24th, 2026, same place, similar time. Um, so we're looking forward to everybody coming back for that. Um, this evening we have an afterparty. Uh, how many of you have been to Ctrl-H, the PDX hackerspace, before? Oh, I should put the time on here. Sorry. Five o'clock. We're

gonna have closing ceremonies about four o'clock, and so you can, you know, saunter your way there. Um, it's kind of a haul, but if you're taking the MAX, the yellow line goes straight there to the Lombard Street exit. If you Google for the Nyan Cat mural, um, there's a big Nyan Cat mural on the side of the building. Um, it's on Google Maps. It's easy to find. People pull up and hop out and take selfies with it. It's kind of interesting. We started getting junk mail for it. Um, the Nyan Cat mural at 7608 North Interstate Avenue. Um, but yeah, we'll be there from 5 to 8:00 p.m.

You're all welcome. Uh, and it'll be kind of mellow. Uh, there'll be some food and drinks, and we look forward to seeing you there. Um, if you need any help, look for someone with a balloon. Um, there's an info desk downstairs at registration, and, um, be kind and have fun. Am I missing anything, Malcolm? >> No. >> Okay, cool. Uh, which leads us to our keynote. Um, Micah, uh, actually, I'll give a brief introduction, then we'll do some laptop switching, and then we'll get started. Um, Micah has been to BSides Portland before. He's presented here. Um, I think we first met when you took a class like 10-plus years ago. Um, so Micah has been doing a lot of stuff at

the intersection of information and journalism and tech and security and all that stuff, and it's really exciting to see what he works on. Um, actually, at DEF CON he was giving a talk on Signalgate, which I think will be kind of a lot of what he's referencing here, which I think is pretty relevant to stuff that's going on around here. I don't know. I didn't have to fight through a war zone to get here, but some of you may have. Um, and so with that, uh, I will stop talking. Micah, you want to come up here and get set up, and we'll begin the keynote in just a minute. Thank you, Micah. Thank you,

everyone.

[music]

>> BSides Portland. >> [applause] >> So, I've worked in journalism for over a decade, but this is the first time that I've ever traveled to an active war zone.

It's hard to believe that we're not even one year into Trump's fascist takeover of the government. The onslaught of horrifying news is happening too fast to keep track of. But what's clear to me is that the Trump administration, and ICE in particular, is tooling up for technological repression that Americans have never been subject to before. Today I'll go over the disturbing signs of the coming age of technofascism, along with practical ways to defend yourself and your communities against it. I'm Micah Lee. I'm an independent security researcher, a journalist, and a software engineer. I spent the last decade and a half reporting on classified documents, helping journalists protect their sources, building open-source privacy tools, and

teaching people how to analyze leaked data sets. These days, I work closely with journalists, researchers, and activists, doing what I can to keep them safe and productive. The views I'm expressing in this talk are entirely my own and not the views of any of the organizations that I'm working with. Since Trump's inauguration, the US has slid into technofascism. So fascism is a slippery ideology that's kind of difficult to define. And sometimes it borrows from conservatives or from liberals or even from leftists. But in the end, none of the beliefs are actually genuine. It's all about accumulating unlimited power for an in-group at the expense of everyone else. One common definition of fascism is imperialism turned inward. So here's a

bit of recent US imperialism history. Since September 11th, 2001: we launched wars of aggression based on lies in Afghanistan and Iraq. We ran a covert torture program and imprisoned and tortured innocent people for decades at Guantanamo Bay. We built a global surveillance system and spied on entire populations, all without probable cause. We ran a massive drone assassination program, bombing weddings across the Middle East and Africa, in countries we weren't at war with, in the name of American freedom. And right now, we're funding and arming Israel while it commits a genocide in Gaza. Huge swaths of the world are subject to intense state repression, violence, surveillance, and censorship. And in many places, this repression is explicitly supported by

the US government and by US companies. The thing that makes the Trump era different is fascism. Under Trump, this complete disregard for human rights is now pointed inward at the enemies within, as Trump calls Americans he doesn't like. What we've been seeing on the streets of the US, with ICE kidnappings and military invasions of cities, is the normal American disregard for human rights, but this time targeted inward, towards us. And the American tech industry is totally on board with it. Elon Musk, the richest and most divorced person in the world, donated hundreds of millions of dollars to make sure that Trump got elected. He bought Twitter and turned it into X, a cesspool of propaganda,

disinformation, and hate. Mark Zuckerberg got a haircut, went on Joe Rogan, and shut down Meta's diversity program. Jeff Bezos, the owner of Amazon and of the Washington Post, personally intervened to prevent the Post from endorsing Kamala Harris, and he restructured its opinion page to make it friendly to fascism. Tim Cook personally donated $1 million to Trump's inauguration committee. You know how right now, while the government is shut down and food stamps for millions of Americans are set to expire in about a week, Trump is tearing down the East Wing of the White House and building himself a privately funded ballroom. Some of the companies funding Trump's ballroom include Amazon, Apple, Coinbase, Google, Meta, and Microsoft.

This talk mostly isn't about the reactionary tech billionaires and their complicit companies. Instead, it's about attacks that we should be prepared for during the age of technofascism and the ways to defend against them. In this talk, I'm going to give some specific, actionable advice about three topics: mercenary spyware, device searches, and app censorship. But don't think of this as a checklist that all you have to do is finish and then you're good. Ultimately, what we need to do is build an intentional and forgiving security culture. These are things to talk over with your friends, your colleagues, your family members, and start doing as shared practices. Fascists are targeting everyone outside of their

in-group. If we want to keep our community safe, our defenses need to be collective and not individual. I'm also going to quickly go through a lot of slides and show my sources. So rather than trying to take a photo of each slide you're interested in, you can find links to all of my sources here. Um, and I also want to warn you that this talk is pretty intense. So, just to lighten the mood a bit,

[laughter] [applause] >> I'm going to put on some frog ears. Um, and so I'm not going to sugarcoat the awful reality of the current situation, but at least I'll be somewhat dressed up like a frog while I'm giving you all anxiety. So, it used to be that government spy agencies like the NSA developed the most sophisticated hacking tools in the world in house. But over the last decade or so, this has shifted to the private sector. Now, private companies make the world's most sophisticated hacking tools and they sell them basically as a subscription service to government agencies and police departments around the world. uh many of which would never have been able to build these

capabilities in-house themselves. Americans have largely been shielded from this type of attack. NSO Group's Pegasus spyware is typically configured to not be able to target US phone numbers, though they could easily disable this setting if they decided to target US phone numbers. Mercenary spyware firms are on US sanctions lists. And in 2023, Biden published an executive order prohibiting mercenary spyware use by the US government without first going through a review process. Those days are over. Mercenary spyware is officially welcome in America. Last year, during the Biden administration, ICE tried to sign a contract with Paragon Solutions, another sketchy Israeli firm, which makes spyware called Graphite. But the Biden administration blocked this contract from going through.

But a few months ago, the stop-work order was dismissed, and ICE's contract with Paragon officially began. According to this reporting by Jack Poulson, the US company Red Lattice acquired Paragon Solutions. So now that Paragon is American-owned, ICE is allowed to use Graphite spyware. Paragon bills Graphite as the ethical alternative to Pegasus. The difference between Graphite and Pegasus is that Pegasus takes over the entire phone: it does location tracking, it listens through the microphone, it steals all of the data it can get, and so on. While Graphite is narrowly targeted at spying on encrypted messaging apps like Signal, WhatsApp, and iMessage. But obviously, governments abuse it to violate human

rights too. So, here's a recent report from the Citizen Lab, published in June, where they caught Graphite being used against prominent journalists in Europe. In this case, Graphite relied on a zero-click vulnerability in iOS that exploited a bug in iMessage. And here's an earlier report from Citizen Lab, published in March. In this one, they helped fix a zero-click exploit in WhatsApp that targeted dozens of people in Italy, including journalists and the founder of an Italian organization that rescues migrants from the Mediterranean Sea. So, it's not a very ethical alternative. Um, 404 Media launched a Freedom of Information Act lawsuit against ICE demanding documents related to its contract with Paragon. In this post, the

journalists mention Paragon's stance that it's an ethical alternative in the spyware industry. It says, quote, "Selling to ICE, an agency that has flaunted due process, accountability, and transparency, may complicate that stance for Paragon." ICE has arrested people who were following the steps necessary for legal immigration, waited outside courtrooms to immediately detain people after their immigration cases were dismissed to rush them out of the country, detained people who had valid work permits in order to deport them, and continues to pick up people around the country while masking their faces and declining to provide their names. There's nothing ethical about anything that ICE does, and there is no way that ICE will use Graphite in a way that

isn't abusing human rights. But hey, at least the Trump administration isn't using Pegasus, right? Earlier this month, news broke that American investors appear to have purchased NSO Group. Right now, NSO Group is still on the US sanctions list. Biden's executive order making it harder for governments to use mercenary spyware is still in effect. And there's a trail of dozens of US officials that were hacked with Pegasus, which normally wouldn't be a good sign for NSO Group doing business with Americans. But in the age of technofascism, I really don't see those old rules lasting much longer. I wouldn't be surprised at all if we started to find Pegasus infections on the phones of immigrant defense

activists, or advocates for trans healthcare, or even just people trying to get an abortion. Also, just to add to the absurdity of this, the main investor of the group of investors that purchased NSO Group is Robert Simonds, a Hollywood producer of B-movie films. So, if you haven't heard of Robert Simonds before, perhaps you've heard of this 1996 Adam Sandler film. Uh, Robert Simonds produced Happy Gilmore, along with a bunch of other Adam Sandler films. His entire experience is in the entertainment industry, not in tech or cybersecurity. He also has a bunch of business dealings with Chinese companies. According to the Israeli tech site Calcalist, for some reason he's been on the board of

NSO Group's parent company for a few years, and in 2023, he tried to purchase NSO Group and failed. It appears that he just tried again and was successful. And also in this article, it mentions that in 2018, Sophie Watts, the president of his production company STX Entertainment, complained of harassment and called him obsessive. So quite likely this guy is the new owner of the most notorious mercenary spyware firm in the world. And quite likely he's going to be selling Pegasus to fascist law enforcement agencies under Trump. But even if the current rules against Pegasus stick, uh there are plenty of American technofascists who don't have any qualms with violating human rights. Remember how I said that the most

sophisticated hacking tools used to be developed in-house by agencies like the NSA? This was a big story back in 2019, when Reuters exposed that over a dozen former NSA operatives went to work for the United Arab Emirates royal family, helping them spy on dissidents, journalists, and activists. It's a bad sign that the US government is embracing mercenary spyware from sketchy Israeli firms, and that US companies are buying up these firms, presumably to make it easier to sell to the government. But I honestly think that there are enough homegrown and talented American technofascists to support a domestic spyware industry anyway, even without the Israeli technology. Last month, Bruce Schneier blogged about digital threat modeling under

authoritarianism. It's worth a read. In it, he described the shifting risks of decentralization, which is something that I hadn't really considered before. Spyware is targeted surveillance, not mass surveillance, which means that it doesn't scale easily. If all you have to worry about is staying off the radar of high-level fascists like JD Vance and Kash Patel, then most people probably don't need to worry too much about it themselves. But if repression is decentralized, with every state and city having its own local fascists in charge of picking targets they don't like, then everyone needs to fear it. It's too early to know how mercenary spyware will be abused by the Trump administration, but it's prudent for everyone to get prepared for

it now. So this is bad, but it's not hopeless. There's a lot that we can do to defend ourselves against mercenary spyware. Zero-click exploits, which can hack your device without any interaction from you, can feel like magic, and like it's hopeless to even try to defend against them. But it's not magic. Exploits are only possible because of bugs, and these bugs are routinely fixed in software updates. Zero-day exploits cost attackers millions of dollars to purchase, which means it's very expensive to hack a fully updated phone or laptop. Exploits for bugs that are already patched, though, are basically free. So, you should never put off installing updates. And you should not only always install updates, but you should also get everyone that

you know to always install updates as well. Apple added Lockdown Mode to iOS in 2022. If you enable it, it prevents your phone from using certain features that are frequently exploited. Basically, it reduces your attack surface. For example, it blocks fonts in Safari, which might make some websites look worse and the icons might be missing, but it cuts out an entire attack vector. I've been using Lockdown Mode in iOS since it came out, and it's actually really usable. A few things are broken, but otherwise, it's fine. Um, in the age of technofascism, you should not only turn on Lockdown Mode, but get everyone you know who uses an iPhone or a Mac to do the same. To my knowledge, no

researchers have found a successful infection of a device while Lockdown Mode was turned on. Um, and you and everyone you know who uses an iPhone should also enable Advanced Data Protection in your iCloud account. Without it, iCloud is basically a government backdoor into your phone. If your phone gets backed up to iCloud, including your messages, photos, and all of the data in all of your apps, Apple can give this data to the police, the FBI, ICE, or whoever else asks. If you use Advanced Data Protection, most of this data is encrypted with a key that only you control. The recovery key is a long sequence of random characters. So, everyone who enables it either needs to

keep this key on a piece of paper or store it in a password manager. And so while you're at it, if you're helping people in your community enable Advanced Data Protection for iCloud, it might be a good idea to also get them set up with a password manager.
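To give a sense of why a recovery key has to live on paper or in a password manager, here is a rough sketch (my own illustration, not Apple's actual key format or code) of generating a 28-character key from an uppercase-plus-digits alphabet. At about 5.17 bits per character, a key like this carries roughly 145 bits of entropy, far beyond what anyone can memorize or guess.

```python
import secrets
import string

# Hypothetical illustration: a high-entropy recovery key in the spirit of
# Apple's 28-character iCloud recovery key. The exact format Apple uses is
# not reproduced here.
ALPHABET = string.ascii_uppercase + string.digits  # 36 symbols

def make_recovery_key(length: int = 28, group: int = 4) -> str:
    """Return a random key, dash-grouped for readability (e.g. 'X7Q2-...')."""
    chars = [secrets.choice(ALPHABET) for _ in range(length)]
    groups = ["".join(chars[i:i + group]) for i in range(0, length, group)]
    return "-".join(groups)

print(make_recovery_key())
```

The practical takeaway is the one the talk makes: decide where the key will be stored before turning the feature on, because losing it means losing access to the encrypted backups.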

Enable this and you'll be far less vulnerable to mercenary spyware. So, I don't have much love for Apple, which recently categorized ICE officers as a targeted group in order to comply with Trump. Um, but I am excited about memory integrity enforcement, which is built into the hardware of the new iPhone 17. Basically, if you're using the new hardware, every time software allocates a block of memory, this memory is tagged with a secret. If the software ever tries accessing that block of memory again without the correct tag, the request is blocked and the process is killed. So, this should effectively eliminate entire classes of memory corruption bugs, including buffer overflows, use-after-free, and out-of-bounds bugs. So this diagram shows an analysis of

real exploit chains. These are exploits that were actually included in real mercenary spyware, and how each class of bug would perform against an iPhone with memory integrity enforcement: it will prevent all of them from fully hacking the device. So if you can afford it, this is one of the few reasons I'd recommend considering buying a new iPhone. Of course, if you do get a new iPhone, you should also enable Lockdown Mode on it and enable iCloud Advanced Data Protection. Mercenary spyware relies on exploits to hack your devices remotely, but there's a whole different set of local attacks against devices, too. Device searches have been a risk for as long as people have carried computers around with

personal data. But in the age of technofascism, we should prepare for device searches way more frequently. Cellebrite, another Israeli surveillance company, is the most notorious firm that does device searches. They make products that are already used by law enforcement across the US, but they're aiming for a much bigger slice of the market. Last year, Cellebrite announced that it formed a US-based subsidiary specifically for selling to the federal government. Cellebrite makes hardware and software used to break into locked phones and extract all of the data from them. It works by exploiting vulnerabilities in lock screens, by brute-forcing passcodes, including using exploits to bypass any rate limits, and by rooting devices to get access to all of the data

in them. This photo is from a 2021 post on the Signal blog by Moxie Marlinspike. He said, quote, "By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me." Um, it turns out it was Cellebrite equipment. Um, and this was actually right after the Cellebrite software started supporting extracting Signal messages. Um, specifically, this was just the software to extract data from phones, and a bunch of cables. Um, it didn't include the hardware that hacked into locked phones. Um, Moxie wrote about security

vulnerabilities that he actually discovered in the Cellebrite UFED software. He discovered that, quote, "It's possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into a Cellebrite and scanned." And then he also announced that Signal would start including random files in its app file storage. But don't worry, Signal doesn't do anything with them. Um, so like other Israeli surveillance firms, Cellebrite has a history of being abused to violate human rights. In 2020, police in Botswana used Cellebrite to break into the phones of detained journalists, according to the Committee to Protect Journalists. In 2021, during protests in Hong Kong,

Chinese police used Cellebrite to hack into the phones of pro-democracy protesters, according to reporting in The Intercept. In 2022, Russia used Cellebrite to hack into the phones of anti-Putin opposition activists, according to reporting in Haaretz. Last month, ICE entered into a new $11 million contract with Cellebrite. But ICE already has a long history of working with them. In 2017, they first spent $2.2 million on a Cellebrite contract, immediately after Trump's travel ban. Um, in 2019, they spent somewhere between $30 and $35 million on another contract. And now they're starting a new $11 million contract. So, it's fair to assume that ICE is using Cellebrite to hack the phones and steal the data from every

single person that they arrest, regardless of immigration status. And when your device is searched, authorities stealing your data is only one of the risks you face. Another is that they might install spyware and then hope that you keep using it. Here's an article from last year about a pro-Ukraine activist in Russia named Kirill Parubets. Armed FSB agents violently raided his home early in the morning. One of them picked up his Android phone and said, "What's your [ __ ] password?" And Parubets told them. Then they threatened to imprison him unless he agreed to spy on Ukrainians for them. So he agreed, even though he says he didn't plan on actually doing it. And when they

released him, they gave him back his phone, and it had spyware on it. According to analysis of Parubets's Android phone by the Citizen Lab and the legal assistance group First Department, the spyware they found allows the operator to track a target device's location, record phone calls and keystrokes, and read messages from encrypted messaging apps, among other capabilities. The report also points out that any person whose device was confiscated and later returned by a security service should assume that the device can no longer be trusted without detailed expert analysis. So in the age of technofascism, this applies when your device is seized by DHS, ICE, CBP, the FBI, and in many situations probably local police. Also,

sometimes it's legal for authorities to search your device and sometimes it's illegal, but all of that is pretty abstract when it's clear that the Trump administration doesn't care about breaking laws and gets away with it all the time. So whenever you cross a border or go to a protest, you should be prepared for the fact that authorities might try to search your devices. It's still important to know your rights, even if the fascist authorities are likely to violate them. You should consult actual lawyers for legal advice, but here are just some quick tips. You have the right to remain silent, so don't talk to the police except to assert your rights. Police are kind of

like vampires: they can only legally enter your home if you invite them in. So if police or federal agents show up at your house or your business, do not invite them in. If they say they have a warrant, it needs to be a valid warrant signed by a real judge. ICE tries to use their own fake warrants, and those aren't legally binding. If they try to search you, tell them you do not consent. If they want you to unlock your phone or your computer, don't comply and don't share your passwords. There's a good chance that this will result in them stealing your devices, but at least they'll be encrypted. Before I go into the defenses against device

searches, I want to take a minute to plug the Access Now digital security helpline. Researchers at places like the Citizen Lab, Amnesty International, and Access Now have done an amazing job exposing spyware firms and their flagrant abuse of human rights. Detecting spyware is hard, and none of this research is possible without the cooperation of spyware victims. So if you think your device has been hacked by the Trump administration, or if there is anyone in your community who might have been hacked, please reach out to the Access Now helpline for help. If anyone you know has had their phone seized by federal agents and then later given back to them, they should definitely not trust that phone and should contact the Access

Now helpline. Researchers could actually try to get spyware samples, do the research, and confirm whether it was actually hacked or not. While I can't give legal advice, I can give you technical advice on defenses against device searches. These mostly revolve around disk encryption. If someone gains access to your phone or computer and you aren't using disk encryption, nothing stops them from accessing all of your data. But even with disk encryption, your data is only as secure as how you unlock your device, as well as your lock screen settings. So, for example, let's say you have an iPhone and a strong passcode, but you unlock your phone with your face. This means that when you get

arrested at a protest, the cop can also unlock your phone with your face and then access all of the data on your phone. Because of tools like Cellebrite, your phone's passcode is also really important. It's orders of magnitude harder to brute-force a 10-digit passcode than a six-digit passcode. You should also harden your devices. If these defenses against device searches look familiar, it's because they're also defenses against mercenary spyware. Cellebrite and similar tools that attack computers rely on vulnerabilities to help them bypass your lock screen or brute-force your password without rate limiting. Install updates. When you're using the latest version of your OS, there are fewer vulnerabilities in your lock screen that

can be exploited. And again, enable lockdown mode in iOS and macOS, and enable advanced protection in Android. When your device is seized in a locked state, you should also be careful about what information is on your lock screen: they can access that data without even needing to hack your device. So make sure that sensitive notifications, like the content of your Signal messages, don't get displayed on your lock screen. This applies to both computers and phones. If you have disk encryption, the very best thing you can do to keep your devices secure is to completely power off your device when you're not using it. A powered-off device, before you've entered any password to unlock the

encryption, is much harder to hack into than one that's powered on but locked. So when you're going through a security checkpoint at the airport, completely power off your phone and your computer first. Don't just suspend them. When you're going to a protest and it looks like you're about to get arrested imminently, power off your phone before you get detained. You can always power it back on if you find yourself to be safe. And finally, turn off all of your computers every night when you're not using them. Police raids often happen in the middle of the night or in the early morning, so powering off your computers every night means that if you get raided, your devices will be harder

to hack into, and they'll give your disk encryption a fighting chance. People often talk about anonymous burner phones, but except in very specific situations, truly anonymous burner phones aren't that useful. Using a secondary phone that you don't even try to keep anonymous, on the other hand, is easy to maintain and has some major benefits. If you get detained at an airport or arrested at a protest, the authorities either already know who you are or they're about to, so anonymity isn't really important here. When you set up a secondary device, use a separate Google or Apple account so it can't access any data in your main account. Make a separate Signal account and just add the contacts or groups that

you'll need. And if authorities hack into your secondary device, there won't be much data to extract: it won't have your messaging apps, your contacts, your browser history, your photos, your documents, or anything else. Since secondary devices are just for temporary use, to take an international trip or to bring to a protest, you should factory reset them between uses. This should protect you in case they install spyware on your device and give it back to you. Although ideally, you should contact the Access Now helpline and let researchers get a sample of that spyware first. Even on your main devices, minimize the data that you retain. They can't steal data if you don't have anything to

steal. So we'll all be better off if we start treating most online communication as ephemeral and delete it after we've read it. If you want to retain anything, take a screenshot, but delete everything else. In the Signal app, you can go to Settings, Privacy, Disappearing Messages, and set Signal to use disappearing messages by default for every chat. And while you're at it, get everyone you know to stop sending you messages in iMessage, WhatsApp, Instagram DMs, or anything else, and switch to Signal. You should minimize other data, too, not just messaging apps. Basically, think about what data you have on your phone and on your computers, and regularly take steps to reduce your risk

if those devices are ever searched. This isn't really about mercenary spyware or device searches, but I wanted to slip this into my talk, too. We've known for years that ICE and local police departments across the US use cell-site simulators. Here's recent reporting from earlier this month about yet another ICE contract for these street-level surveillance devices. If you're not familiar with cell-site simulators, they're also called IMSI catchers or stingrays. They're devices that pretend to be legitimate cell phone towers, tricking nearby phones into connecting directly to them rather than to real towers. We know that they're in use across the US, but there's a real challenge in detecting them. Rayhunter is open source custom

firmware for cheap mobile hotspots that can detect cell-site simulators. It's developed by Cooper Quintin and others at EFF. Here's a little 4G hotspot that's running Rayhunter, and I can see there's a green bar at the top, so it's not detecting any cell-site simulators. You need to plug a SIM card into it, but you don't need to actually pay for service. So it's a cheap one-time cost, and it's incredibly easy to flash the firmware on these. If you're interested in trying to detect cell-site simulators, check out the Rayhunter project. A different way that technofascism is expressing itself is app censorship. Apple and Google, the companies that

control exactly what software anyone is allowed to install on their phones, are actively collaborating with the Trump administration by censoring their app stores without even a fight. A few weeks ago, at the request of the Trump administration, Apple kicked the ICEBlock app off of the App Store. This was an iPhone app that allowed users to anonymously report ICE sightings within a 5-mile radius and get notifications when others reported ICE sightings near them. The developer, Joshua Aaron, points out that ICEBlock is no different from crowdsourcing speed traps, which every notable mapping application, including Apple's own Maps app, implements as part of its core services. To justify this decision, Apple has

decided to treat ICE officers as a targeted group, and to treat apps that help inform the public about abuses by ICE, whose job is racial profiling and violence against people based on their national or ethnic origin, the same as apps that discriminate against people for their religion, race, sexual orientation, gender, or national or ethnic origin. To be clear, the government didn't send a court order to Apple demanding that they do this. The Justice Department asked Apple, and Apple simply agreed without a fight. Here's a quick video of Attorney General Pam Bondi lying to the Senate about ICEBlock. >> Senator Lee, our federal agents have been doxxed, and many of you know what that means. It has happened to you on

both sides of the aisle. Our federal agents' lives have been threatened. We fought, while we spoke with Apple and Google, to get the ICEBlock app taken down. That was reckless and criminal, in that people were posting where ICE officers lived. We worked with both Apple and Google to take that down. >> It's stunning. I would add here that it took as much work as it did by you to... >> So first, ICEBlock did not post where officers live. They just posted ICE sightings, which automatically got deleted after a few hours. And also, ICEBlock was never actually available for Android. It was only for iPhone. But don't worry, Google also voluntarily

chose to collaborate with the fascists at the request of the Trump administration, and in fact, they used pretty much the same justification. Apple and Google both removed the Red Dot app from their app stores. Red Dot is an app that's similar to ICEBlock in that it lets people report ICE sightings and get alerts when they're nearby. Since it's been banned by both Apple and Google, it's now only available for Android as an APK that you can sideload.

Google claims that the Justice Department didn't ask them to ban Red Dot, but I find that hard to believe, considering Pam Bondi keeps giving interviews saying she asked Google to ban these apps. But even more disturbingly, Google's justification for banning Red Dot is that working at ICE makes you part of a vulnerable group that is, quote, associated with systemic discrimination or marginalization. This is just offensive. And even worse, Apple banned an app called Eyes Up from the App Store. Unlike ICEBlock and Red Dot, Eyes Up doesn't do any real-time tracking or alerting of ICE. It simply archives verified videos of ICE abuse and puts them on a map to preserve evidence of their crimes.

Also, unlike ICEBlock or Red Dot, Eyes Up is a web application, so it's still online at eyesapp.com. Here's a screenshot of Eyes Up zoomed into a part of Portland. Apple is voluntarily helping fascists suppress videos of violence from DHS officers, like this one.

>> [ __ ] back up. >> That's [ __ ] excessive [ __ ] force. >> Back up. >> No, we are. This is our next fight. >> [ __ ] Shut the [ __ ] up. >> He does not need to be tackled like that. >> He's a veteran. A veteran of this country. >> What the [ __ ] is wrong with you guys? >> It's not just Apple and Google, though. Last week, Facebook deleted a group called ICE Sightings Chicagoland, with over 80,000 members in it, at the request of Attorney General Pam Bondi. Just like Apple and Google, and just like their excuses, Meta claimed that this Facebook group was violating policies against

coordinated harm. On a recent episode of the podcast On the Media, the 404 Media reporter Joseph Cox spoke about this Facebook group. Attorney General Pam Bondi last week posted on X saying that DOJ had successfully gotten Facebook to take down a group page that she said was, quote, being used to dox and target ICE agents in Chicago. I have seen a limited archive of that Facebook page. It's difficult to access now, of course, because it has been taken offline, but in the section that I scrolled through, I did not see any evidence of ICE officials being doxxed or specifically targeted. It was more just reporting, hey, there are ICE officials at this location, very much in the same sort of way that

apps like ICEBlock were doing. So in other words, in the age of technofascism, American tech companies are collaborators. Before going on to solutions, I want to share one final story from earlier this week. This article describes a search warrant that ICE sent to Meta demanding real-time metadata about who a WhatsApp user was communicating with. WhatsApp messages are end-to-end encrypted, but Meta freely gives law enforcement all of the metadata. So if you're using WhatsApp for any sort of anti-fascist activism, stop, and switch to Signal. Signal has features like sealed sender that prevent them from even accessing the metadata themselves, so they can't be forced to hand it over to ICE. And this warrant also

specifically allows the government to unlock the suspect's phone using their biometrics. So again, don't use biometrics for unlocking your phone or computer. Actually, I think it's fine to have biometrics enabled for other things; just uncheck the option to unlock the phone with them. So of these four instances of censorship (ICEBlock, Red Dot, Eyes Up, and the ICE Sightings Chicagoland Facebook group), Eyes Up is the only one that's still online, and the reason is because it's a website. What this censorship tells me is that companies like Apple, Google, and Meta cannot be trusted or relied on. So if you want to make an app that the Trump administration won't like, unfortunately, you should make it with

censorship in mind. Just like Eyes Up, make it a website that works without a native app, so that when Apple and Google turn on you, your tool can still be useful. And the internet is a global network. There are domain name registrars and hosting providers all over the world, and many of them won't cooperate with US authorities. There isn't much internet censorship in the United States yet. But if that changes, like if they want to start blocking eyesapp.com, then thanks to activists in places like China and Iran and Russia, we have decades of experience circumventing online censorship. We can use the same techniques here if we need to. Finally, we should all step back from

our computers, put down our phones, and devote real energy into strengthening our communities. Things are really bad right now, and it's easy to feel isolated and alone. Whenever possible, talk to people in person instead of in group chats or video calls. People are facing harassment from Trump-supporting fascists. Their loved ones are getting disappeared by secret police. The state is making examples out of people who are trying to get gender-affirming health care or reproductive health care, or for protesting genocide. When they come after you, your friends, or your neighbors, the worst thing you can do is keep staring at your phone. We need real community ties with people who have our backs. And we need to have

solidarity with everyone else that they're going after. People living under oppressive regimes have learned throughout history the importance of security culture. A security culture is a set of customs and measures shared by a community to keep everyone safe. So as [ __ ] gets more real, keeping your community safe is everyone's responsibility. Don't panic if you haven't done all the things that I proposed in this talk, and don't judge others who haven't done them either. It takes time to incorporate these practices into communities as a security culture, but we're all better off if we commit to them. The fascists are probably going to start hacking our phones. They're going to plug them into Cellebrite and try to see exactly who

we're talking to and what we're saying. They might plant spyware on your phone and hope that you keep using it. They're going to pressure tech platforms to prevent us from organizing. They're already doing this. They're going to use data from companies like Google and Meta to decide who to target. So it's not enough to just lock down your own devices. If we want to stay safe and productive in the age of fascism, we all need to work together. And here are a few other resources that you might want to check out. Thank you so much. [applause]
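A quick footnote on the passcode point from the talk: each extra digit multiplies the number of possible numeric passcodes by ten, so a 10-digit passcode has 10,000 times the keyspace of a six-digit one. A back-of-the-envelope sketch in Python; the guesses-per-second rate here is an invented assumption for illustration, not a measured Cellebrite figure:

```python
def keyspace(digits: int) -> int:
    """Number of possible numeric passcodes of a given length."""
    return 10 ** digits

# Invented guessing rate, used only to compare orders of magnitude.
GUESSES_PER_SECOND = 1_000

def worst_case_days(digits: int) -> float:
    """Days to exhaust the whole keyspace at the assumed rate."""
    return keyspace(digits) / GUESSES_PER_SECOND / 86_400

# At this rate, a six-digit passcode falls in under 20 minutes worst
# case, while a ten-digit one takes on the order of 116 days.
```

In practice the OS rate-limits guesses in hardware, which is exactly the protection that forensic tools try to bypass with exploits; that's why the raw keyspace of your passcode still matters.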

>> [applause] >> Yeah. I guess, what do we have, like 15 minutes or 10 minutes or something? Yeah, we have time for questions if anyone has any. And I hope you like the frog ears. >> To the point about using Signal, I can speak from experience that if you're using social media apps for communication, do not sync your contacts. They pull those into the cloud, and then those can be used internally. So don't sync contacts, people. >> Good advice. I know GrapheneOS, the Android ROM, has a contacts scope that lets you choose exactly which contacts to share. So, like, WhatsApp or whatever thinks

that you're syncing all the contacts, but you're only choosing a few limited numbers. And I think the new iOS supports this too, so you can still use those apps without giving them access to your contacts. >> So, total Sophie's choice, I know, but given the state of things right now, today: iPhone or Android, which would you say is least worst? >> Um, I don't know. I think it depends. I think it really sucks that there's a duopoly, and that, you know, the Trump administration gets to decide what anyone is allowed to install on any phone at all. I'm kind of excited about the new

iPhone 17 memory-safety protections. And I've been using an iPhone for a while, so I'm using an iPhone 17 right now. But I don't know, I think it's fine to use an Android phone, too. I think Android phones probably spy on you more, but iPhones kind of spy on you a lot, too. So I think that as long as you install all of your updates, try to use best practices, and disable all of the bad settings, it doesn't make that much of a difference. One thing, actually, though, in terms of security: it's easier to

detect spyware on iOS than it is to detect spyware on Android. And the reason is because iOS has much more verbose logs. Everything you're doing on your phone, every time you reboot, every time you do anything, it records all of this in something called the sysdiagnose log file, and you can extract the sysdiagnose and then look back in time through its history. Android doesn't have a verbose log file that gets to hundreds of megabytes. Instead, you can just take a bug report, which is a snapshot. So it's easier to detect malware on iOS, but that doesn't mean that you should

use it necessarily. >> Hi Micah, nice to see you. I try to have these conversations with friends, family, etc., but I often get pushback like, you know, it doesn't have to be this hard to use your phone, right? Or, I'm not an antifa super soldier, do I really need to? And a lot of the conversations tend to trail off, because the people I talk to, my friends, my family, people who are in my circle, don't think that this level of security hygiene applies to them. Are there any trainings that you've come across, or any way to bring this to my community in a way that might be more persuasive? >> That's a good question. I mean, one

thing is, most of this stuff actually isn't hard. It doesn't make things much harder. Like, lockdown mode makes your phone slightly more annoying to use, but honestly, you should just use your phone less anyway and you'll be happier. So I find that to be a feature of lockdown mode. But I mean, I don't know. I kind of think you can just tell people: yeah, maybe you feel like you're not directly threatened right now, but imagine that this is, you know, 1938 Germany, and you don't really need to worry about anyone raiding your house because you're not Jewish, or something like that. I think

that it's prudent for us to be prepared. >> Hi. Do you think a duress passphrase is helpful for dealing with device searches? >> Um, I don't know. I don't know of any empirical data about this. I mean, what I would say is that there's a lot of theoretical stuff that you can think about, things like duress passwords, but in the moment, especially if someone is assaulting you and threatening you with prison time, you might not remember your duress passphrase. You might decide to just give them your real password, because if you give them the

duress passphrase and it wipes the device, they might beat you even more. There's a lot of real-world stuff where this doesn't really apply. So I don't know if it's helpful or not, but if you're interested in that sort of thing, go ahead and do it. I think it really just requires more research, but I also think it's not that usable. It might work for you, but it's not going to work for everyone, you know? >> Hi. So, with mercenary spyware, or I guess spyware manufacturers, being moved to companies, and in particular companies that are now US-owned.

Do you think there are going to be any issues for security researchers who are analyzing or pulling apart this malware? Are copyright protections or intellectual property protections going to start coming into play as a way to maybe silence researchers, do you think? Or is it basically the same as when it was done in-house by government entities? >> That's an interesting question. I mean, the flip side of that: there's a big lawsuit where Meta sued NSO Group over exploiting WhatsApp, and they actually won a huge settlement, something like $230 million that NSO Group has to pay Meta. But yeah,

in terms of the US, I don't know. I mean, I think the good news is that Citizen Lab is not American, it's Canadian, and it's also associated with a research university. And then Amnesty International and Access Now are global NGOs. So I think there's a bit more leeway for researchers; even if you're American, you can work with those groups. And this is actually another thing, if you're interested in this sort of thing: there's a lot of demand right now for researchers to find spyware. So if you live in Portland, and a bunch of people just got arrested

by ICE and their phones were taken from them and then given back, then yeah, they should contact the Access Now helpline and get help. But also, you, as a security nerd, could volunteer to help try to get a copy of that sysdiagnose and try to dig through it. I think this could be a really good way to contribute to your community. >> Hi. I've got a challenge for you. Downtown here, there's building after building of low-income people who are not tech-savvy, and who are worried. You'll see them out on the street corners smoking. They're all over the place. They don't know how to keep

themselves safe. I would like to ask you to look out for your neighbors down here downtown. Do what you can, because they're very vulnerable. They've already lived through regimes; I mean, these are immigrants, people whose relatives immigrated. They need help, but they can't help themselves. And I know you're all smart enough to figure out ways to help your neighbors downtown here. >> This would be a good time to plug any sort of Portland mutual aid groups that do stuff like this, but I'm not from here, so I don't actually know what groups to plug.

>> Thank you. This has been really interesting. You mentioned on one of your slides, your know-your-rights slide, that if you're a US citizen, they have to let you into the country, but they can confiscate your device. Can they hold you for an indefinite period of time? Can they charge you with anything, like failing to cooperate? Have you heard of any instances of anything like that? >> So, I'm not a lawyer, but I have recently talked to a lawyer who basically said that for US citizens, they can't hold you indefinitely without charging you with something. I don't

know about charging you with failing to cooperate; I don't know if that would be enough of a crime. But basically, what a lawyer from the ACLU thought was that the biggest worries for US citizens are that they're going to confiscate your devices and try to hack into them. I think they can't hold you for more than something like two or three days. So they might hold you for a few days, but then they have to let you go. Unless, maybe, they break into your phone and find some excuse to charge you with a crime or something; then they

could hold you for longer. But yeah, I haven't heard of instances of US citizens just getting disappeared while traveling. That doesn't mean it hasn't happened. Maybe it happened and they weren't able to contact anyone, so no one knows. >> What's next for you? Are you working on anything interesting? >> Currently, I'm doing a bunch of research. I'm basically a consultant now; I'm self-employed. I'm actually working with Citizen Lab, and I'm working with Freedom of the Press Foundation doing a bunch of software development, and with other people. So, I don't know. I feel

like it's really hard to do any sort of planning [laughter] when everything's crumbling around us. But I'm just going to try to, you know, survive and keep people safe. >> Yeah. What makes you trust lockdown mode if you don't trust Apple? >> So, I don't think that lockdown mode necessarily means that you're safe. I don't think it means the device is secure. But basically, what it does is it cuts off common attack surfaces. So, okay, one of the things that it does is disable a bunch of features in iMessage. So when people try to send you

documents over iMessage, it doesn't work anymore. And when people send you links, it makes those links not clickable. So it's a lot harder to accidentally click on a phishing link and then have something exploited in Safari or whatever. So it's not that I trust it. It's just that the way modern spyware hacks a phone is by relying on a whole chain of exploits. First, maybe it exploits something in iMessage. Then maybe it exploits something in a font renderer, or it exploits something in Safari, or whatever. And what lockdown mode does is it just cuts off a bunch of

stuff. So, okay, one of the things that it does is turn off JIT, just-in-time compilation, in the browser. So JavaScript is a lot slower to run, and JavaScript-heavy applications are slower, but it prevents, you know, the memory corruption bugs from happening in JavaScript that would happen before. So I don't know if that answers the question, but basically it's like, yeah, [ __ ] Apple, but it reduces features that are commonly exploited. >> It was more referring to whether the FBI compromised... >> Oh, yeah. More like that.

>> Yeah. I mean, I don't know. I find it hard to not have a phone. [laughter] I think that what would be great is if the Apple and Google phone duopoly was crushed, and I could get, you know, a completely different phone with a completely different operating system from, like, Brazil or the EU or something. But that doesn't exist yet. I think that, you know, Jump is trying really hard to make sure that that sort of competition is going to happen. >> I guess somewhat related to that, I'm curious on your opinion around big tech, like, ignoring Zuckerberg and Meta and the craziness. I mean, right now it's

basically, you know, capitalism and self-preservation. Do you see there being a tipping point where leaders of those companies would essentially become actual fascists, not just collaborators out of monetary incentives? >> I mean, what's the difference? [laughter] Because another common definition of fascism is the merging of corporations and the state. And I think that what's really going on right now is we have these huge, massive monopolies in the United States that realize that they're not going to have someone like Lina Khan, they're not going to have their monopolies broken up, if they just flatter Trump. And so that's what they're doing. And now

they're, you know, getting paid. They're making a lot of money from it. So I kind of feel like that means they're fascists, right? >> What type of evidence do people look for when they're conducting a device search? >> So, like, what cops look for? >> Yeah. Like, what might be on someone's phone that they might get flagged for? >> So, Cellebrite's UFED is their Universal Forensic Extraction Device system. Assuming the device is unlocked, the first thing it does is it

tries to jailbreak the phone or root the phone: it uses exploits to get root, which then gives it full access to the file system, and at that point it can access the private app data for every app. They also support a bunch of custom plugins for different apps. So I think they copy all of the data off the phone, but then they also specifically look for messaging apps, and for photos; they probably have a custom module for the Facebook app, and

then they put it into their database so they have a nice way of reading your entire Facebook message history and everything else. Messaging apps are probably really high priority, but they'll take all of your contacts; they'll take whatever they can get. Then it's just a matter of the law enforcement investigators deciding what's useful to prosecute you. >> So, we are at time, unfortunately. Thank you for all the questions, and thanks for taking the time to answer them. You'll be around? >> I'll be around for a few hours. I'm

leaving early today, >> so we can still bug you. Let's give Mike a round of applause. Thank you so much for coming. [applause]

And we've got a Sasquatch with a black hoodie for you. So, thank you very much. We have about 15 minutes as we shuffle all the rooms around. There are a few workshops that still have space for walk-ins, but let the people who have pre-registered get in first; once that's taken care of, the walk-ins can be accommodated. We also have the community rooms. Fitted women's t-shirts will be available in the afternoon at registration. And I think that's all the things I was going to tell you. Thanks, and enjoy the day.

[music]

>> 11 o'clock presentation for track one. Come on in.

All right.

Track one. Track one. Okay. How's it going today, guys? I'm Eddieville Royale, rather new to the area. If you've seen me before, some of my students might be running around here; I teach cybersecurity out of Mount Hood Community College, so I had a chance to volunteer here today. Our first speaker here at 11 o'clock is Will. Will is the tech lead for detection and response at Databricks. His expertise lies at the intersection of threat detection and software engineering, specializing in detection engineering, attack simulation, and the practical applications of threat intelligence. Please give your warm attention and warm applause to Will. [applause] Thank you.

Thank you. Can everybody hear me? Okay. Awesome. All right, it's great to be here. So, a little bit about myself: I'm a security engineer at Databricks. Previously I've worked at companies like Stripe and Datadog, and I have over 15 years of experience in security toil. Let me just see a quick show of hands: who here has been on call in some capacity for their work? Right. Okay. Well, this talk is for you. I have been on call a lot: for vulnerability management teams, for detection engineering teams, and I've also taken the pager for service outages, things of that nature. And being on

call really sucks sometimes. So this talk is just about how I have tried to make life a little better for me and my co-workers. In this talk we're going to cover how, at Databricks, we reduced incident investigation time for service issues from around 15 minutes to under two minutes in most cases. This talk is really about the practical implementation of MCP servers for security operations. So if you've toyed around with LLMs or MCP servers in the past and you're interested in how to potentially use them in a corporate environment, we'll touch on all of that. We're also going to talk a little bit about some of the challenges

of using MCP in multi-cloud environments. So, I thought it'd be good to level set by talking about operational toil. I asked my good friend Claude here if he could define operational toil for me, and I actually think this is a pretty solid definition. The thing that really stands out to me about this definition is the emphasis on manual and automatable work. If we're doing something that requires us to put our hands on the keyboard and stop what we're doing, and it's something that could be automated, I've found that to be frustrating. And I think that as an industry

we like to work toward a state where the computers are working for us; we're not working for the computers. I've also found that during on-call rotations we rarely have time to implement the changes or the automations that are needed to make life better. Ultimately this becomes tech debt, and we all know that tech debt is notoriously difficult to pay down. On-call rotations, by their nature, are interrupt-driven, and we all know that when you're pulled out of a project because you received a page, and suddenly you're looking into an alert or an incident, it's exceptionally

disruptive to our productivity. There's a workplace researcher at UC Irvine named Gloria Mark who did a bunch of research about a decade ago into workplace interruptions, and she found that it takes about 23 minutes for an employee to regain focus on the task at hand after being interrupted. I'm sure many of you, if you work in an office, have had that experience where you're in the office Monday morning and your co-worker comes by and wants to ask how your fantasy football team did, when you're really just trying to focus on whatever project you're currently working on. It's very hard for our brains to pause one task to

quickly switch to something else and then switch back. And Dr. Mark found in her research that these impacts on our cognitive context tend to compound: the more we get interrupted, the harder it is for us to refocus on whatever task we were originally working on. I think this is really exacerbated in the cybersecurity industry. As a detection engineer or a security engineer, we have to build complex mental models around threat actor TTPs; we have to understand service architectures and network topologies. So we're constantly getting pulled into other types of activities when we're on call. It's just exceptionally disruptive, and at the

end of the day, I think a lot of this leads to burnout, which we know is a huge challenge in our industry. So what are some things we can do to address that? Well, the first thing is we need to look at what we're doing today. And if you look at security teams today, I think we'd all agree that a lot of security teams scale unevenly. When I say unevenly, I mean that regardless of your expertise, your background, and your tenure, what I've found in my personal experience is that someone who's been at a company for two or three years is going to be more effective at responding

to security incidents than someone who's been there for three months. In the past, we've tried to address this through playbooks, but playbooks are really an 80/20 solution. They require that we tabletop, or imagine, all of the different ways we'll need to investigate an incident or respond to a scenario in our environment. But that 20% is where we really struggle: the 20% that we haven't anticipated and haven't accounted for. They also require continuous upkeep; rarely have I used a playbook that didn't need some kind of update or tuning. This all just becomes tech debt that, again, is really challenging to

maintain in the heat of the moment. Finally, I think it's important to point out that when we're working with a playbook, we still have to fundamentally understand the problem at hand. If you get an error message, or you're working an incident in a technology you have no understanding of, even with a great playbook, you may really not know how to use it. So these are all challenges that we have to deal with. When we're thinking about cybersecurity incident response, to me, the person that has the pager is really the first person to the scene of the crime. So if you want

to make an analogy to an EMT: the EMT is at the scene of the accident as quickly as possible, and their job is to stabilize the patient and ensure that they get any critical care that's needed right away. But it's not their responsibility to take that patient from the moment of the accident through the long arc of their recovery. In incident response, we're functioning the same way. When we take the page, it's our job to triage the incident as quickly as possible, to try to understand the what and the why of what happened, and ideally to reduce the cognitive penalty we're paying when we're pulled out

of the other work we're trying to do. So what does an ideal incident triage solution look like? To me, in my experience, an ideal solution can quickly get to the what and the why of a security incident, regardless of whether you're troubleshooting a failed service or some kind of alert that's been triggered in your SIEM. Getting to the what and the why is very important, because ultimately, as the first responder, you need that context to understand how to escalate the situation. I think it's also really important that a solution shows its work. Particularly in the age of large language models, where we have concerns about

hallucinations, it is really important that when you are reviewing what's happened in an incident, you can understand exactly how a triage solution reached the conclusions it did. And finally, I think a good triage solution needs to tell you what to try next. It doesn't necessarily have to solve the problem for you, but being able to point to the next steps you should take to investigate an issue, the next set of questions you should ask of the system, ultimately makes your incident response process much easier. So, what does this have to do with Databricks? Let me tell you a little bit more about our detection response

environment at Databricks and how we've been able to use some of these tools to help us out. Databricks is a multi-cloud company. As a security team, we are performing continuous monitoring across AWS, Azure, and GCP, plus a few other custom environments. We are monitoring over 50 cloud environments, and in each one of those regional deployments we have hundreds of batch and streaming jobs supporting thousands of individual detections: everything from detections monitoring CloudTrail logs to endpoint-based detections. And things go wrong, right? Anytime you're operating at that scale, there's just a host of issues that are going to

come up. The first and probably most common issue is delayed log delivery: an upstream provider problem, which we see particularly with SaaS products, where there's either a gap in logs or a delay in log delivery. We also have cloud provider outages; I think we all got a reminder about that on Monday of this week. And then we also have artificial issues: quotas and rate limits. If you find that your workloads do really well on one instance type in most AWS regions, that instance type may not be available everywhere. So every region is a little different, and we have to be able to account for that. And

finally, people make mistakes, right? How many of you have ever shipped code that's broken something in production? Right. Okay, there are probably more of you than raised your hands, but yeah, I've broken production. People break production all the time, despite the fact that we have unit tests and integration tests and CI/CD. These are just things that happen. So, like I said, we have this regional deployment model. Imagine 50 of these regional workspaces, all funneling data into really common security operations tools: PagerDuty, Slack, and Jira. And then ultimately, when something breaks, it goes to my phone, and then I'm really sad because I have to stop what I'm

working on and pivot to a new project. So here's a very common case study of a failed Spark job in a single region, just to give you a sense of what it takes to dig in and investigate one of these problems. First of all, I have to pull out my phone and acknowledge the page: click the button on my Apple Watch, click the button on my phone. Then I have to log into PagerDuty and pull up the incident. I've got to find the right incident, read the error message, figure out which region is involved, and

potentially which job or which data source is implicated. Then I've got to open up a whole new set of browser windows and log into that region. I've got to find that job, and its most recent execution, or where it errored out. At that point, I'm like 5 or 10 minutes into this process, and I've completely forgotten what I was working on before I got the page. I'm reading an error message about delayed log delivery, or us-east-1 having a problem. And then, after I grok all of that, I have to actually decide what to do next, right? Is this a job that I

know a co-worker was recently working on when they were shipping new detections, so I should just ping them and see if they can help out, or do I need to escalate this to a different team? It's just really frustrating. And like I said earlier, I think the computers should be working for us, not the other way around. Earlier this summer, I just had a really terrible on-call rotation. There was one single day where I got paged like 60 times, and I just got to the point where I said, you know, I'm not doing this anymore. There's got to be a way to improve this

process. And so I committed to myself that during my next on-call rotation, I was going to do everything I could to use AI-assisted tooling to try to make this less painful and more consistent across the entire team. When I started down this road, my first thought was: I'll just use an LLM to solve all of these problems, right? And an LLM makes sense for a lot of reasons. They have the ability to consolidate a lot of disparate pieces of information into a cohesive story that tells us a lot about our triage process. They have the ability to traverse complex code bases. At Databricks

we use detections as code, and we also have infrastructure as code, so I'm able to expose all of that content directly to the LLM. When it's reading an error message, it can actually go look at the source code itself to try to identify the specific problem. And finally, LLMs have a lot of subject matter expertise. They know a lot about computers, and even though they get things wrong occasionally, they can generally point us in the right direction, particularly if we're dealing with something we don't have a lot of experience with ourselves. So my back-of-the-napkin drawing of how this would work would be

some kind of operational agent: some kind of solution that had the ability to reach into all of our regional workspaces, to reach into Slack, to reach into Jira, and to really be able to give us the information we needed to triage incidents faster. About a year ago, Anthropic released an open specification called the Model Context Protocol. This is designed to be a data interchange between large language models and services and tools. Essentially, what this allows you to do is define functionality; to make an analogy, you can essentially define a Python function, where the function accepts input, does

something, and then provides output. And we're giving the models the ability to discover these functions and then to invoke them on our behalf. This was a huge hit as soon as it came out. These are some screenshots from the Model Context Protocol GitHub page from a few weeks ago, showing just a few of the MCP servers that have been released. As you can see, it's everything from MongoDB to Microsoft Teams to all kinds of AWS and GCP services. Like I said, we're using Jira and PagerDuty; they all have great support. These are essentially

tools and functionality that enable these LLMs to reach into these services and interact with them much the same way that you would if you were interacting via an API. This hasn't been without its problems, though. The Model Context Protocol domain, specifically at the intersection of AI and cybersecurity, has been very active. There have been a lot of findings, and I would say that as a community we're still learning how to safely use these tools. We have had multiple issues with supply chain attacks, where MCP servers get compromised in some way and are suddenly exposing information to outside sources, or data is being

leaked in a way that we didn't intend. So if these are solutions you're going to investigate, I would just encourage you to tread lightly, and certainly, when in doubt, use them in a read-only context. I'm not recommending that you allow these tools to run rampant in your codebase without supervision, but they are really effective. So, here's an example of one of the integrations that we've set up with PagerDuty and Databricks. This was the first version of the incident response agent that I wrote. And as you can see here, in

this specific instance, I have Claude pulled up, and I just say, "Hey, can you investigate this PagerDuty incident?" and I give it the ID number. What we see is Claude logging into PagerDuty on my behalf. It pulls a list of incidents that are assigned to me. It's then able to figure out that the specific incident I've referenced is an issue in us-west-2. And I've given it the ability to log into us-west-2 and take very specific actions related to troubleshooting: the ability to see what's running in that environment, the ability to see the status of the jobs,

the status of the workloads that are there. It goes in and takes all of these actions autonomously; the only information I have given it is the PagerDuty incident number. It then gives me a really solid summary of what happened. In this case, we have a Spark job that is running a lot longer than it's supposed to. It's supposed to execute on a 15-minute time window; we're bringing data in and analyzing it for security purposes, and when you have a sudden surge in data, sometimes jobs take longer than that, and it results in a problem. So it goes in and does an analysis of all of the historical

executions of this job and gives me, frankly, some really useful information that would have taken a lot of time for me to collect, in a very short time. It also gives me some next steps and some recommendations. Based on the incident, the error message, and the context it's collected, it says, "Hey, maybe you should troubleshoot next by reaching out about this specific service. Consider rescheduling the job so it runs on a different schedule," and things of that nature. Now, why does this matter for productivity? Like I mentioned, if I'm getting paged 50 times, I don't have 10 minutes to go

through and respond to every single page and figure out what's going on. So, at a minimum, a solution like this allows you to speed up your response time. If you integrate this with your SIEM, your SOAR, any of the internal security tools you're using, you have the ability to basically respond more quickly. The other thing that I like about this is it has allowed us to completely change the way that we manage our PagerDuty schedules. A lot of the SLAs we are dealing with internally are typically around an hour for something like this. So we've been able to reschedule the way that

alerts get delivered, so that I can have 30 minutes when I'm not being paged and not being bothered, knowing that if we do get slammed with pages, I'm going to be so effective in the next 30 minutes I've devoted to triaging alerts and pages that it's not going to be a problem. This essentially allows us to focus more on our day-to-day work, our project work, the things we're responsible for over quarters or months, as opposed to constantly fighting that battle. So, just a few more ideas I want to seed with you around how we're using MCP in our

environment and how it may be helpful for you. There are a few other ways that we are leveraging MCP for detection and response. First, we found that large language models, if they have the ability to access your detection codebase, can do a great job of coverage analysis. They're also really good at taking threat intelligence reporting, reprocessing it, and figuring out where you potentially have gaps in coverage. And they're great at answering really lazy questions. If your manager pings you and says, "Hey, do we have coverage for that vulnerability in our Azure environment?" you can literally just copy and paste that message into an AI-assisted tool and

be able to get a very quick answer. They're also great at answering vague questions like, "Is us-east-1 broken today?" and they can get you an answer very quickly. Finally, we've also been using large language models to tune and improve our detection rules. We can take false positive feedback from our incident response team and our SOC, and we've been able to use MCP and some vibe coding tools to essentially automate the process of tuning false positives. We have also found that LLMs are really effective at creating additional true positive and false positive test cases. If we've envisioned certain scenarios where a false positive may occur, they're able

to go in and provide additional context, which has been very helpful. So anyway, I hope this gives you some ideas for how you might be able to use these tools in your environment, and I really appreciate your attention. Thank you. [applause]
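The function-in, function-out tool model described in the talk can be sketched without any real MCP SDK or live services. Everything below is hypothetical: the tool names (`get_incident`, `list_job_runs`), the incident data, and the registry shape are invented for illustration, and are not Databricks' or PagerDuty's actual APIs.

```python
# Toy sketch of the MCP tool pattern: register functions with names and
# descriptions, let the "model" discover them, then invoke them by name.
# All tool names and returned data are made up for illustration.

TOOLS = {}

def tool(fn):
    """Register a function so a model could discover and invoke it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_incident(incident_id: str) -> dict:
    """Return a (fake) incident summary, as a PagerDuty tool might."""
    return {"id": incident_id, "region": "us-west-2", "error": "Spark job timed out"}

@tool
def list_job_runs(region: str) -> list:
    """Return (fake) recent job runs for a region."""
    return [{"job": "detections-stream", "region": region, "status": "TIMED_OUT"}]

def discover() -> list:
    """What the model sees when it lists available tools."""
    return [{"name": name, "description": fn.__doc__} for name, fn in TOOLS.items()]

def invoke(name: str, **kwargs):
    """What the model does: call a tool by name with structured arguments."""
    return TOOLS[name](**kwargs)

# A triage pass chains tool calls: incident -> region -> jobs in that region.
incident = invoke("get_incident", incident_id="PD-1234")
runs = invoke("list_job_runs", region=incident["region"])
print([t["name"] for t in discover()], runs[0]["status"])
```

In a real deployment the registry would be served over the MCP protocol and the functions would call the PagerDuty and Databricks APIs; the discover/invoke split is the part the protocol standardizes.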

>> And with that, we have about four minutes if you guys want to ask Will any questions.

>> Seems like there's also a use case here for answering stupid auditor questions. >> Yeah, so the question was: it seems like there's a use case for LLMs to answer questions from auditors. I would completely agree. In fact, I would argue that you could take most of the content you're receiving from auditors, as well as internal documentation, and have the LLMs largely automate that process, and potentially anticipate things they haven't asked you about yet but may ask about in the future. >> [laughter] >> Yes. >> Other than "seems like a legit company," how do you vet? >> Yeah, great question. So, the

question was: how do you vet which MCP servers are safe to use? I think this is largely a software supply chain problem. As an individual practitioner, I would encourage you to look at where the MCP server is coming from. Is this coming from someone's personal GitHub page, where they have implemented their own PagerDuty MCP server, or is it coming from PagerDuty themselves? Also, really common software hygiene things: how many bug reports are open on this GitHub repository? Do they have CI/CD tests that validate the code when it's published? Ultimately, at the end of the day, though, I think

the best options are, one, ensure your legal team has signed off on you taking the data you're working with and using it in this context; and two, only use these things in a read-only capability. We aren't allowing these tools to take any actions on our behalf yet; we're simply using them to speed up our response process. >> I had a question kind of along those lines. You mentioned having it log into PagerDuty and some of the other services. Are you creating a new read-only account for that sort of service, or is this usually your own actual account? >> Yeah, that's a great question. So,

Databricks is a little unique because we're actually a model serving provider: we allow companies to host large language models on our infrastructure. So when we're working with internal security data, we're actually using models that we host. In this case, though, with the example that I gave, that's actually running in Claude under the context of my personal account. So it's essentially piggybacking on the credentials I have already set up to be able to act in that way. I think the answer is both, but it's going to depend on your environment.
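The read-only stance that comes up in these answers can be enforced mechanically rather than by convention. This is a hedged sketch, assuming tools are a plain dict of callables; the tool names are invented for the example, and a real MCP client would lean on server-side permissions or service-account scopes instead.

```python
# Deny-by-default wrapper: a tool runs only if it is on an explicit
# read-only allowlist, so a mutating tool can never fire by accident.
# Tool names and behavior here are hypothetical.

READ_ONLY = {"get_incident", "get_job_status", "list_job_runs"}

def guarded_invoke(tools: dict, name: str, **kwargs):
    """Invoke a tool by name, refusing anything not marked read-only."""
    if name not in READ_ONLY:
        raise PermissionError(f"tool {name!r} is not read-only; refusing")
    return tools[name](**kwargs)

tools = {
    "get_job_status": lambda job: {"job": job, "status": "RUNNING"},
    "restart_job": lambda job: {"job": job, "status": "RESTARTED"},  # mutating
}

print(guarded_invoke(tools, "get_job_status", job="detections-stream"))
try:
    guarded_invoke(tools, "restart_job", job="detections-stream")
except PermissionError as exc:
    print("blocked:", exc)
```

Pairing a guard like this with a dedicated service account, as suggested in the last answer, gives you both containment and a clean audit trail.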

>> Thank you. >> Hello. Thanks for a very great presentation. I was just wondering, from an audit point of view, when you have an agent or an MCP server logging in on your behalf, does it identify that request as being not from you personally? >> Yeah, that's a great question. I think it's going to depend on the logging you have with your SaaS provider. There are a lot of challenges with SaaS provider logs. For instance, when I'm logging into PagerDuty, you'll see my Chrome user agent; you'll see my user having gone through the process of tapping my YubiKey several times and

going through the OAuth flow. When these services are interacting, it's coming from source code, so either a Python user agent or a user agent associated with the MCP server. So I think it'll largely depend on your environment. But again, if you're going to go to production with this, I would encourage you to use service accounts, in which case it becomes very auditable. All right, I think that's all the time we have. Thank you so much. [applause] >> Thank you guys. Give it up for Will. All right. Thank you. [music]

[music]

>> security agency issued a public cybersecurity advisory urging organizations to remediate this issue. This was no longer just an open-source security issue, but something of national security importance. And this is because tj-actions was not just any component, but something that was being used by more than 23,000 repositories at that time, including repositories from prominent organizations such as Meta, Microsoft, GitHub, Hugging Face, and so on. Here is the agenda for our talk. Mark and I are here because we responded to this incident together. StepSecurity detected this incident through baseline event monitoring of CI/CD runners. Chainguard was one of the first responders and played a crucial role in coordinating the community response by

helping affected organizations recover from the incident. My name is Ashish Kurmi, and I'm a co-founder and CTO of StepSecurity, a cybersecurity startup focused on CI/CD and software supply chain security. Before founding StepSecurity, I spent more than a decade securing infrastructure at Plaid, Uber, and Microsoft. >> Hi, I'm Mark. I'm a senior product security engineer at Chainguard, where I focus on securing and hardening our supply chain. Before Chainguard, I ran networks in academic spaces and later worked on the Ubuntu security team, and this really shaped how I think about vulnerability triage and disclosure. I've also co-authored the OpenSSF Compiler Options Hardening Guide for C and C++, which is a community

effort to help C and C++ projects adopt safer build and flag defaults. So, it's Friday afternoon in Seattle, March 14th, and most people are winding down for the weekend when my colleague Evan Gibler spots a LinkedIn post from StepSecurity, something about a compromised GitHub Action. Moments later, StepSecurity reaches out to us directly. What happened? Over 23,000 repositories are using tj-actions/changed-files in their GitHub workflows, including our own. Let's just say "are we compromised?" is not a great way to start the weekend. As soon as we learn about the compromise, we search our repositories for any workflows using the affected action with mutable tags. A mutable tag is a tag name like latest,

v42, or main that could later be moved to point to a different commit. Fortunately, every instance of tj-actions/changed-files in our codebase was pinned to a full-length commit SHA, not a mutable tag. Following our internal best practices, we always pin third-party actions, and we were saved from leaking credentials when the upstream was initially compromised. We were fortunate that Dependabot had updated our actions just a few hours before the compromise occurred. Still, to be cautious, we disabled Dependabot to prevent it from bumping tj-actions again, or automatically updating to other potentially compromised actions. Internally, we found no evidence of compromise, but we knew that the upstream packages we

build from might have been impacted. To stay safe, we temporarily disabled automated package builds for our distro until we could verify that our upstream supply chain was safe.
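The pinning practice described here is easy to check mechanically. Below is a minimal sketch, assuming workflows are available as text: flag any `uses:` reference whose ref is not a full-length, 40-hex-character commit SHA. The workflow snippet is a made-up example, and a real scanner would also need to handle reusable workflows, composite actions, and Docker references.

```python
import re

# Flag `uses:` lines in a GitHub Actions workflow that reference a
# mutable tag (v4, latest, main, ...) instead of a full commit SHA.

PINNED = re.compile(r"uses:\s*[\w.-]+/[\w.-]+@[0-9a-f]{40}\b")
USES = re.compile(r"uses:\s*\S+@\S+")

def unpinned_actions(workflow_yaml: str) -> list:
    """Return `uses:` lines that are not pinned to a full-length SHA."""
    return [
        line.strip()
        for line in workflow_yaml.splitlines()
        if USES.search(line) and not PINNED.search(line)
    ]

# Hypothetical workflow: one mutable tag, one SHA-pinned action.
workflow = """
steps:
  - uses: actions/checkout@v4
  - uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567
"""
print(unpinned_actions(workflow))
```

Note that pinning only helps if the SHA itself was reviewed: a tag can be repointed after the fact, as happened in this incident, but a full commit SHA cannot silently change content.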

So this slide shows a sample GitHub Actions workflow. The job performs a Terraform deployment any time there's a change in the infrastructure or Terraform directories. The workflow has access to an AWS access key to perform the deployment, and it uses four open-source actions, referencing them by their release tags such as v4, v44, and so on. If you ran that workflow again and again and monitored the outbound network calls being made on the CI/CD runner, you would see something like this. This is a network baseline created by running that workflow more than 2,000 times. Now, this is a screenshot from StepSecurity Harden-Runner, the product that detected this incident, but the focus of this talk is the detection technique, not the tool itself. In the later part of the presentation, I'll also talk about how you can build a baseline-driven monitoring system yourself using a few open-source tools.

What happened on the 14th of March was that there was a new call to gist.githubusercontent.com from pipeline jobs that had never made that call before, and that is how the detection was triggered. You can see that this call is coming from the tj-actions/changed-files action step. When we looked at the process events, we could see that this was a curl call downloading memdump.py. Next, we looked at the release tags for the action, and we realized that just three hours prior, the latest release tag for this action had been updated to point to a malicious commit. In fact, not just the latest tag: all the existing release tags were updated to point to the same malicious commit. When I opened this commit in GitHub, I noticed this message in the yellow text box: "This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository." How is that possible? Enter the impostor commit. Impostor commits are commits that do not exist in the original repository. Instead, they exist in a fork of the repository, but because of the way GitHub's APIs work, these commits are accessible from the original repository. So, in a way, it's like a ghost commit: visible, but not really there.

Now, let's look at the tj-actions impostor commit and understand what it was doing. Look at this Base64-encoded string. It's being decoded and executed as a script. Now, let's Base64-decode the string. What you're seeing here is an attack that only works on Linux runners. It downloads a file called memdump.py from a very well-known public GitHub gist. Several security researchers had cited this public gist in their research prior to the supply chain attack. Now let's look at the content of memdump.py. It goes through the list of running processes on the host and looks for a process named Runner.Worker. Once it finds that process, it opens it and dumps its entire memory. So this script is essentially hunting for the Runner.Worker process. Why this process? When GitHub detects that a workflow needs secrets, it makes them available to Runner.Worker, and memdump.py dumps that process's entire memory.

Now, let's go back to the impostor commit and look at the string format. This is exactly how Runner.Worker stores CI/CD secrets in memory, so the impostor commit is searching through the memory dump to find CI/CD secrets. And here is another clever technique the attackers used. See this double Base64 encoding? That's not a mistake. Why double Base64 encoding? It turns out GitHub automatically masks Base64-encoded CI/CD secrets in build logs, so with single Base64 encoding the exfiltrated secrets are masked. With double Base64 encoding, GitHub no longer treats them as secrets and leaves the exfiltrated data in the build logs as-is. Finally, the output is printed to stdout, which makes it available in the GitHub Actions workflow logs.

So now let me show you how this looks in action. We created an end-to-end demo of the attack. For this demo, we made a compromised clone of the changed-files action under a new organization called tj-actions-clone. What you are about to see is an actual credential theft happening in real time, so please pay close attention to the build logs. When the workflow is executed, we see the encoded secrets in the build logs. Now let's double Base64-decode this, and you have every secret from the workflow: AWS credentials, GitHub token, everything. This is the network baseline after that run, and you'll notice the baseline is now unstable, because the platform observed a new endpoint, gist.githubusercontent.com, seen for the first time in the last run.

So, let's review how the attackers were clever and tried to hide what they were doing. The tj-actions impostor commit downloaded the exploit code from a GitHub-owned domain, gist.githubusercontent.com. This domain has a high reputation rating; almost all endpoint detection and response (EDR) agents and other runtime security solutions already trust it. On the surface, the commit history looked normal because they were using impostor commits, so commit activity in these repositories looked normal even during the incident. And if you go through the branches on these repositories and look through the commit history, you will not find any of the malicious code, due to the use of gists.
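The baseline-driven detection described above (flagging the first-ever call to gist.githubusercontent.com) can be sketched in a few lines of Python. This is a toy model written for illustration, not Harden-Runner's implementation: it treats the baseline as the set of endpoints seen across prior runs, and the endpoint names are hypothetical.

```python
def check_run(baseline: set[str], observed: set[str]) -> set[str]:
    """Return endpoints contacted in this run but never seen in the baseline."""
    return observed - baseline

# Baseline built from many prior runs of the workflow (hypothetical endpoints).
baseline = {"registry.terraform.io", "github.com", "sts.amazonaws.com"}

# The compromised run suddenly also reaches out to a gist.
run = {"registry.terraform.io", "github.com", "sts.amazonaws.com",
       "gist.githubusercontent.com"}

anomalies = check_run(baseline, run)
if anomalies:
    print("ALERT: new outbound endpoints:", sorted(anomalies))
```

In a real system, endpoints confirmed across many runs would be merged into the baseline, while a first-time endpoint like this one raises an alert instead.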

The attackers used several GitHub users to compromise the repositories. In some cases, they tried to impersonate legitimate users; in the case of the tj-actions impostor commits, they attempted to impersonate the Renovate bot. Many security tools would not flag any of this as suspicious. So, by Saturday morning, we knew that our systems weren't directly affected. But the bigger questions were: who is affected? Is our supply chain impacted? And whose secrets are public on GitHub right now? To review our supply chain and the 23,000 potentially affected repositories, we needed automation to check for indicators of compromise, or IOCs. My colleague Evan Gibler created a tool called GHC Scan.

At first it was simple, but as we used it throughout the weekend, GHC Scan evolved into a more robust scanner. It works by going through GitHub repositories, fetching workflows either through the GitHub API or through the web UI when the API was unavailable, and searching the logs for IOC patterns. It includes straightforward features like setting target repositories, date ranges, and output formats. It's written in Go, supports concurrent scans, and validates that the first round of Base64 decoding yields valid Base64 before attempting a second decode pass. For the tj-actions compromise, GHC Scan looked for two key indicators: the malicious commit SHA (starting 0e58ed8) and Base64-encoded data matching the exfiltration pattern. In this example, we're using Octo STS for ephemeral credentials, but a long-lived PAT could also have been used. GHC Scan was designed to be extensible so that we can reuse it in the aftermath of future incidents, and we published it as an open-source tool on March 18th, four days after the attack began.

Through GHC Scan, Evan was able to identify leaked credentials across the GitHub ecosystem while I worked on reporting confirmed compromises to upstream projects throughout the weekend. We began by scanning and reporting leaks for all the upstream source code in the supply chain our distro uses to build packages. Once that was complete, we expanded to every repository potentially impacted by the tj-actions compromise. By Sunday afternoon, here's what GHC Scan found: 465 repositories were affected, with over 200 types of secrets leaked. Most of the secrets were ephemeral GitHub tokens, which hopefully expired before the leaks were discovered and potentially abused. But we also found long-lived credentials like PATs, signing keys, AWS keys, and Cloudflare credentials. For the long-lived credentials, we reached out directly to the organizations, often through email. This entailed reporting leaks to over 60 organizations, which included four major Linux distributions, Microsoft, businesses, open-source projects, and multiple governments. We also reached out to individuals with leaked keys.

Several government agencies were impacted. Most leaks, like the one that affected NASA, involved ephemeral credentials, but GSA's Notify.gov was particularly concerning. Notify.gov was a short-lived GSA platform, running from 2023 to 2025, that allowed government programs to text the public to meet people where they're at: for example, sending Medicaid renewal reminders to help families keep their coverage. Its admin and Terraform credentials were among those leaked, which obviously could have caused significant damage if abused. We wanted to raise the severity of this urgently and directly to CISA. Thanks to members of the OpenSSF Slack, I was connected to a CISA coordinator, and after verification over Signal, we filed a report about the leaked government credentials Sunday evening. Between the tj-actions compromise and other DOGE happenings, I can only speculate why Notify.gov was sunset. Unfortunately, as of now, no replacement service has emerged.

I want to pause to emphasize that reporting vulnerabilities and compromises truly matters. Ashish's public report enabled many others to take action quickly after the compromise was discovered: a perfect example of how transparency accelerates defense. The technical work of identifying compromises must be complemented by the social work of reporting and coordinating response to effect change. That's what turns isolated discoveries into real-world impact. If you're interested in the human side of security, communication, coordination, and disclosure, please find me after the talk; I'm happy to talk about coordinated disclosure. A great resource for this is OpenSSF's Vulnerability Disclosure working group. They share practical guidance about coordination, and they host a biweekly office-hours-style session where you can ask questions and seek advice.

In the last section of this presentation, let's look at some concrete recommendations and lessons we can learn from this incident. The first one is about security monitoring for CI/CD runners. A lot of organizations have security agents on their desktops and laptops, and similarly they use cloud EDR solutions to protect their cloud workloads. But CI/CD runners, or build servers, typically have zero runtime security monitoring. In this talk, we saw how StepSecurity Harden-Runner was able to identify this compromise, but you can also build a baseline-driven monitoring system for CI/CD runners yourself using open-source EDR solutions such as Wazuh, Falco, and Tetragon. These solutions provide runtime telemetry events such as network connections, process events, and file events. You can take all this information, correlate it with the pipeline run, build a baseline, and then use that baseline for anomaly detection.

In August, GitHub added a policy to block unpinned actions and enforce SHA pinning in workflows. This lets orgs opt into requiring that every action reference a specific commit, preventing tampering with mutable tags. Two tools I recommend to help with pinning are pinact and zizmor. pinact automates converting mutable tags into proper commit pins. zizmor is a best-practices tool: it scans CI workflows to spot unpinned actions, exposed credentials, and other risky patterns. GitHub also has an immutable releases feature for upstreams, which can lock a release tag so it can't be changed or deleted later. This means downstream consumers can trust that a release tag always points to the same commit. As a rule of thumb, assume that all long-lived credentials will eventually be leaked, and avoid them whenever possible. GitHub Actions already issues short-lived tokens by default, and services like Octo STS can extend this model further. Similarly, Sigstore enables short-lived signing keys, so even if something is leaked, the impact is minimal. So, thank you for attending our talk, and thank you for everyone's attention. [applause]

>> Okay, and we have about five minutes of questions for Mark and Ashish. Again, give it up for Mark and Ashish. [applause]

>> I think you had your hand raised first.

>> These actions seem sort of simple but are used so widely. Do you think GitHub should consider incorporating some of this functionality into the core feature set, so that these widespread actions have less of a chance to affect everyone?

>> Yes, definitely. They have added some already, but I'll let you add more.

>> Yeah, I think that's an interesting question, because, like Mark mentioned, we have seen GitHub take over some of these popular actions. There are a bunch of actions for setting up common language environments like Go and Python, and I believe GitHub already owns those. But regardless of how many actions GitHub ends up owning, there will always be a big ecosystem of open-source reusable components for GitHub Actions. That is one of the reasons GitHub Actions is so popular compared to other CI/CD providers: it was sort of the first CI/CD provider that built a native ecosystem of reusable open-source components. Based on what I read last, it has more than 25,000 reusable components, and many of these actions are highly popular. So regardless of what GitHub is doing, I think there will always be an ecosystem of these open-source actions. To your point, yes, I believe GitHub as a platform provider can definitely add more features to make them secure by default, and GitHub is already taking steps in that direction.

>> Your talk covered a little bit how the threat actor pretended to be Renovate. I was wondering if you could speak a little more to that, or how they got in. And also, is there any information about the threat actor?

>> So let me see, we have a slide for that that we could not cover. Oops, sorry. Yes. So, this was the first example of a chained CI/CD supply chain attack. Even though the attack was detected in the tj-actions/changed-files repository, it actually originated in the spotbugs/sonar-findbugs repository. This slide shows the chain the attackers followed to eventually compromise the tj-actions/changed-files action. It basically started with a "pwn request" vulnerability in the spotbugs repository. The attackers exploited that, and through it they were able to steal a maintainer PAT. Using that PAT, they added another GitHub user to the spotbugs/spotbugs repository. From there they stole another maintainer PAT, which led to the compromise of reviewdog/action-setup. The tj-actions/eslint-changed-files action had a dependency on the reviewdog action, and tj-actions/changed-files had a dependency on the eslint-changed-files action. So this was a chained compromise, and based on the available evidence, it took the attackers more than 10 days to traverse all these different repositories before they eventually got caught at the tj-actions/changed-files repository. To answer your second question, there is currently no public evidence tying this breach to a particular threat actor. Obviously, as you can see, they put in a lot of time and effort, and they were well familiar with the internals of GitHub Actions. They managed to remain in stealth mode for a while and then eventually got caught at the changed-files repository. But based on publicly available information, we don't know who was behind these supply chain attacks.
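The chained compromise described in this answer is essentially graph reachability over action dependencies: compromising one action exposes everything that directly or transitively depends on it. As a toy illustration (the edge list below is a simplified sketch drawn only from the chain named in the answer), a few lines of Python can compute the transitively exposed set:

```python
from collections import deque

# "X depends on Y" edges, from the chain described in the answer.
depends_on = {
    "tj-actions/changed-files": ["tj-actions/eslint-changed-files"],
    "tj-actions/eslint-changed-files": ["reviewdog/action-setup"],
    "reviewdog/action-setup": [],
}

def exposed_by(compromised: str, depends_on: dict[str, list[str]]) -> set[str]:
    """Every action that directly or transitively depends on `compromised`."""
    # Invert the edges: who depends on whom.
    dependents: dict[str, set[str]] = {}
    for action, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(action)
    # Breadth-first search upward from the compromised node.
    exposed, queue = set(), deque([compromised])
    while queue:
        for parent in dependents.get(queue.popleft(), ()):
            if parent not in exposed:
                exposed.add(parent)
                queue.append(parent)
    return exposed

print(sorted(exposed_by("reviewdog/action-setup", depends_on)))
```

The same reachability idea, run over a real dependency inventory, is how a defender would decide which of their own workflows a compromise like this one puts in scope.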

>> Thank you very much. >> Thank you. [applause] >> Ladies and gentlemen, Mark and Ashish. Thank you.

>> Okay, great. How's it going, guys? Hey, it's your MC here, Eddie V, again. We have two more speakers here, Rick and Joao. Rick is a Linux kernel engineer who works on security-related features, virtualization, and memory management. Joao is a systems security researcher passionate about compilers, OS internals, and digging deep into low-level bugs. So you have the kernel engineer and the security researcher putting together a team to entertain you and provide some great knowledge. Give it up for Rick and Joao. [applause]

>> Morning, folks. It's really exciting to be here. I've been living in Portland for a while, always aiming to talk at BSides PDX, and this year the dream came true. So yeah, really cool, especially presenting with Rick, a co-worker I had the pleasure of working with back at Intel when I was still there. We did some cool things together, which we're going to talk about right now. I also need to mention what an honor it is to know that you decided to be here instead of going to get a nice lunch. That means a lot. Thank you for being here. So let's go. First

thing: some quick disclaimers, for the sake of Rick's employer. But let's jump straight into it. What we're going to talk about here comes from the control-flow hijacking kind of problem. Raise your hand if you're a low-level person, if you understand memory corruption and all of that. Okay, we have a good amount of people here. Cool. So, we're talking about control-flow hijacking, and I might get a little nitpicky here because some people may not understand what we're talking about, so I'll try to give you the background.

If you're writing code in C or C++, you have bugs that sometimes allow attackers to manipulate memory in ways that were not intended in the first place: problems with pointers, array bounds checking, buffers, and so on. Let's say you have strings that are badly manipulated; eventually an attacker might figure out a way to write beyond the limits of a buffer and corrupt extra data lying next to it. In C you also have what we call code pointers: function pointers and return addresses, which are data that is ultimately used to redirect the control flow of your program. Say you have a function pointer that's supposed to call a function foo; it could instead end up holding the address of a function bar. And because you have these code pointers and you have memory corruption bugs, attackers can exploit those, find a way to overwrite your code pointers, and redirect the control flow of your program to wherever they want it to go. A lot of bad things can come out of that. This is a really old problem; I think it was first figured out in the '80s or so. Because of that, researchers came up with mitigations, attackers came up with new techniques to bypass the mitigations, then there were more mitigations and more bypasses. It's a cat-and-mouse game.

The first mitigation is write-XOR-execute (W^X) memory. Imagine you're able to redirect the control flow of your program, and you have variables inside your program that the attacker is able to manipulate; say, a description field the attacker can fill with whatever they want. If you have memory that is writable and at the same time executable, the attacker can put whatever code they want into that field, corrupt a code pointer, redirect the program there, and execute whatever they want. People figured out it was a bad idea to have executable memory that's also writable, and they came up with a special bit in the hardware that says: this memory is executable, but it's not supposed to be written to while the program is running, so if the CPU tries to write here, throw a fault.

You also have ASLR, address space layout randomization, which basically randomizes where things are in the memory layout. As you can expect, if you're working with code pointers, you need to know where things are so you can point to them. With ASLR, every time you run the program, it loads things at different addresses, which makes it harder for the attacker to know where to point. It's sort of an obfuscation technique.

Because of these mitigations, people came up with a bunch of ways to bypass them. There's code reuse: you don't need to inject code into the memory address space, you can just use things that are already there. There are functions that, when called out of context, are powerful enough to let the attacker do a lot of stuff. There's also memory disclosure: sometimes you find a way to corrupt a pointer so that instead of writing to a place, you read from a place. That lets you read the memory address space, get those bytes onto the screen somehow, and the attacker then figures out what those bytes mean, reconstructs the entire memory layout of your process, and bypasses ASLR by inferring where things are in memory.

And the latest technique, probably the most powerful one, is what we call ROP, or return-oriented programming, which is basically the idea of reusing executable code. Imagine you are able to, let's say, write to some place in memory and then put a fake

stack there. By having this fake stack in place, you are able to chain pieces of code together, as long as those pieces of code end with a return instruction. I'll give you an example of that. The idea is: you put this fake stack there, and then you somehow find a way to corrupt the stack pointer so that you pivot into the fake stack. Now your program runs through that stack, and that stack holds the addresses of the instructions you want executed. And the cool thing here is that you can jump into the middle of a function; you don't necessarily need to go to the beginning. You can jump and execute, say, just the last two instructions, and by chaining the last instructions of many functions, you can build your own attack payload and do whatever you want, in a way similar to actually being able to inject code.

Another cool thing depends on the architecture. x86 runs unaligned instructions, which basically means you can point into the middle of an instruction and execute from there. So there are instructions inside the binary that were never intended to be there: if you go looking through the binary and evaluate each string of bytes starting at different offsets, not just at the intended instruction boundaries, you might decode completely different instructions. That lets you find a bunch of instructions that were never meant to exist, which can enable a lot of malicious stuff. These pieces of code that we chain through the fake stack, we call gadgets.

Let's see how it goes. Imagine this is your fake stack, which we were able to inject into the memory address space of the process. Keep in mind that this is data, not code, so it's usually really easy to inject a fake stack into the process. Pivoting to the fake stack is a different kind of thing, but it's also doable, especially if you have memory corruption bugs. So imagine you injected this fake stack and pivoted to it, and your normal program is running and eventually hits a return. That return is going to use the address on the top of the stack to return to. So it returns into a place we found in the code that runs a pop rdi instruction followed by a return. What pop rdi does, in x86-64, is look at the top of your stack, take the value there, and put it into the rdi register, which is the register used for the first argument whenever you're calling a function, as defined by the ABI. I'm not going to go too much into detail, but basically, whenever you call a function, your code is generated so that the first argument goes into the rdi register, and the called function expects to find its first argument there. So when this gadget runs, we grab the thing on the top of the stack. The return address isn't on top anymore, because executing the return adjusted the stack pointer one frame down. So we grab this address of a string, a string that we were also able to plant in the memory address space, and pop rdi puts the address of that string into the rdi register. The string we planted is "/bin/sh", which, as some of you might know, is a shell application that lets you run your own programs, your own scripts, your own everything. If you're able to turn a program into an execution of /bin/sh, you basically own the whole thing, with whatever permissions it has.

Then there's a return again, and the return uses the address on the top of the stack. We're one frame lower now, because the pop also adjusted the stack pointer. When we run this specific return, what we actually return into is the function system(), defined by the C library API, which runs a system command on your Linux box. system() takes one argument, and that argument in this case comes from rdi, which is pointing to "/bin/sh". So we're running system("/bin/sh"), getting a shell, and that's basically how you might get owned in a hypothetical scenario. That's more or less how a ROP chain goes. We're going to have questions at the end, but it's really important to get this concept. Does anyone have a question they want clarified before I move forward? No? That's great, because we don't have a

lot of time. Thank you for not having questions. So, because of all this, there was a new mitigation idea, which is control-flow integrity. It turns out I happen to have been working on this thing, I don't know why, for a very long time, because it's super annoying. But it's also super cool. Control-flow integrity, at a high level, is this: we have a program, and this program has a bunch of indirect branches, so it depends on addresses written in memory to know where it should go in specific contexts. What if we were able to limit where these indirect pointers can point to? Say you have a function pointer: you're not going to allow that function pointer to point just anywhere in the address space of the process. You're going to say: these are the set of addresses considered valid for this specific indirect call, or for this specific return.

For forward edges, you can basically use compiler heuristics to do this. It's kind of hard to do it precisely, because if you've studied computational complexity, you know points-to analysis isn't a problem you can solve precisely: it's really hard to figure out all the valid targets for a specific function pointer. But this talk is about shadow stacks, so I'm not going to go into detail here. If you want details about the forward-edge case, the expert is sitting right there, so just ping him.

On the return side of things, we're going to talk about shadow stacks. It's, in a sense, a harder problem, but also, in a sense, a better problem, because here we are able to define the exact target for a return address, specifically by using runtime information. If you have a function A calling a function B, you have a return address, which is the instruction right after the call instruction in function A. Looking at that during runtime, you are able to say: whenever B is returning, it needs to return to this specific instruction after the call inside function A. So it's something where you can tell precisely where you're supposed to return to, not just a set of targets you're allowed to return to; it's computed during runtime.

Oh, thank you for reminding me about the time. So, this started as an academic proposal; I think it dates from 2005, originally from Martín Abadi and colleagues at Microsoft, and there was earlier related work before that. It became popular in academia and then started spreading out. Now you have this supported in the Linux kernel and in a bunch of different places. The idea is that you're supposed to be able to protect against a control-flow hijacking attack even in the face of

arbitrary rights and reads. This is like a really really high bar kind of threat model because I mean it's really hard to protect against arbitrary reads and the academics when they came up with this idea and when they start like studying it and proposing it they had like yeah I mean this is supposed to resist arbitrary rights and arbitrary reads and uh we're going to see in this talk why in practice it's like so hard to set such a high bar and uh why it is like so complicated to achieve. Uh so speaking specifically about shadow stacks now uh just before I switch to Rick uh basically the idea is the following. You have like two stacks. You

have your program stack and you have like a shadow stack and whenever you do a call you put the return address into the regular stack but you're also like going to put it uh onto the shadow stack and then you continue executing your program. Uh the shadow stack is like a memory which is not writable. It's only like writable by co- instructions. So the only uh uh instruction is actually like able to write there is a call instruction and it doesn't get like any argument. So the only thing that's going to like write there is supposed to be uh the that's going to be written there is like the the actual return address. It's not like in depth true because you have

like an instruction to write on it but it's not currently supported. So I'm not going to get into too much detail here. the brick's going to talk about it. But uh my point is in terms of like architecture and design, it's just like something that mimics the actual stack where you keep like a value that you can check for before you're returning. So whenever the function B is returning to the function A, it's going to use the address which is on the regular stack. But before returning, it's going to check the shadow stack, check if both of them match. If that's the case, fine. You're able you're able to return and all good. If these are different, this

means that your actual stack was corrupted. So please stop the execution of the program because something nasty is taking place. It should work pretty automatically, right? Uh Rick's going to tell you otherwise. [laughter] >> So, yeah, I'm going to talk about uh talk a little bit about um what was happening in the the Linux kernel enabling for this feature um to try to get some context for the security research we're going to talk about later. Um so, so yeah, so like Jav said, uh you know, you call and you push the shadow stack, you pop or you return and it pops and verifies from the shadow stack. And this pretty much works automatically when you have a normal
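
The call/return checking described above can be modeled in a few lines of C. This is a toy simulation (the `machine` struct and the two helpers are invented for illustration, not real CET semantics):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DEPTH 16

/* Toy model of shadow-stack checking; names and layout are invented. */
typedef struct {
    uintptr_t stack[DEPTH];  int sp;   /* regular stack: attacker-writable data */
    uintptr_t shadow[DEPTH]; int ssp;  /* shadow stack: written only by "call" */
} machine;

/* CALL pushes the return address onto both stacks. */
static void do_call(machine *m, uintptr_t ret_addr) {
    m->stack[m->sp++]   = ret_addr;
    m->shadow[m->ssp++] = ret_addr;
}

/* RET pops both and compares; a mismatch would raise a
 * control-protection fault instead of returning. */
static bool do_ret(machine *m) {
    return m->stack[--m->sp] == m->shadow[--m->ssp];
}
```

A smashed return address on the regular stack no longer matches the shadow copy, so the return is refused; that is the whole mechanism.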

But it turns out that user space does a bunch of more exotic, rare things with the stack that can confuse a shadow stack implementation. And this is not defined by the hardware at all; it's up to software to decide whether and how to support these operations, because some of them are very close to the ROP that Joao was explaining.

So, a couple of examples. The main one, I think, is user-level threading. You have an operating-system thread with its stack, and user space can create a new stack just by allocating memory, swap to it, and start calling functions, so that while that one OS-level thread is scheduled, you actually have several software threads switching back and forth. Obviously, if you switch to a new stack and start calling and returning, you might return from a place you didn't just call from, and the shadow stack is going to get confused.

Another example is longjmp. This is an API where you can mark a point in your execution, go off and do more computing (maybe you jump to a new stack, maybe you're far away doing something completely different), and then longjmp back to the place you marked: basically, "remember what I was doing here." Obviously this is confusing for the shadow stack, when all of a sudden you completely reset the stack back to some previous location.

There's also sigaltstack, a Linux kernel feature where you configure the kernel to handle signals on a different stack. Your program is executing along, it takes a signal, and it jumps to a new stack; the kernel does this, so the program doesn't even know, it just finds itself executing on a new stack. And lastly, as a sort of catch-all: JITs can do a lot of weird stuff with the stack, and the compilers and tools where you might try to fix these things up don't have a lot of visibility into that.

Stepping up a level from the technical issues, I'll talk a little about what people at the beginning were looking to use shadow stack for, and we had a lot of interest from distros.
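
The longjmp case from the catalog above can be seen in a tiny program. This is a hedged sketch (the `demo` function and frame count are invented for illustration): longjmp discards three call frames in one shot, so three returns that a shadow stack would expect to verify simply never happen.

```c
#include <setjmp.h>

static jmp_buf env;
static int frames_skipped;

static void inner(int n) {
    if (n == 0) {
        frames_skipped = 3;
        longjmp(env, 1);         /* jump straight back to the setjmp point */
    }
    inner(n - 1);                /* three nested call frames */
}

int demo(void) {
    if (setjmp(env) == 0) {      /* mark "remember what I was doing here" */
        inner(3);
        return -1;               /* never reached */
    }
    return frames_skipped;       /* landed here without any returns executing */
}
```

With shadow stacks enabled, glibc's longjmp has to reconcile the shadow stack pointer with this non-local jump, which is exactly the special handling discussed below.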

The distros have an interesting way of doing compiler hardening. There's a bunch of features in the compiler you can turn on that harden the program being compiled, and these compiler hardenings work pretty much automatically. Often they work, though not 100% of the time, but close enough that distros can turn them on underneath a project's build files. So someone has an open source project, a distro wants to turn it into a package, and the distro goes in and optimistically turns these features on. And this works well enough: there are some rare breakages, but those get reported as bugs, and then the distro can say, okay, for this package we'll just turn off this hardening. That was the model.

So the distros wanted to use shadow stack the same way. They wanted to just turn it on optimistically and have it work, and then you'd have shadow stack across your whole distro. You can maybe see where this is going: there's a bunch of things that don't work automatically, and then there's a use case that says "we want this thing to basically work automatically." So this was a conflict.

The glibc direction here (this was not work I did, so I'm representing other people's outlook) was this: glibc has a pretty extensive test suite that exercises a lot of the libc APIs that do this stack-manipulation stuff, and they reckoned that if they could pass the test suite, they'd have enough coverage of the way programs tend to behave to do this kind of distro-wide enabling. So they did a bunch of special shadow stack implementations (longjmp, for example, behaves differently when shadow stack is enabled), and there was a bit in the ELF headers of the binary with which a distro could mark "this binary works with shadow stack" or "this binary doesn't," so the kernel or the loader could decide whether to turn shadow stack on.

So there were a couple of trade-offs here, and a push-pull between the Linux kernel community, the glibc community, and the distros, each wanting a little more of this or a little more of that, which helped tease out exactly where we wanted the solution to land. Compatibility was obviously a very important trade-off, because that's what enables the distro-wide shadow stack use case. But performance comes into play too: some of these operations, like longjmp, are normally O(1) operations, where you just say "reset to this point," the registers get reset, and pop, you're back where you started. But some of the shadow stack implementations of longjmp involve unwinding the shadow stack in an O(n) way. So an application that depends on longjmp being fast, and a few of them do, could be surprised by these long unwinding operations in its fast-switching path.

And then lastly, security is obviously pretty important; it's a hardening feature, so that's one of the main goals. But it depends on how you gauge security. Imagine a solution with a 99% hardening level that only works for 1% of apps: it's not going to be very widespread.

But a hardening solution at, say, a 70% hardening level that works for 99% of apps gives you, in an area-under-the-curve sense, more protection. So with all these trade-offs there wasn't a simple answer for what to do, and we were constantly debating: is this important, or is that important?

This is around the time I got involved in the enabling, and when I was evaluating the whole solution I found one particular corner: the implementation around ucontext. This is glibc's user-level threading API, the thing I talked about earlier where you're switching between software stacks. The implementation said: okay, if you're going to have a new software stack, you're going to need a new shadow stack to go with it.

Now, to switch to a new shadow stack (and I'm going to talk generically about the hardware here, because I think there are up to three shadow stack implementations in different hardware architectures now, and they all work pretty similarly in this regard): you have a token on the shadow stack, a special value, and there's an instruction that says "I want to switch to the shadow stack at this point." The hardware checks the token, and if it passes, it allows the shadow stack pointer to be set to that shadow stack. So to actually start using a shadow stack, you need memory with the shadow stack permission, and you need this special token value in it.

The way the kernel design worked at the time, there was a PROT_SHADOW_STACK: a memory permission, just like read-only or executable. Linux has syscalls like mmap and mprotect that let you create memory or change it to different permissions. The way glibc used this was: first it mapped some writable memory, then it wrote the special token value into it with a normal store, and then it called mprotect to change that region to shadow stack memory.

When I looked at this, I wondered about the window of time when the memory was writable by normal instructions. glibc was writing the token it wanted, which is the "good" write to the shadow stack; but what if some other thread, something with the ability to write at that moment, wrote bad stuff into the shadow stack?

I talked about the compatibility trade-offs earlier, and this PROT_SHADOW_STACK design really gave user space a nice safety valve: if it got itself into some sort of bind while chasing a compatibility goal, it could always toggle the shadow stack writable, fix it up however it wanted, switch it back to shadow stack, and go about its business. But at the same time, we didn't know whether this was a real problem; sometimes there are hypotheticals that just never come up.

But I knew there's an unaffiliated group of people who care about kernel CFI, and Joao is certainly one of them. So I went to him with a question: basically, is this a real thing, is this exploitable? Because I wanted to know whether this was an important trade-off to make, since we were going to have to take some cuts on the other trade-offs. So, you want to talk about yours?

>> Sure. Okay. So I started looking into the thing Rick was working on, to see whether I could come up with at least a proof of concept of why this might be a problem in the future, or why it's not as strong as the threat model says it should be. I started by looking at makecontext and swapcontext. That's a glibc-supported API for creating different execution contexts within your process. Don't think about parallelism here: you have your program running, and at some point you want a different context of execution that you can jump into and jump back out of while your program runs. Say you want to implement coroutines, things like that; this is useful for that.

Because you're creating a new context, you're going to need a shadow stack for the new context, so I started looking at how that shadow stack gets allocated. makecontext is the function that builds the whole data structure for your new context; there's this ucontext_t structure where it stores all the data. Before you call it, you first allocate the regular stack for the new context yourself: you malloc some memory, and you pass the pointer to that chunk to makecontext. Then makecontext starts prepping everything, puts the data into the ucontext_t, and eventually allocates the memory for the new shadow stack. It makes it writable memory, because it needs to put the token in there, the token Rick was just talking about. So it allocates a writable chunk, writes the token into it, and then does the syscall to transform it into shadow stack memory; it can't be used as a shadow stack before it's turned into an actual shadow stack page.

Then it does some tricks with call instructions to make sure it puts __start_context in place. Imagine your context runs and eventually finishes executing: it needs to return somewhere, but there's nothing below it, so it returns into this __start_context function, which is basically the function that handles the context ending its execution. So makecontext puts everything in place so the context is ready to run. Beyond that, there's the swapcontext function: your program is running, and when it wants to jump into the different context, it uses swapcontext to get to the other execution thread; and when the context finishes running, it returns through __start_context, as I just mentioned.
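
The ucontext API under discussion looks like this in use. This is a hedged sketch (the coroutine body and counters are invented for illustration); with shadow stacks enabled, glibc additionally allocates and tokenizes a shadow stack for the new context inside makecontext().

```c
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static int steps;

static void coroutine(void) {
    steps += 1;                       /* running on the malloc'd stack */
    swapcontext(&co_ctx, &main_ctx);  /* yield back to main */
    steps += 1;                       /* resumed later */
}   /* falling off the end continues at uc_link (main_ctx here) */

int run_demo(void) {
    char *stk = malloc(64 * 1024);    /* caller allocates the regular stack */
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp   = stk;
    co_ctx.uc_stack.ss_size = 64 * 1024;
    co_ctx.uc_link          = &main_ctx;
    makecontext(&co_ctx, coroutine, 0);

    swapcontext(&main_ctx, &co_ctx);  /* enter the coroutine: steps = 1 */
    steps += 10;                      /* back in main:        steps = 11 */
    swapcontext(&main_ctx, &co_ctx);  /* resume to the end:   steps = 12 */
    free(stk);
    return steps;
}
```

Note that the caller only allocates the regular stack; the shadow stack allocation is hidden inside makecontext, which is exactly where the race lives.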

So if you look at the shadow stack setup inside makecontext, these are more or less the steps. First it allocates writable memory; then it writes the shadow stack token in there; then it turns the page into a shadow stack page; then it saves the shadow stack pointer into the context. The shadow stack pointer works like RSP does for the regular stack, pointing to the top of our shadow stack, and it goes into this ucontext_t structure. Then it pivots onto the new stack and the new shadow stack and does some tricks: basically it runs a call instruction in front of a jump instruction, so the address of the jump lands as the return address, and the jump goes into __start_context (not super relevant, but anyway). Then it pivots back to the original stack and original shadow stack and continues running, until swapcontext is eventually called to execute this new context we just created.

So this is all good, right? It's supposed to work, it's beautiful, it's cool, life's great. But not really, because there's actually a race condition here, if you were paying attention, if you're an attentive reader. If you have another thread running in the same process (and now I am talking about truly parallel threads), then during that window when the shadow stack is still writable, a different thread can just come and write into the shadow stack before it actually becomes a shadow stack. So we're able to put a bunch of trash into the shadow stack before it's usable. In those seven steps I gave you, that's where the race condition is: during roughly the first three steps, a different thread can attack it.

Okay, but is this really dangerous? Is this really a problem? Think about how the stack is used: before a return address is consumed, it first has to be written. You do a call, and the call writes to your stack; then you do a return, and the return reads from the stack. So all the data the shadow stack ever reads was written into it after the shadow stack became non-writable. That means whatever we write during the window isn't really going to be used, because it gets overwritten by the chain of calls before it's consumed. So can we really use this in an exploit scenario? Is it something to be concerned about, or just security paranoia?

Well, except that the shadow stack pointer is also stored in the ucontext_t, and that structure is not in the shadow stack; it's in regular memory. That means we can also corrupt the shadow stack pointer, and by doing that we can make it point above the trash we just injected, which means our values might survive even after the shadow stack starts being used.

How would that happen? Let's take a look. Imagine we're racing the shadow stack setup and putting our own ROP chain in there. You have your regular stack, your shadow stack, and the ucontext_t structure, which holds the shadow stack pointer. We won the race and wrote a ROP chain into the shadow stack, something really evil that's going to take over the world for us. The address of __start_context is set up on the stack and on the shadow stack, the token is there (written by makecontext), and the shadow stack pointer points to the top of the shadow stack.

Now function A calls function B: that writes to the regular stack and to the shadow stack, and this call chain overwrites our ROP chain, the red thing. Then B calls C: our evil stuff gets overwritten again, and it doesn't work out. When things start returning, the stack and the shadow stack get compared, and everything matches. So we weren't really able to do anything meaningful here, right?

But now corrupt the shadow stack pointer, which is in writable memory. We corrupt the shadow stack the same way, putting our ROP chain there, but we also corrupt the shadow stack pointer, so it no longer points to the top of the shadow stack; it points one frame above it. Now, when function A calls function B, A's return address gets written one frame above the things we wrote, which means our contents survive this series of calls.
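
The surviving-payload trick can be reduced to a deterministic toy model. Everything here is invented for illustration (the slot layout, the addresses, a single call/return pair standing in for the call chain); it only shows why bumping the saved shadow stack pointer by one frame lets the racy write outlive the calls.

```c
#include <stdint.h>

#define SLOTS 8
#define START_CONTEXT 0x401111u   /* stand-in for glibc's __start_context */
#define GADGET        0xbad00bad  /* value the attacker races into the page */

/* One shadow stack plus its shadow stack pointer (ssp). */
typedef struct { uintptr_t slot[SLOTS]; int ssp; } shadow;

/* The context runs: one call, its matching return, then the final
 * return when the context finishes.  Returns the target that the
 * final ret consumes from the shadow stack. */
uintptr_t final_return_target(shadow *s, uintptr_t ret_a) {
    s->slot[s->ssp++] = ret_a;    /* call: pushes over whatever is there */
    s->ssp--;                     /* matching ret: both copies agree */
    return s->slot[--s->ssp];     /* context finished: last ret pops here */
}
```

With the saved ssp left benign, the call overwrites the racy gadget and the final return lands on __start_context; with the saved ssp bumped one frame up through the writable ucontext_t, the call writes above the gadget and the final return consumes it.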

So with one more memory write you also override the regular stack: you override the slot holding the address of __start_context there too. Now you have function A calling function B, and when B returns to A, it compares the two copies of A's return address; that works out fine, because they match. But when the context stops running and is supposed to return to __start_context, it actually returns to a place we control. So what this means is that we're able to bypass the shadow stack policy, and the threat model it's supposed to enforce, with just this handful of writes. That's pretty much how you break the shadow stack. Or how you used to.

Keep in mind this is a PoC, and we used pretty strong primitives to write it: we assumed we had arbitrary writes. What we were trying to establish was: are we really being loyal to the threat model here or not? And what we showed is that the threat model was not being kept; we had the arbitrary writes and we were able to bypass the policy. If you bring this to the field of actually writing exploits, keep in mind it's going to be really hard to build an exploit on this against servers. But if you're talking about, say, consoles or mobile devices, things you physically control, can manipulate, and eventually want to unlock, this becomes a really serious problem, because there it might not be that hard to implement an exploit based on this.

For implementing the PoC, we basically did a brute-force thing. We wrote a loop that kept running the program and trying to land the right writes at the right time, so we'd hit the window at the right moment. We had to keep adjusting the timing; timing is really hard, and for a race condition you keep trying until your write lands in that specific memory at the moment it's actually writable. Before or after that, you just end up with a crash. But we kept trying, and honestly it didn't take many minutes before we got there.

What this proves is that the academic CFI guarantee was not being kept at that moment, and because of that we had a couple of discussions saying: perhaps the glibc way of implementing this is not the way we want to move forward, and we might want a design that doesn't have this race condition sitting there waiting to be a disaster. Want to jump in?

>> Oh yeah. So after we looked at this PoC, we said, like Joao said, it was short of a full application

that was exploited, but it was enough of an example to say: okay, this is not a completely hypothetical thing. So we ended up changing the kernel design to get rid of PROT_SHADOW_STACK, and we added a new syscall. I talked earlier about the mmap and mprotect syscalls for working on memory; the new syscall, map_shadow_stack, lets you map a shadow stack with specific values pre-provisioned in it. So you can ask for the token, and the mapping pops up with the token already there; it never goes through a writable stage. The syscall lets you ask for certain specific values the kernel knows how to write. At first it was just the token, but the ARM solution added a few more flags, so it's grown since then. That's what we ended up doing as a result of this.

>> You want to go to the next slide? >> Oh, sure. >> Oh yeah, that's me. >> That's you. So, part of the genesis of this talk was Joao and me talking about all of this.
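
The fix just described, the map_shadow_stack() syscall, can be sketched like this. The wrapper is illustrative: the syscall number shown is the x86-64 one, SHADOW_STACK_SET_TOKEN is the flag that asks for the pre-provisioned token, and on kernels or CPUs without shadow stack support the call simply fails.

```c
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453           /* x86-64 syscall number */
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1ULL << 0)  /* pre-provision the restore token */
#endif

/* The kernel writes the token itself, so the mapping is never visible
 * as ordinary writable memory and the glibc race window disappears. */
void *try_map_shadow_stack(size_t size) {
    long ret = syscall(__NR_map_shadow_stack, 0, size, SHADOW_STACK_SET_TOKEN);
    return ret == -1 ? NULL : (void *)ret;
}
```

The design point is that token provisioning moved from a user-space read-modify-protect dance into a single atomic kernel operation.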

We thought this was maybe a bit of an interesting technical story, but also maybe a good example of how security researchers and engineers can work together well, so I'll try to pull some lessons from my side. One big lesson, obviously: it's good to engage the security researchers early, during the design. This stuff wasn't upstream yet, which meant we could still adjust it; that's much easier than changing it after it's upstream, because the kernel is a stable API. If something turned out to be a bad design, we'd still have to support it even after we made a new solution.

Another thing: I think it helps to ask researchers to probe the design, to ask them for specifics. I sometimes see people go to security folks and say, "here's my really complicated design, can you take a quick look and tell me if it's secure or not?" It's almost funny, because for security researchers to evaluate something like that, they need to actually spend time analyzing and probing it. So I thought it helped, instead of asking Joao "is this secure or not," to say: hey, can you exploit this? Show me the best exploit you can do against it, and we'll take a look and see whether it seems reasonable. [snorts]

And lastly: if you're a person who knows about security, maybe you're a super genius, you may think you could just do this stuff yourself. But if you ask someone else to look at your design, they're going to bring more of an

adversarial perspective than you can really bring yourself, no matter how good you are. So I think it really helps to have a dedicated security researcher give this stuff a hard look; you're going to get better analysis than you could do yourself.

>> Okay. And from my side as a security researcher, real quick, because I have just two minutes. First: be available to support others. It won't be possible for engineers to engage you early in development if you're not available, or if you need a big chunk of time, or a bunch of requirements gathered first, before you jump into something. If you're there, make sure you have time to engage with them and put time into the project, so you're actually able to help them. That really is our work: I like to think that if we were in an MMO game, we'd be playing support, not tank or DPS. We are the support; we help them build this kind of stuff.

Second: follow the product design closely and early. If you can get in at the design stage, you avoid a ton of problems. Fixing things after they're deployed is a mess, because you have products in the wild actually using the thing, and that gets really complicated. So I always prioritize looking at things before they're actually products out there.

Third: keep the threat model in mind. As a security researcher, the threat model is really your north star; that's where you're supposed to look. Don't overthink attacks, don't overthink exploits, don't overthink all the things you'd need in place to declare something secure or not. Just ask the threat model: is this thing supposed to resist an arbitrary write? No? Then that's how it's supposed to work. That makes our work much simpler, in the sense that I don't need to find an actual vulnerability and build a full exploit to go and say this isn't the best design. Keeping the threat model in mind makes things simpler, more compartmentalized, and much easier for other people to digest, too.

And finally: understand the trade-offs across all the requirements. We're security people, we have security in mind all the time, but there are other requirements besides security, and everything has to work. It's not worth having a 100%-secure product that doesn't work, right? So, with that, we're done a few seconds late. Thank you so much for being here, and I guess we're open for questions. [applause]

>> I have a question. What's the state of the art in terms of defeating shadow stack, for Linux versus Windows? You discussed ROP as one of the ways to defeat DEP, and it seems like shadow stack is fairly new, within the last five or so years.

>> Yeah, that's a good question. Like you said, it's new, and like Joao was saying, there's a cat-and-mouse game; shadow stack hasn't been explored as much yet. But when you talk about "defeat," keep in mind it's intended to be a hardening mechanism. I think as security researchers start to look at it more, there are things we can do on the kernel side to adjust those trade-offs a little, and I think there's going to be more research, and more changes like this, in shadow stack's future.

>> Yeah, and from my side, I tend to think that CFI, regardless of how tightly it sticks to its threat model, is still pretty strong in the sense that it imposes a lot of cost on attackers writing exploits, and that's a great thing either way. But I've seen research toward breaking CFI, mostly exploiting the fact that page tables are not particularly well protected at this point; that's an easy target for attackers. There was a talk at Black Hat, maybe two years ago, where a guy broke forward-edge CFI in the kernel by writing to the page tables: he messed with the translation of virtual addresses, and by doing that he was able to build whatever chain he wanted. That was a cool one.

If you think about the specific thing we just showed: we're not breaking the shadow stack policy itself. The policy is still very tough; we're breaking how it was implemented. If you have this thing working properly, it's really good, especially shadow stacks, which are a one-to-one match for returns. So you basically need to explore sideways vectors for getting into the shadow stack and find a way to break it that way. And something else people have been talking about a lot is non-control-data attacks: sometimes you can do a ton of damage to a program just by corrupting its data structures, without doing the actual control-flow hijacking at all. >> True, right?

>> Joao, you had said that servers might be less vulnerable than client systems. Could you describe why you felt that way?
>> Because for this specific attack you need three arbitrary writes: first you write to the shadow stack, then to the ucontext, then to the actual stack return address. I'm not a super experienced exploit writer, I've written a couple but not many, but in my experience it's really hard to actually get three arbitrary writes in the wild and pull something like this off in an environment you don't control. Even if you do, you still need to bypass ASLR, and in the wild that's harder; it happens, but it makes things more complicated. And then you have to time all three writes in an actual production environment. I don't want to say it would never happen, and I'm not saying it's super safe; I think it could happen. I just think it's much harder to weaponize in the cloud server scenario, compared to having access to the machine, where you can probe things and get addresses and all that.
>> How many tries did it take in the PoC?
>> I think it was around 10,000.
>> 10,000. That's noisy.
>> Yeah. Basically, running it on my PC in a loop, it took two to three minutes until it could actually break the thing, and about two minutes of that was self-adjusting the timing to account for everything else running on the machine that was influencing it. So usually in two to three minutes on the local machine I could bypass it. But I had the three arbitrary writes, and I knew the addresses I had to write to, which made the PoC much simpler to implement.

Hello. I found this talk incredibly interesting. I just had one question. You were talking about makecontext and swapcontext, and earlier in the talk you mentioned longjmp. If you create that coroutine but then want to go back to your primary stack, does that use longjmp, and was there any investigation done there?
>> Yeah. So longjmp has had a couple of different attempts at making it more compatible, but I think today you can still run into problems with the way the upstream implementation works. There's an instruction on x86 called INCSSP, and the other architectures have something similar, that basically lets you unwind the shadow stack. So if you're on the same stack, the longjmp implementation can INCSSP its way back to the place you were. And if you're on another stack already, there was a scheme in mind, I don't know if it actually landed upstream, to look at the stack you're going to jump to and search, using a normal read, for a token. I didn't talk about this, but when you leave a stack, you can also leave a token behind so you can swap back to it. So you can search for the token on that stack, with the expectation that if you left the stack, you must have left one there; software has to actually make sure that happens. Then you can find it and INCSSP back from there. But even that has problems, because if you swap at the end of a stack, you could take a signal while you're unwinding and then overflow the shadow stack. So there's another feature, and actually I have a backup slide. I don't know, how much time do we have left? We got time.
>> Yeah. So there's another feature, and this is where I think it could be interesting to watch shadow stack evolve. Someone asked about the state of attacking shadow stack, and I also think about the state of shadow stack compatibility that we're going to find once this thing is used in more and more applications. There's an optional feature, on x86 it's called WRSS and the Arm version is GCSSTR, that gives you a specific instruction that can write to the shadow stack. It's a privileged instruction, and the idea is that if you enable it, you put those instructions only in special places, like longjmp for example, and then you can massively simplify the schemes I was talking about, where you're searching for tokens and all of that gets fairly complex and slow. With these instructions you could just write a token and switch right back to it. And we don't really know, we were actually just debating this on the mailing list the other day: how safe is this? On the compatibility side it's a huge win, because you can just write tokens where you need them. But does it come into play in security? We don't know, and more research could guide that. If someone says, hey, I tried to attack this thing and didn't find any problems with it, that would be a reason to think we can enable even more apps with shadow stack by relying on it. Does that answer your question?
>> Somewhat. I was wondering if makecontext or swapcontext possibly use longjmp.
>> Oh, no, it doesn't. It does something similar, but it's got its own assembly, I think. And we were talking about glibc specifically; that was the first libc that got shadow stack support. But there are other libcs that could do things differently. It's not defined by the kernel how you have to do this; the libcs could do it however they want. They could use a longjmp if they wanted to.
>> Okay, thank you.
>> Just to add a little bit to your question: longjmp and setjmp use the regular stack, whereas makecontext actually creates an entire new context for the thing to run. So it's a little different. But just to give you a pointer here: there's a paper from, I think, four years ago called CHOP, from an academic conference, I won't remember which one. That paper describes the problem of trying to exploit shadow stacks in the exception handler context for C++. It's a similar kind of problem: you need to unwind the shadow stack to handle the exception properly. If you take a look at that paper, you'll sort of figure out all the questions you have, how that works, and what the bypasses were.
>> All right folks, thank you very much.
>> Thank you.
>> Please give a great warm round of applause to Rick and Joao. [applause] Excellent. All right, we'll take about a 10-minute break before the next one. Starts back up at 1:00. Thank you very much. [music]


Okay folks, if you could please take your seats, let us introduce the next speaker. The next speaker is James. James is a web and cloud pen tester at Anvil Security, based in Seattle, and does a bit of API security, hardware hacking, etc. And he spends a little too much time on GTA Online, so here's a good quote: "Only those who die have legacy. The ones who are still living do not." That's from GTA 4, a good quote from a while back. So with that, let's put our hands together, a warm welcome for James. [applause]
>> Thank you. So, hope you're all having a good conference so far. My name is James. I work for a pentesting firm up in Seattle called Amble Secure; that's the slide design, and I guess that's really all you need to know about me. I have a lot of material and I tend to talk fast, so if you want to find the slides to follow along, you can check the talk details on the BSides schedule page. This talk is about a couple of run-ins with the DMCA, and a few other stories, that all revolve around a couple of implementation quirks in GitHub. Since this is a BSides audience, you probably know what GitHub is, but you never know for sure. So in case you need it: Git is a version control system for text files. You put these files in a special folder called a repository, and each version you make is called a commit. A commit has a cryptographic hash instead of a version number, and that's really helpful for various reasons. One of them is that you can have multiple branches of history, which is really helpful when you have one product and different teams working on different features: they can work on their own stuff without constantly syncing with each other or stepping on each other's toes, and once they're done, they merge into the default branch. And that's Git. GitHub is probably the most popular place to host Git repositories. Anyone can view or download a public repository, and only authorized users can push commits to a repository: that's you and anyone you authorize. If you want to suggest changes to a repository you're not in, you click that fork button, which makes a copy in your account, and it has a link back to the original, the upstream, as we call it, as you can see in the top left. You make your commits in your fork, then you submit a pull request to say, hey, merge in my changes. So if you needed that, hopefully it was helpful. And for everyone else: you caught the part where I lied, right? Because it turns out the fork button does not make a copy. GitHub forks are all the same repo as the upstream; they all share the same objects. And of course you already knew that, because GitHub has this documented, and everyone reads the docs. If you were here for the previous talk about tj-actions, they touched on this a little; I'm not going to talk about CI/CD stuff so much, but it's the same stuff under the hood. This might be a bit to wrap your head around if you haven't seen it before, so I'll start simple, with the first time I ever messed around with this feature. There was this tool called Wappalyzer. It's still around, just a little different now. It's a browser extension that tells you what tech a site is built with, and that's really helpful for pen testing, because you might find some low-hanging fruit: hey, update nginx, or you've somehow still got a jQuery version with known vulnerabilities, and you can see if it's exploitable. Something really cool about it was that it had local detection rules that ran in your browser. That meant you didn't have to wait for a scraper to come by and update a site; you could run it live, and because it ran in your browser, you could check non-public sites, like UAT sites or internal sites. It was probably the second tool I was introduced to, after Burp Suite, and I found it really helpful at my first pentesting job. But eventually the nature of my work changed, I changed jobs, and I didn't need it for a while, so I ignored it. Then a few years later, I finally had an engagement where it made sense to try Wappalyzer again. So I went to the GitHub repo

for it and was going to download it, but it was gone. It turns out the author deleted the Wappalyzer repository. And this was kind of sad, but I get it. I mean, corporations see open source code and they're like, "Hippity hoppity, that code's not my property." This was mostly a solo gig; it was open source, so there were contributions, but he was writing most of the code. He was trying to monetize it with his own paid service, but he was basically working for free to make his competitors better. So yeah, I can see why he wanted to delete it, but it would have been nice to at least keep the historical commit history. Sure, remove your commits, but I actually contributed a few rules. What about my commits? Well, okay, what about my commits? Because if I made contributions, I probably had a fork, and I never deleted it. It's still there, just a bit out of date; you can see the last commit was from three years ago. So at least I had what I wrote, but I thought it would be nice to have everything up to the point where Wappalyzer got deleted. And I already knew that, in theory, my fork is the same repository as the original Wappalyzer repo. So if I can just find the last commit hash that was published before it got deleted, that commit should exist in my repo somewhere. To find that commit hash, there's this cool website you've never heard of before called archive.org. You just put in the URL, and there we go, we've got a commit hash, in red. Then, going to my fork, that's the most recent commit, in blue. At the top is the URL to see the state of my repo at my latest commit, and all I have to do is swap out that commit hash for the Wappalyzer commit hash. And there we go: that's not me, that's the original Wappalyzer author. So we've got the code, except that URL at the bottom is kind of long, and I don't want to have to bookmark it. Normally when you work with git repos, you clone them to your computer, and if I clone this, it's not going to have that commit hash. So what can I do about that? Let's talk a bit more about git branches. Branches, while they're logically a list of commits, aren't actually stored as a list of commits, because each commit knows which commit came before it. A branch just needs a pointer to its most recent commit; you check the one before that, and the one before that, and you have the whole history. Whenever you make a new commit, it just pushes the head of that branch forward one. So what I want to do is take my commit hash from down there, go forward in history, and see if I can sync it up with Wappalyzer. And that's actually super straightforward. I clone my fork of the repo. A clone only gives you the hashes that are part of the branches in your fork's view of the repository, so I had to explicitly ask the GitHub server for the full commit hash I found earlier, because it didn't come down during the clone. Then I can turn around and say: okay, git push origin, set this commit to be the head of master. And there we go: I go to my fork page and it's there, all the up-to-date Wappalyzer code. Obviously it's been a few years, so it now says two years ago instead of three years ago. But this is practical; I really got some use out of this. There's more fun you can do with this property, though, and I did promise something in the title of my talk. So, let's talk about Nintendo. Um,

you know who they are, and you probably know they're very litigious. You may have caught some headlines last year about two high-profile takedowns of Switch emulators. This part is about Yuzu, which was the one that happened earlier in the year; that's what these headlines are about. When I saw these news articles, I thought: okay, that's sad, I kind of care about game preservation, but I don't play Switch games, so it didn't matter to me so much. But I completely forgot, I'll blame it on the ADHD, that I actually had a fork of Yuzu just sitting around. I don't know why; I never contributed anything, it just sat there. So, surprise surprise, I got an email from GitHub: hey, we got a DMCA takedown notice for your repo. And I thought, okay, that's fine, I was just going to delete it. But when I looked at the email again, it actually said that I could make the requested changes instead. And I was like, okay, well, what would the requested changes be? With the Wappalyzer thing, I basically took a commit down here and pushed it forward in history. What if I did the reverse, and rewound history to the first commit ever? Then I would just have a readme file and nothing else, and there technically wouldn't be any emulation code in there. So why not try that, right? To find the first commit, I run git log --reverse, I get a commit hash, and then I do the same command as before: I give it the commit hash, colon, master. I have to add --force, because normally git will say, this is a terrible idea, why do you want to do this?, since it's theoretically a destructive action. So I do the push, I go to my fork, and there we go: there's just a readme, nothing else. I did still have the commit hash from earlier, so I checked the URL for that, and the code was still in there. But I can't do anything about that; I did the best I could. And I mean, GitHub was going to know what I did, but I pretended I didn't know that. I just said: hey, as requested, the repository no longer has any commits that reference Nintendo hardware or software. And they said, okay, thanks for letting us know, we'll reach out when we're done. And this was a long 24 hours, waiting for them to figure this out. But no, they just said: thanks for letting us know you made changes.
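Both git moves from the talk, pulling a ref-less commit back into a branch and rewinding a branch to its first commit, can be sketched locally. The repo and branch names here are placeholders, and a local bare repo stands in for GitHub; GitHub lets you fetch any commit shared across a fork network by hash, which the `uploadpack.allowAnySHA1InWant` setting approximates for a plain git server:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Bare repo standing in for the GitHub-hosted fork network.
git init -q --bare hub.git
git -C hub.git config uploadpack.allowAnySHA1InWant true

git clone -q hub.git fork && cd fork
git config user.email dev@example.com
git config user.name dev
git checkout -qb master

echo readme > README.md && git add . && git commit -qm "initial readme"
echo code   > app.js    && git add . && git commit -qm "add the actual code"
git push -q origin master

latest=$(git rev-parse HEAD)
first=$(git log --reverse --format=%H | head -n 1)

# Yuzu-style rewind: force the branch back to the very first commit.
git push -q --force origin "$first:master"

# Wappalyzer-style recovery: the newer commit no longer has a ref on the
# server, but it is still fetchable by hash and can be pushed back as the
# branch head (a fast-forward, so no --force needed).
git fetch -q origin "$latest"
git push -q origin "$latest:master"
```

On GitHub the same two pushes are what the talk describes: a force-push of `<first-commit>:master` to leave only the readme, and a push of `<recovered-hash>:master` to restore a branch from a hash dug up on archive.org.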

Um, and so I was patting myself on the back: I can't believe I pulled a fast one on GitHub, that's really cool. And that's how the story ends, basically: they went to Nintendo and said, hey, James made changes, and Nintendo said, okay, so we need to delete Yuzu and every single fork. Actually, all the forks but James's, because James's is fine. No. I don't know why I thought they would care. Now, I didn't actually show you the DMCA takedown notices, just some of the emails, and I don't have to, because in the interest of transparency, GitHub publishes all the DMCA notices and counter notices they receive, and they put them all up in a central place for the public to see. You might call it a repository. [laughter] It has a fork button. Maybe you see where this is going. [laughter] Because my manager did, and he was like, hey, please stop. [laughter] [snorts] At first I thought, come on, man, let me have some fun. But I had to check, and it turns out Nintendo's office is within a 30-minute drive of my desk. So maybe I don't want to poke the bear, right? But I'll make it up to you; I did promise some DMCA shenanigans. So let's go back a few more years. In 2020, the RIAA submitted a takedown notice for youtube-dl, a tool you probably know: it downloads stuff from YouTube and other sites. They alleged that it was bypassing copy protection, which is not true, but GitHub legally had to comply, and that was not a very popular move. There was a lot of discourse on Twitter about it, but some maverick out there decided to take it upon themselves to push the entirety of youtube-dl's commit history to the DMCA repo, which is probably the funniest possible reaction you can have to a DMCA takedown. Pretty based. Now, a month later, GitHub did reinstate the repository, and they even made a big old blog post about it: yes, we're standing up for developers, we have a whole new process to evaluate DMCA takedowns based on circumvention. They even announced a legal defense fund. That's cool, but more importantly, it means I can replicate that maverick's work and it won't be breaking IP law. So how do you do that? Again, really simple. Assuming you've already forked the DMCA repo, and you also have a copy of youtube-dl, you just change the remote URL and push. You have to use --force again, and there you go: there's youtube-dl in my fork of the DMCA repo. [snorts] You start with the commit hash URL, swap out the username, and there we go, it's in GitHub's own repo. Now, of course, I was done with this, so I figured I might as well delete my fork. And it's still there; I can't get rid of it. It turns out deleted commits aren't deleted, they just get hidden. So this is kind of a tool for bulletproof file hosting, maybe. I didn't have to upload legal source code; this could have been Switch emulation code, this could have been anything, really any public repository. As long as you're okay with having to find the commit hash, or just link it, you can hide anything you want in any public repository. And you can do it as anyone, because of commit authorship: commits all have an author name and email address. You can view this in git, or in GitHub you can add .patch to the URL to see it. Even for users that don't have a commit, there's a default email address you can use, and GitHub will match users to commits by email address. And you can pick whatever you like. So: my name is Linus Torvalds, and that's my email address, and Linus made a commit in the WSL repo where he removed all the bad code. Just to clarify, that is his actual GitHub account, not a fake one. And I've got another story for you. Did you know that Tim Sweeney of Epic Games accidentally committed a Fortnite V-Bucks generator to an Epic Games repository? It's definitely not malware; you should definitely try it. Got another story, actually: Vitalik Buterin of Ethereum fame added this little Easter egg into an Ethereum repository, and you know it's actually him, because this repository is archived, so I didn't just commit this last night. It says: hey, thanks for exploring the code, click on this link for a small reward. It's definitely not a phishing link. Yeah, I think phishing is possibly an opportunity here. I don't know if it would work for a general audience, but maybe it would do well in certain verticals where people think they know computers but really don't: gaming, blockchain, vibe coding, that kind of thing. But how do you find these commit hashes, right? How do you see what stuff people have hidden where? You can't really brute force the hashes, because they're really long; that would take forever, and GitHub would ban you before you even got close. But here's a fun fact about git and GitHub: they also support short hashes, where as long as you provide enough characters that no other commit starts with them, it'll figure it out for you. And you can indeed brute force four alphanumeric characters; it's not that bad. The secret scanner tool TruffleHog has a built-in mode to do this. But to try this out, we need an example. So, if you work for AMD, please don't hurt me. Back in August, they accidentally committed some confidential source code to one of their open source repos, and they did the right thing: they rewrote history to remove those commits. But as we know, commits don't get deleted. Finding them is actually pretty straightforward using TruffleHog: there's a github-experimental subcommand, you run it in object discovery mode, you give it the repo, and you just let it run. It brute forces short hashes and tries to find the long ones. The time depends on how many commits are in the repo, and for the FidelityFX SDK specifically, for some reason, there are only like seven public commits, but AMD rewrites the hell out of their history.

And so, a full scan takes like 200 hours. But I got super lucky, because as TruffleHog works through the list, it builds up a text file showing the hashes it has found so far, and would you believe the hash I was looking for was number three in the list? So it only took a few hours, and there we go. That little "internal" folder you see is how you know this is internal, confidential, proprietary source code. And they can't do anything about it; it's just there forever. I don't know what to tell you. Now, everything I've shown you so far you can reproduce, except for this, and that's just because TruffleHog is a little bit too smart: object discovery mode doesn't flag commits that normal mode can find. That makes sense, because brute forcing is super expensive, and TruffleHog can actually scan pull requests, which is one of the cool things it can do. The reason it can do that is that GitLab and GitHub actually publish refs for pull requests; you can see the whole list of refs just by running git ls-remote, that's an example right there. So if a commit is in a pull request, object discovery mode won't flag it, and you can't find this particular commit through object discovery mode because someone thought it would be funny to make a pull request adding the commit back in. So I guess the moral of the story is: if you're looking for secrets and juicy information, well, you can revoke API keys, but you can't really revoke source code, so you should definitely check the pull requests before anything else. I have that slide in there just to point out that that's not their commit. So, in summary, two quirks: GitHub forks are all the same repository as the upstream, and GitHub doesn't delete commits. And some shenanigans you can do with that: you can make hidden commits anywhere, as anyone, for piracy, trolling, malware, phishing, or more creative things like CI/CD compromises, I guess. You can also find deleted commits and secrets by brute forcing short hashes. So, some takeaways: you might want to check your threat model, because if the risk of accidentally committing something that you cannot remove, short of nuking your repo and every other fork of it, is not a risk you can accept, you may want to reconsider using GitHub. And I've heard of people using this trick for bug bounties and stuff, so have at it, but have fun responsibly. I am not over time, that's cool. Anyway, I will open the floor to questions.

>> [applause]

>> Do you know if, when you delete your account, your commits go away?
>> From my understanding, what happens is it deletes your repository, and if the upstream repository gets deleted, the oldest fork becomes the new upstream. So, no.
>> Do other git forges behave in the same way, or is this a GitHub-specific quirk?
>> From what I can tell... I wanted to look, but I ran out of time, because of ADHD. But I'm assuming it's a GitHub-specific thing, because this has been publicly known; Truffle Security wrote a few blog posts about it, and I feel like a security team trying to give this a whole scary vulnerability class name would have tried to see whether it works on other git forges as well. So I'm assuming it doesn't, but maybe that's something to double check.
>> I was under the impression, and maybe you just proved this wrong, that GitHub support could, if you had GitHub Enterprise, actually fully delete a commit from history. Is that just fundamentally not possible?
>> I was hoping no one would ask that question. Technically it should be feasible; git itself has tools to do that. It's just that, in practice, I've never, ever seen it. I don't know if that's something specific to Enterprise or not, but really, the thing is I've never seen it happen. So if you wanted to throw up some illegal file somewhere in someone's repo, it's not going to get removed unless someone happens to find it and pisses off Nintendo or GitHub enough. But yeah.
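The "git itself has tools to do that" point is easy to demonstrate locally: a commit dropped from history stays in the object store until the reflog is expired and unreachable objects are pruned, which is the server-side step a forge would have to run to truly delete it. A minimal sketch in a throwaway repo:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email a@example.com
git config user.name a
git commit -q --allow-empty -m "keep me"
git commit -q --allow-empty -m "pretend this one is confidential"
doomed=$(git rev-parse HEAD)

# Rewrite history past the commit, the "remove it from the branch" fix.
git reset -q --hard HEAD~1

# The object is now unreachable from any ref, but still present.
git cat-file -e "$doomed" && echo "still in the object store"

# What actually deletes it: expire the reflog, then prune unreachable objects.
git reflog expire --expire=now --all
git gc --quiet --prune=now
git cat-file -e "$doomed" 2>/dev/null || echo "object actually gone"
```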

>> How likely do you think it is that GitHub will find this video and try to take it down in some way?
>> Um...
>> Or, like, fix the behavior of the repositories?
>> I think they should fix it. I understand why they do it, but I think it has some unintended consequences. I would like to say I would never do a cyber crime, just so you know. But this is behavior they document, and it's something that's been covered so many times; they've probably gotten a million duplicates to the bug bounty because of it. So this is something they're either stuck with or have doubled down on, and I don't think it will change anytime soon. But please, GitHub, if you see this, prove me wrong. I mean, it is fun to play with, but still.
>> Any more questions? Any more questions? Going once, going twice. Sold to James. Your time is given back. [applause] Thank you, James. Thank you very much for an interesting and well-thought-out talk today. [music]


>> The last talk I gave last year, I was hoping and praying that some stuff happening in court was all going to get done by the time this BSides happened, and I could actually say some of this stuff out loud. That was not the case. So I had to sort of curtail it, and I did not allow photography. I couldn't say it out loud before because it was sensitive. I didn't want somebody to go on Facebook and find this dude. I didn't want somebody to tip him off. I didn't want to interfere with an ongoing investigation. All that [ __ ] is

over now. [laughter] I originally wasn't going to give this talk. [applause] I was pretty burned out on this, and the BSides crew were like, "No, can you please just tell us what the hell actually happened with this? Don't leave us hanging." So [clears throat] thank you to the BSides crew for talking to me, getting me off my ass, and putting this presentation together. There's so many great people here. So many friends. So many good people. This is where I want to be today. And I

promise you this is the last [ __ ] talk I'll ever give on this subject for the rest of my life. I look forward to a lot of really boring regular cyber talks in the future. But here we go. So welcome to Bastardo Finale, wrapping up years of OSINT work chasing criminals on the internet. [clears throat] Hi, everybody. For those of you that haven't met me, I'm a regular old guy — I work in traditional IT. I'm a cybersecurity guy. I'm a server guy. I used to be a code guy. But I also do this thing on the side related to bikes, and it's called bikeindex.org. And I'm not going to go super into it,

but it's just a free nonprofit service that lets you register bikes in case they get stolen — or before they get stolen. And then if you encounter a weird bike somewhere, like in a pawn shop or on the street, or you're buying it off Craigslist, you can do a real quick check and be like, I just want to make sure this isn't a stolen bike. That's it. We do a lot of amazing work. It's been a lot of fun. I do have a cyber background, but the thing I'm talking about today is related to this bikes thing, which [clears throat] has been around since

2013. It's nonprofit. It's just me and five other nerds, and we've recovered $27 million worth of bikes, which blows my mind. We have like 1.3 million bikes in the system, tons of partners. It's a cool tech thing. I'd love to give that talk, but that's not why I'm here, and time is short. I'm here because, as part of running that service, we get the most amazing intelligence on the crimiest [ __ ] in the universe. Just people that are just cringing so hard. And it all comes our way, and we have this great 10,000-foot view of who the

bad guys are in what region, and we have all this priceless intelligence coming into us. So [clears throat] long story short — and for those of you that have seen this before, you're going to see two or three slides you're familiar with, and then it's all brand new, so don't get bored — in 2020, I got this email. It was an email that was sent to a bike theft victim out of the Bay Area, and it said, "Hi, my name is [redacted]. I'm a cyclist from Mexico. I'm sorry to inform you that your bike is in Mexico," blank blank to be exact. "This bike is being sold at blank. You'll find

your bike with the Fox Transfer, blah blah blah blah. This mofo only sells stolen bikes, and all are from your area." And this was pretty intriguing, [clears throat] because the mofo he was talking about was actually located in La Barca, Jalisco, in the center of [ __ ] Mexico, which is 2,400 miles away from the Bay Area. So we were looking at bikes that had been stolen from San Jose — I'm sorry, not San Jose — San Francisco, Marin County, Oakland, and they were all showing up in La Barca, Jalisco, which is weird. We typically [clears throat] see them go state to state, maybe city to city, but transnational was a big difference.

And the instant we started looking at this guy, [clears throat] it was complete overwhelm. Thousands of pictures, hundreds of bikes, almost all of them coming back to stolens that had been in our system. He did this thing where he region-locked his Facebook page, so you could only see the page if you were in Mexico, which apparently is a thing Facebook will let you do. He'd post 10 to 15 posts a day. Each of those posts had 10 to 20 huge-ass pictures of these bikes, and we were just hitting stolen, stolen, stolen, stolen, stolen, stolen. This guy was like the Keyser Söze of Bay Area stolen bikes. [snorts]

And weirdly, [clears throat] he's affiliated — he's a cyclist, he's a biker, he runs all these other Facebook groups related to cycling in Mexico. And it went from "Wow, that's interesting" to "Holy [ __ ], who is this guy?" really fast. His Instagram was insane. One of the problems we had was that as he sold the bikes on his Facebook, he would get rid of the ads. He wouldn't advertise them anymore. But he had Instagram set up in such a way that those ads never went away. So, you know, we went from "Wow, this guy is interesting" to, no, wow,

this guy is making millions of dollars — you know, [ __ ] this guy. [clears throat] Fortunately, he left an online footprint a mile wide. As a businessman, as someone doing lots of self-promotion, as someone involved in the cycling community, and as someone who had just left a lot of these ads up — for somebody like me who's really into OSINT, and for the dozens of victims who were digging into this guy with us, it was like shooting fish in a barrel. You all know what OSINT means — open source intelligence — your crowd probably gets it. I like to call this guy an OSINT piñata. [laughter]

because the minute you took a swipe at that dude, everything just fell out. I think in the first 15 minutes, we had his name, his address, his email, his date of birth, his phone number, his license plate. And weirdly, the name of his bank, his account number, and the routing number beca