
BSidesPDX 2025 - Friday, Track 1

BSides PDX · 2025 · 7:46:24 · 1.5K views · Published 2025-10 · Watch on YouTube ↗
About this talk
BSidesPDX 2025 - October 24-25 at Portland State University.

Chapters:
0:00:00 Stream Starts
0:13:53 Friday Opening Remarks
0:36:59 Friday Keynote - Perri Adams
1:39:50 Accidental Honeypot
2:09:44 Drone Blind Spots
2:40:23 How Zero Trusty is Your Network Access?
3:42:10 Securing GraphQL
4:10:36 I'm not actually an SCCC admin...
5:36:43 From Pi to Pwnage
6:07:39 Beyond the Mask: The Snitch Puck
6:39:38 CFAA Plus
7:41:00 Friday Closing Remarks

BSides Portland (BSidesPDX) is a gathering of the most interesting infosec minds in Portland and the Pacific Northwest! Our passion for all things security has driven attendance from other parts of the country. Our goal is to provide an open environment for the InfoSec community to engage in conversations, learn from each other, and promote knowledge sharing and collaboration. The Portland and greater Northwest information security community spans a broad spectrum of participation, from CISOs, Fortune 100 company security experts, and small business sysadmins to independent security researchers. bsidespdx.org

[Music] [Applause]

Can you hear me? >> Am I good?

Good. Better not. There we go. Um, happy Friday. >> Um, I can think of a lot of different places to be on a Friday, and uh, this is a pretty good one, I think. So, thank you all for coming. Um, yeah. Woo. It's been a few days uh, getting up to this. Two weeks ago we were like, "Okay, BSides, we've got two weeks. Everything's under control. Everything's happening. This is great." And then I looked at my calendar, and I just have two to three day trips between now and then. So that was a mistake on my part, I realize. Um, so hi, I'm Joe. I should have introduced myself. Um, I uh, started

uh, actually, my first conference, my first security conference, was BSides Portland, the very first BSides Portland. Um, a co-worker had said, "Oh, you should go to this conference." I was like, "What's a conference?" Whatever. Um, my first presentation at a conference was the following year at BSides Portland. Um, the first conference I ran was also BSides Portland, um, in I think 2014 or 2015, when the original crew that had started it got a little burnt out and needed someone else to do all the work. Um, so I got sucked into that for a few years. Um, we've had a number of people come and go as organizers, and um, this year Malcolm uh, and I kind of

decided that we'll take it on together. Um, it's been very nice to have a two-in-the-box type situation, um, because then, you know, everything I forgot about, it's possible Malcolm took care of already, um, and likewise the other way around. The downside is that you never know what both of you forgot until you find out. Um, so BSides is a nonprofit organization. Um, we've got a board of directors. They meet uh, the second Wednesday of every month. Um, the idea is that we want to um, have a nonprofit. We want to uh, promote the community. We do this because we want you to have an event like this, an opportunity to get

together, to communicate, to interact, to have some talks. Uh, we want an opportunity for people to uh, present. Is that me or you? Um, yeah, it's you, it's me, whatever. It's all of us. Um, you know, we want new speakers to have the opportunity to present. Um, we want um, people who don't have the opportunity to go to, like, DEF CON or Black Hat to see some of the work that people do there, come back here. So there's a lot of stuff that goes on. Um, and so Maggie Harugui is our president. Chris Martin is our treasurer. Mickey is our secretary. Um, Brian, Diana, Gabriel, Ally, and Mickey are also board members. Actually, I guess Mickey's on there

twice. Um, board members. Um, is there anything I should change over here, or is this me, or is this you, or is it a loose cable, or Okay, it's not re-enumerating, so it's probably not bad. Um, we have a review board. Um, the review board is the one that accepts the talks. They go and read all the submissions and decide which ones uh, are going to get an opportunity to speak and which are not. Um, and unfortunately, we had a lot more submissions this year. I say unfortunately because we don't get to fit them all. Um, it's great that we got a lot more submissions. We have a whole bunch of

first-time speaker submissions, which is wonderful. Um, uh, we also didn't have room for all the submissions, so apologies to those who did not get to present, but that doesn't mean you shouldn't keep trying. Um, Michaelitz has been the review board uh, chair for a few years, and he's passing that torch on to Gabriel. Um, but we also got Marian, Wu Changang, Jo, and Allison, who uh, went through and read all of your far-fetched ideas, and near-fetched ones as well. Um, and then there's the actual uh, operating of the conference. So, the board, the nonprofit board, their job is to make sure that the organization just keeps perpetuating and does all the

filings and stuff like that. Um, they're really good at their job, because I had to ask who was on the board a few minutes ago to make that slide. Um, if the board is doing their job, you don't know they're there. It's kind of great. It's kind of like security, right? They are, you know, the emotional and uh, physical and financial security for the event. So, thank you to them. Um, but in terms of actually making stuff happen, uh, Rebecca uh, Diaz uh, joined us this year. Thank you so much. Uh, actually, I want to add something. Um, there was uh, uh, some thinking a few years ago, it's like, yeah, you know, if

someone's trying to infiltrate your organization, you know, someone's going to show up and get stuff done, and they're going to always be, like, tasking and getting stuff going, and that's how you know that this person is an outsider who's trying to, you know, uh, infiltrate your organization of some sort. And I was like, oh wow, I want one of those people. Like, can someone infiltrate this? Um, and with Rebecca I was like, are you sure you're not a plant? Because thank you so much for everything. She kept us on track this year. Um, I still managed to get us off track, but she

tried. Uh, Mickey took care of the website. Brian is here, uh, managing all the video along with a crew of volunteers. Shady, uh, wrangled all the other volunteers as well. Um, Evan has some slides about the CTF that I will speak to, or maybe he will. Um, um, Ryan and Brian are making registration happen. How smooth is registration this year? >> Awesome. Um, and Malcolm and I have been the ringleaders, just kind of organizing and getting stuff to go. We really need to thank PSU. We keep coming back here. Um, we used to do it at the convention center, and the uh, event got pretty big. Um, actually, if there's an empty seat, raise your hand.

If there's an empty seat next to you, raise your hand. See, lots of empty seats. Just go right through. Um, working with the Oregon Convention Center burns out volunteers. As an all-volunteer organization, it's kind of difficult to uh, you know, keep coming up with a steady supply of volunteers for the convention center to burn out as you interact with them. They're used to dealing with events that have event managers and big budgets and corporate sponsors that spend tens of thousands or hundreds of thousands of dollars. So, we're a little bit of an oddball there. Um, PSU, on the other hand, is just a joy to work with. Um, we are filling this space pretty fully.

Um, and we keep coming back because we just enjoy working with them, and they take care of things and they make it happen. Um, a couple notes about the event. Uh, how do you like the badges? >> Okay, so we did electronic badges last year, which was great, and we don't want to do that every year because, you know, it's a lot of stuff, and also it's an interesting year for getting electronics assembled, manufactured, shipped, everything. Um, and the same with lanyards. There's a lot of stuff that we use that ends up being, you know, custom made, and a lot of it is overseas and ships in, and if you plan, you can get it all in time.

Um, but at the same time, why do we need to create all this stuff? So our goal here was a badge that was useful, and we found these cool, like, bottle-opener, wrench, interesting ruler (because it has a bend in it), um, screwdriver tools. Um, and it's like, okay, we can just order a bunch of these, and you have a badge that's useful after the conference. Um, and how about who brought their own lanyard? Thank you. Because how many lanyards do you have at home? How many BSides Portland lanyards do you have at home? Do you know how many I have at home now? None. Because that was the pile of all

the extra lanyards. Um, we also, every year, would have a half a dozen, or half a dozen hundred or so, extra lanyards. So, those were out there, too. So, it's just kind of like, hey, let's use what we've got instead of uh, creating more waste. Um, who hasn't gotten their badge yet? Registration's on the first floor, a change from last year. I think that's a nicer space. We also have the sponsors and vendors down there. Um, check them out. They make this event happen. So, thank you very much to them. Um, that's also where coffee will be. Um, and some nonprofit organizations and groups that have information to share there.

Um, we have a new idea this year: community rooms. We have a couple small rooms if you'd like to have a conversation about a topic, right? Um, we've got basically a signup sheet, which we'll probably put at those rooms after this opening thing. Um, they're on the second floor right outside the elevators. Just write a topic on that board, and if you see a topic you want to talk about, go hang out in that room and talk about it. If it's empty and no one's there, go sit there and see who shows up, and maybe start a conversation. Um, we'll probably do that in a little more organized way in the future, but, you know, it's better than nothing.

Uh, t-shirts. Sorry about that. Um, we have docked the pay of everyone involved in not getting the t-shirts here on time. Um, as a volunteer organization, that didn't actually save us any money, though, unfortunately. Um, so the vendor site says they're going to arrive between the 21st and 23rd. Right now it says they're going to arrive between the 21st and 23rd. The tracking information said next week, although FedEx may be delivering one box of them today. We'll update you as we figure that out. Um, if we do get some t-shirts, we'll kind of prioritize the folks that are from out of town to get them first, and then, um,

for local folks, we can bring them to Rainc, we can keep them at the hackerspace so we can hand them off um, in person, and if not, we'll ship the remaining ones. So, apologies. So it goes. You all have t-shirts on already, so thank you for showing up with your own t-shirt, even though some of you forgot a lanyard. Um, we've got a quiz show this evening. Um, there's QR codes floating around. Um, after the event today, we'll um, have a reception in the back there and the quiz show over here. Um, there's prizes. It's fun. Um, we also have a capture the flag event. You want to >> Thanks, Ev.

>> Yeah. >> Yeah. So, this is Okay. Um, sounds cool. Um, we have now two years in a row, well, I think we missed a year of capture the flag at some point. Um, but now we're trying to make sure we don't do that again. Um, quick notes on that. It is fun and games. It is a side event to the main event. It is a tie-in event. Um, do I need to click >> space? >> Cool. Yeah, wall of text. Don't worry. All this information and more is on that web page that's linked right there, ctf.besidespdx.org. Go ahead and sign up if you feel like it. Signups are already open. We're not

going to start doing actual challenges until 11:00 a.m. because we don't want to step on the keynote's toes. Be excellent to each other. Be kind to our infra. Rules are all on the website if you choose to participate. Some notes on that: we now have a partnership this year with Seattle's lockpicking village. They generously drove down to run some lockpicking challenges for you, and also just a general lockpicking village. There are boxes of candy that you can break into and uh, steal a piece of candy for yourself. Um, there are points to be scored for the CTF at the social hour as well. Um, we are doing a little repeat of a challenge uh, that will involve going around and

talking to people. There will be a survey at the end of the CTF to let us know how we did. Um, we'll try to keep that open after the CTF ends, but it will only be worth points um, before the CTF ends. There will be prizes. They include some rather nice lock picks, um, a couple of books generously donated to us by the folks from Seattle, and some No Starch gift cards as well. There'll be a top three, and then also some judges' choice. Um, we will vote on who did super awesome fun stuff. Um, and those bonus ones have nothing to do with points on the CTF. A note on scoring: we're doing dynamic scoring again this year. Um, mostly

because it is incredibly hard to accurately estimate the difficulty of a challenge um, while you're writing it. Um, we've estimated approximately the difference between easy and not easy and set the starting points differently for them, just as a hint, um, so that nobody decides they are not good enough because they accidentally picked the hardest challenge on the list and got stuck. Um, that said, go have fun. Play it however you want. Um, that's really the goal here. It's all fun and games. Take a crack at it in between talks that you're interested in. If you get stuck on a problem, go watch a talk to take a break. And uh, yeah, have fun.
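For the curious, the dynamic scoring Evan describes, where a challenge's value decays as more teams solve it instead of being fixed up front, is commonly implemented with a quadratic decay formula of the kind CTFd-style platforms use. The parameters below (starting values, minimums, decay window) are illustrative assumptions, not the actual BSidesPDX CTF configuration:

```python
import math

def dynamic_score(initial: int, minimum: int, decay: int, solves: int) -> int:
    """CTFd-style dynamic scoring: a challenge's value decays
    quadratically from `initial` toward `minimum` as more teams
    solve it, bottoming out at `minimum` after `decay` solves."""
    value = ((minimum - initial) / (decay ** 2)) * (solves ** 2) + initial
    return max(minimum, math.ceil(value))

# An "easy" challenge starts lower than a "hard" one, as a difficulty hint:
easy = dynamic_score(initial=100, minimum=50, decay=20, solves=10)
hard = dynamic_score(initial=500, minimum=100, decay=20, solves=10)
```

Because every team's points for a challenge are recomputed from the current solve count, mis-estimating difficulty when writing the challenge matters less: a challenge that turns out to be easy drops in value on its own.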

[Applause] Speaking of fun and uh, enjoying the conference, um, we do have a schedule of talks there. A lot of 20-minute talks this year. That was intentional because, you know, we want you to see a lot more new stuff. Um, we also have longer breaks between the talks, and part of that is because we want you to talk to each other. Um, there's a bunch of seats on this side. Feel free to just make a line of people and come right across the front. I'll be angry if you stand in the door. I won't be angry if you walk in front while I'm talking. So, I won't actually get angry. I'm too tired to be angry.

Um, I slept last night, though. So, um, we have stuff in the hall. Right outside these doors, we've got a photo booth with Sasquatch. So, I don't know. Maybe you'll like that. Maybe it was a weird idea. Whatever. Maybe we should have worked on the t-shirts instead of the photo booth, but so it goes. Um, yeah. So, how many of you uh, have felt a little exhausted this past year? Um, that's okay. Um, one of the things that I appreciate about BSides Portland is we just show up and have a great time. Um, as organizers, we do our best. Uh, we hopefully don't get your expectations too high. Um, many of you get the

opportunity to come because it's free. Uh, many of you uh, get the opportunity to come because your employer, you know, lets you leave work um, and pays for your ticket. Um, and especially thank you to those who pay a little bit more with that ticket, because that lets us do that. Um, the people who pay more than they need to, the sponsors, that's what funds the event. Um, oh, CTF prizes. There's prizes. Um, we have sponsors. We have a lot of sponsors. Um, like when we show up, you know, some people paid, some people didn't. Everybody gets a badge. You're all here. It doesn't matter, right? Um, so we got a lot of

sponsors. We have sponsor tiers. Some pay more, some pay less. Um, BPM, Formal, Chainguard, ConductorOne, uh, Profit, LMG. Uh, LMG is a new sponsor. Thank you. I think a couple others are as well. I don't remember the one on the left. >> Tool. >> Coal tool.AI, right? Um, Eclypsium has been back many, many years. Um, ISACA, um, Palo Alto Networks, SecuringHardware.com, uh, SpecterOps. Um, and you recognize all these company names. Uh, Identity Technologies, ISSA, No Starch Press. Um, these are companies some of you work for, some of you use tools and products from. Um, we have community sponsors as well, you know, other community organizations, EFF. Um, the schedule's

also in Hacker Tracker. Um, so if you use Hacker Tracker (it's an app), you can get the schedule. It's a really easy way to find out what's what and where. Um, yeah, sorry, I'm a little tired. Um, so, uh, if you need help, we have people walking around with balloons. We have an info desk downstairs in the lobby. Um, registration is downstairs, another spot to get information. Um, we will, uh, probably have some closing notes at the end of the day, opening notes tomorrow morning, and closing notes at the end of the day tomorrow for kind of general stuff. And if anything big comes up, well, if we figure out a t-shirt plan, we'll probably send an email.

Um, so with that, I don't have my next slide. Oh, yeah. Be kind and have fun. Um, Malcolm. Yeah.

>> Oh, yeah. Does anybody here drink coffee? My first year running BSides Portland, I just didn't know; I don't drink coffee. Uh, I just said, yeah, sure, have endless coffee, and actually we drank like three-quarters of the conference budget in coffee that first year. I learned a lot about coffee. I learned about conference budgeting and a few other things. Um, before we introduce our keynote, I don't think we've actually shared the unfortunate news that uh, Perri won't be able to make it here. Um, it was some interesting back and forth. Perhaps she'll touch on that. Uh, but we are going to set up so we can uh, have her remotely uh, give her keynote

address. Um, so if you give us a few moments, we'll set that up. Um, that'll be a great time for all of you to come and find seats. Raise your hand if there's an empty seat next to you. Um, if you're on the end, feel free to scoot in so the end seats are available, and we'll work on this. So

[Music]

[Music] You can't see everyone here. Um, I could actually turn my laptop around. Actually, I'm going to turn my laptop around for just a minute. Hopefully, fingers crossed, it doesn't disconnect anything. Got to turn on my camera. One minute.

There's a ceiling. [Laughter] [Applause] >> I might. Yeah. >> Hold on while I precariously. >> We are professionals.

stream, right? >> Yeah. >> Okay. So, um, yeah. >> So, we're not going to leave that up, because that's going to show up on the stream, and there are some people who probably have no-photo tags on their lanyards. Um, Perri, thank you so much for coming, or for showing up to the meeting. Um, I think I had met Perri a few times, but the time that I realized, like, oh, we need to get her to keynote, um, was at LabsCon, which is a threat intelligence conference, and they have a morning hike up Camelback. Perri, can you hear me? Probably not. >> No, you'll have to open up your mic. >> Yeah, let's I'm muted.

>> Turn your camera off while you're down there, too. >> So, yeah. Uh, Perri, now can you hear me? >> I can. Can you hear me? >> I can hear you. Um, I was uh, just showing you the audience, because um, you know, they're here. They're happy to see you. And I was just saying, uh, I met Perri uh, a few times, but when I realized she was the perfect BSides Portland keynote was on Camelback Mountain. Um, we made it about halfway up um, at LabsCon. They have a morning hike, way too early in the morning. Um, but you get up Camelback Mountain to see the sunrise, which is kind of great. We made it about

halfway, and we're sitting there, and I always tend to have lots of stuff in my backpack, even when I'm not at a conference, when I'm just walking. Um, and I had a Bob's Red Mill granola bar, and Perri, you were quite excited to see that granola bar, if I recall. >> And uh, that's when I learned Perri um, is from Oregon, and I'm like, oh >> Oregon. I Oh. >> Okay. Um, one of my favorite things about BSides Portland and organizing it is finding keynotes, and I want to find people who have a connection to the Pacific Northwest. I want to find people who do amazing things in the industry. I

want to find people who are doing things at the edge of the industry that we all might work on. And I thought, Perri, you would be an excellent candidate. Um, so without any more distractions and um, uh, me rambling, I think Are you ready, uh, Perri? >> I am ready. Can you hear me? >> Uh, we can hear you. Thank you, Perri. >> Can you see the slides? >> We can see the slides. We can see you. We can hear you. Um, I'm going to put myself on mute now. >> Wonderful. Hello, everyone. I wish desperately I could be there uh, with you today. I was at the airport last night and uh, had some unfortunate issues with

uh, the flight. However, I'm going to hope to make this as exciting and engaging as possible. I was so excited when Joe asked me to give the keynote because, like he said, I am from the Pacific Northwest. I was born and raised in Seattle, and then my parents moved down to Oregon, actually southern Oregon, uh, about half a decade ago. And they now live in Ashland, Oregon, right outside of Medford. And I go there every year for the holidays. But the first time I ever came to Portland was for a Latin competition, actually. And it became a yearly endeavor to go down to Portland to do our Latin competition, because I was, I think, like many of the

folks in the room, a bit of a nerd in middle school and high school. And so we'd drive down from Seattle, do the Latin competition, and we'd always go to Voodoo Doughnut and go to Powell's. And my mother both loved and hated this, because I would go to Powell's and buy maybe five or ten of the largest hardback used books I could find, and then we'd have to truck them back to the car and then truck them all the way back up to Seattle. And I just have phenomenal, phenomenal memories. So, I'm just truly sad I can't be in Portland today, because it is one of my favorite cities. But, uh, hopefully I'll make this as engaging

as possible. And I think that Joe is going to do questions at the end. And I'm going to try to leave enough time for those, because this is, I think, really fun material, and it's material that is available to everyone to play around with and leverage in your day jobs and your hobbies, etc. And I'd really encourage uh, you to uh, engage with it. So I'm going to be talking about uh, the AI Cyber Challenge, which is something that I did when I was at DARPA. And so I gave a little bit about my PNW history, but from a professional point of view, I've spent the last four years, and I've recently left uh, and am now at Dartmouth as a

fellow, but prior to that I spent the last four years at the Defense Advanced Research Projects Agency, which is an agency in the federal government that focuses on funding the cutting edge, the next great breakthrough in science and technology across the spectrum. And I focused on cybersecurity, and I followed in the footsteps of some really great PMs that focused on cybersecurity, such as, for instance, Mudge, who was at the agency in the 2010s. And when I was there, I started a challenge called the AI Cyber Challenge, which was a two-year-long effort to use AI to find and fix vulnerabilities in software. And so I want to give a little bit of

background on the competition, because what we'll be talking about today is the takeaways from the competition. What are we walking away with? The competition started when I pitched it in uh, the spring of 2023 as a two-year-long competition in which we partner with a number of entities in the cybersecurity and AI space to develop tools that could automatically find and fix vulnerabilities in software using AI. And this was a time when LLMs had really come into their more powerful iterations. And so in the 2010s, you'd seen promise, but it was really around 2022-2023 when they started becoming so powerful that it made sense to apply them to the

space of software security. And the agency had a history of focusing on automatic software security prior to that. But what the agency focused on was more traditional methods in what we call program analysis, so automatic ways of understanding a computer program, and there hadn't been a significant amount of integration with uh, uh, LLM-type models. And so this was going to be a two-year-long competition. We'd run the semifinals uh, after one year at DEF CON, and the top seven teams would walk away with $2 million each in prize money. They would return to DEF CON for the final competition, in which the first-place winner would win $4 million. And so this is a total prize pool, combined with we

had some prizes available to small businesses at the beginning, that ended up being around $30 million. So we were putting a lot of money down for software security, and we had 42 teams compete. Uh, in the semifinals there were five challenge projects. Now, I want to talk a little bit about the way that we designed the competition and the thought that went into this design to ensure that what came out of the competition was actual usable software and discoveries, pushing the state of the art forward. And so the first aspect of this was the open-source projects, which I'll talk about in a bit, but also the collaborators that we brought on. And so

we had Google, Anthropic, OpenAI, and Microsoft as collaborators on the challenge, as well as the Linux Foundation, the Open Source Security Foundation, and Black Hat and DEF CON, because we wanted to make sure that we were getting the state of the art in LLMs. We were getting uh, perspective from the open-source community, which, at least from my perspective, is one of the most important communities that we need to support when it comes to software security, as well as ensuring we were running this in a place where it could get the most visibility and engagement with the cybersecurity community. And so in designing an effective competition, like I talked about, the first aspect was prize money.

You need to make this a worthwhile competition for the best and the brightest in both AI and software security. If your software security experts can go and make a lot more money doing something else, they're just not going to do your competition. And so you do have to make it worthwhile. And the prize money ends up actually being, uh, for a team of seven or eight experts, getting $2 million to spend a year doing that, roughly equal to what they might get paid uh, elsewhere doing something else. You also, like I talked about, need access to state-of-the-art technology. With the AI tooling that we were trying to build, we were relying on foundational frontier

models. And those models were changing so quickly that if you ran a competition without getting perspective and insight from the folks making those models, you might be making tools that just do not represent the state of the art in this space. And so we went to OpenAI, Anthropic, DeepMind, and Microsoft. I approached all of those companies in uh, May of 2023, and I said, "Hey, I'm thinking about doing this competition. I'd love to have uh, uh, your engagement with it." And all of those places were really excited about software security. They build software in-house. They rely on a lot of open-source software. And so that was a really impactful aspect of it. The third thing was moving quickly.

Uh, like I talked about, I was ideating this in May of 2023, and we had the finals in August of 2025. The US government does not usually move that quickly. This was a very, uh, I think, unique opportunity, and it took so many bright, driven folks pushing this forward, committing to the timeline. But what moving quickly also allowed us to do was to harness the excitement around LLMs at the time and to push that energy into the competition. Now, probably one of the most important aspects of designing a competition, if not the most important aspect, is finally the game structure. You can have all of these other things: prize money,

collaborators, uh, the ability to move fast, but you also need a game structure that is designed to effectively measure who was the winner, and do that in a fair and transparent way, while also creating an environment that isn't overly gamified, that's not an overly toy environment, but rather something that's going to produce discoveries and technology with real-world application. That is a very hard balancing act to do, and something that I'd say myself and my team agonized over as we were putting this together. This is one of, if not the most important, aspects of designing a competition like this. So, as I mentioned earlier, part of what

went into that game design was real-world software. If you create a competition and you have toy software, that sometimes makes sense depending on the maturity of the technology. And DARPA had run a competition like that 10 years previously, which had really driven forward the state of the art. But now that we had LLMs, now that we had had 10 more years of research focused on automatic software understanding, it did not make any sense to run another competition focused on synthetic software projects that were designed for the competition. We had to make sure that our tools could run on real-world software. And that was what motivated us partnering with the Linux Foundation and the Open Source Security Foundation. But

it also motivated us to design all of the challenges in the competition around open-source projects. And so if I go back a little bit and we look at the semifinals, you can see that there were five challenge projects: the Linux kernel, Nginx, Tika, Jenkins, and SQLite. And we actually found real-world vulnerabilities in that software, in addition to the vulnerabilities that we inserted into the software as part of the competition. And that itself was a difficult balancing act as part of the game design. We needed to ensure that there were a sufficient number of vulnerabilities in the software to really stress-test the tools that were being created by the competitors. But there are also going to

be real-world vulnerabilities, which means that as organizers we don't have the answer key. We don't know all of the vulnerabilities in the software. Automatic vulnerability discovery suffers from a problem of false positives. And so in scoring all of the teams, and again, there were 42 teams just in the semifinals, we had to have a way to measure whether a team had found a real vulnerability, whether it was one that we put in ourselves or one that was in that open-source project organically, as opposed to them finding what they think might be a vulnerability but isn't actually a vulnerability. The software projects that we developed

challenges around consisted of these projects. So you have curl, IPF, FreeRDP, and so on. Some of these are going to be very familiar and others less familiar, but they are all software projects that are very commonly used. We decided to focus on two languages, C and Java, and on different kinds of vulnerabilities in each of those languages, and we had different kinds of challenges. And going back to this concept of real-world game design that's going to translate into real-world gains, we designed challenges around

the idea of making something that could fit within a real-world software development life cycle. So in some cases we had challenges that were diffs, delta scans: you would receive a commit, a diff of what the code looked like before and after the change, and the AI systems the teams were developing, which we called CRSes, for cyber reasoning systems, would have to determine whether there was a vulnerability in that commit. In some cases the CRS would just receive a large code base and would have to scan that code base. In other cases the CRS would receive the

output of a static analysis tool, something called a SARIF, the Static Analysis Results Interchange Format for vulnerability details, and the CRS would have to determine, based on the output of the static analysis tool, whether that was a true vulnerability or a false positive. In the software development ecosystem, when you run static analysis tools, you often run into false positives. So all of these, the commit review, the full source code review, and the static analysis output review, all mimicked different aspects of real-world software development. And what teams would submit based on that was a proof of vulnerability. So they would take in these different

kinds of challenge formats and they would submit a proof of vulnerability in the form of a triggering input: a user input that would interact with the software in a way that triggered that vulnerability. They would submit a patch for that vulnerability, and they might submit their own static analysis report, or a combination of all three. So the scoring system was the next aspect of game design that had to be considered. How do you weight these different submissions from the teams? How do you weight different challenges? And how do you weight a proof of vulnerability versus a patch versus a static analysis report?
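To make that weighting question concrete, here is a toy Python sketch of a weighted score. The category weights are invented for illustration, and the fourth-power accuracy multiplier mirrors the exponent described later in the talk; the actual AIxCC equation differed.

```python
# Hypothetical scoring sketch. The category weights here are invented
# for illustration; the fourth-power accuracy multiplier mirrors the
# exponent the talk describes, but the real AIxCC equation differed.

def team_score(povs: int, patches: int, sarif_reports: int,
               accuracy: float) -> float:
    POV_W, PATCH_W, SARIF_W = 2.0, 6.0, 1.0  # patching weighted most heavily
    raw = POV_W * povs + PATCH_W * patches + SARIF_W * sarif_reports
    return raw * accuracy ** 4  # inaccuracy is punished sharply

# 50% accuracy keeps only 1/16 of the raw score;
# 90% accuracy still keeps about two-thirds of it.
print(team_score(10, 5, 8, 1.0))  # 58.0
print(team_score(10, 5, 8, 0.5))  # 3.625
```

The key design lever is the exponent: with a linear accuracy multiplier, a sloppy team loses points proportionally; raised to the fourth power, sustained inaccuracy collapses the score.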

And this is how the final team scores ended up looking. Eventually we converged on the concept of a weighted equation that could be used to combine all of these different categories into one score, with different weights. The thought behind the weights was ensuring that we were able to effectively toggle and tune just how much we wanted, for instance, false positives to count against a team. Did we want teams to suffer significant point losses for accuracy failures, or did we want to tune that such that some inaccuracies were okay but too many were not? We also wanted to ensure that patching was more

heavily weighted than vulnerability discovery. This is a defensive competition, and it is focused on providing the software community with tools they can actually use. So finding vulnerabilities is truly not enough; all that does is give developers more work to do. What we wanted was to design something that would actually create tools that could automatically fix software, and that required weighting patching more than simply finding the vulnerability. The weights for this equation were released prior to the competition. And here we have the score breakdown of the teams, which I'll go into in just a second, but I want to finish talking about the game design aspects, because

once you look at the outputs of a competition after the fact, it might seem straightforward that, oh, great, it produced all of these things, that's exciting. But it's actually so much more complicated than folks realize to produce a well-designed game that will result in these tools. And so one of the other things we considered was resource constraints. One of our concerns was: what happens when an extremely well-funded team just throws all of the compute at the problem and beats out all the other teams? Not by innovating, not by thinking creatively, but simply by using an amount of compute that isn't realistic in the real world. And so all of the teams had resource

constraints. The teams were given a certain budget for their cloud compute and a certain budget for their LLM usage, and they had to think very strategically about how to use that budget. What that did was produce results that are performant and that are constantly considering which agent, which aspect of the CRS, is using these scarce resources and how they can best be applied. And the final requirement for this competition, the one that I am most proud of, is the open-source requirement. This requirement ensured that all of the teams competing, in order to receive prize money, had to open source their CRS at the end of the competition.
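To make the budget pressure concrete, here is a toy sketch of per-component budget accounting, the kind of constraint every team operated under. The dollar figures and component names are invented for illustration; the real systems tracked spend through their cloud and LLM providers.

```python
# Toy sketch of per-component budget accounting under a fixed cap.
# Dollar figures and component names are invented for illustration.

class Budget:
    def __init__(self, total_usd: float):
        self.total = total_usd
        self.spent: dict[str, float] = {}

    def charge(self, component: str, usd: float) -> None:
        """Record spend, refusing anything that would blow the budget."""
        if sum(self.spent.values()) + usd > self.total:
            raise RuntimeError(f"{component} would exceed the budget")
        self.spent[component] = self.spent.get(component, 0.0) + usd

    def remaining(self) -> float:
        return self.total - sum(self.spent.values())

llm_budget = Budget(50_000.0)
llm_budget.charge("multilang-fuzzing", 30_000.0)
llm_budget.charge("java-bug-finding", 15_000.0)
print(llm_budget.remaining())  # 5000.0
```

A tracker like this is what forces the strategic question the talk describes: which agent deserves the next dollar of scarce compute.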

This is a space in which there's a lot of nuance. Oftentimes tools require specialization, tailoring to specific software projects, and in some cases research that's not open-sourced and not immediately commercializable ends up sitting on a shelf. Because there was so much prize money on the line, we were able to put in place the open-source requirement, which made all of the tools and their source code available to the entire community to learn from, to build on top of, and to turn into software security tools that could fit within a CI/CD pipeline. And so we're going to talk about a couple of the solutions that have now been open sourced and what takeaways

we can learn, and what might be leveraged in other aspects of software security or cybersecurity more broadly. We're going to start with the team that came in first, with a score that was almost twice that of any of the other teams: Team Atlanta. As you can see, this is a team that came in with a very high accuracy score and high scores across the board in all categories, but especially in their program repair score. So they clearly invested a significant amount in patching. They were also the team that used their resources most efficiently. As you can see, given a budget of $85,000 for Azure compute

resources, they used almost that entire budget, and while not nearly the total amount they were allowed for LLM usage, they used the highest amount of any team. So they thought pretty seriously about how to make sure that things were not left on the cutting-room floor, that they were not letting resources go to waste. And it's really interesting to dig into the work that they did. This is a team made up of researchers at Georgia Tech and out of Samsung, as well as a couple of other collaborators, and they released all of their code open

source. They've released blog posts as well as a 150-page paper on the design of their system, and I'm going to refer to that paper along the way in this presentation. As you can see, they have a very complex design structure. You can dig into each of these components, as we will, in which they're interacting with the LLM providers and with the AIxCC organizers. On the right side of the screen, you can see they've split their tool into several different components. They designed one component for finding vulnerabilities in C, one component for finding vulnerabilities in Java, and one that was language-agnostic. Those were their three bug-finding modules. One

of the really interesting takeaways, though, is that they found that their multilang approach was the most successful approach that they had, and it accounted for 70% of vulnerabilities. And I'll apologize, because one of my cats is making a surprise appearance, but I figured for BSides that would be a welcome addition. So this also provides a nice opportunity for a slight aside. As I've been putting together several keynotes on this topic, I've been thinking a lot about the ways in which we use AI to automate tasks. And I thought, well, there's so much information now on AIxCC on the internet. Could I simply ask an

LLM to automate this for me? Could I ask it to automatically produce slides for me? So I went and said, "Generate me a slide deck on AIxCC." And it gave me a slide deck that was relatively generic and high-level, but it did say some things that were true: you know, we want fair competition between AI systems, realistic complex cyber environments. And so I thought, okay, this is interesting. I could have probably paid an intern who knew nothing about AIxCC but could Google to do the same thing, but this is useful, and there is a point to this analogy. So then I said, well, let's add an image

to the slide deck. And so I asked it to do this. In this case it was ChatGPT, and ChatGPT said, "Oh, yes, let me add a nice image." I believe it said something to the effect of, "Let me spruce up this slide deck for you." And that was the image it added. So then GPT asked me if I wanted it to add some nicer images, and I thought, okay, great, it recognizes that that's not really the kind of image I want in a slide deck. And that was its approach to making the images more creative, right? Just right on top of the text.

And then it asked me if I wanted some additional features in the slide deck, and I said sure. It added a timeline, which honestly could be a timeline for quite literally any project that has ever happened. So I decided, okay, what if I went to a provider that isn't just taking an LLM and having it produce slides, but someone that has integrated knowledge of how slide decks should work and a broader structure around the creation of slide decks, combined with an LLM's ability to generate text; recognizing that you're not going to have an out-of-the-box approach and there's going to need to be an

instrumented system here. And so I asked a tool that does slide generation to make me this slide deck, and it had much better graphics this time around, really truly phenomenal graphics, and the text was completely nonsensical. I was glad to see that we've come a little bit farther in the use of AI for slide generation, and I'm sure by the end of the year the tools will be ten times as good as they are now. But what struck me was that you really need to combine different areas of expertise, different aspects of what LLMs are good at with what humans are good at, with

potentially agents trained on specific visuals, and so on, and combine those into even something as simple as making a slide deck on things that are publicly available on the internet. I wanted to give what seems like a very basic, easy-to-understand example of where LLMs might fall flat in our daily lives, because now I'm going to talk about where they fall flat in something that's much more complicated, where they are really effective, and how those things have been combined together. Because at the end of the day it boils down to the same problem: LLMs are able to solve some subtasks extremely well. And if you can

thoughtfully engineer different agents handling those different subtasks, and combine them into an instrumented system that then uses more traditional software or algorithmic techniques for things that LLMs are not good at, like, say, discrete reasoning or mathematical reasoning, you can create a system that is very effective. And this was the approach that the teams took. So we're going to focus on the multilang component of the system, because that ended up accounting for 70% of the vulnerabilities that this entire system found. And we're going to talk about the boring aspects of this that have nothing to do directly with AI, except for the cost, but are things that

have been a fundamental aspect of software engineering for years and years, which is simply engineering: performance and resource management. The first, I think 10, maybe 20, pages of the Team Atlanta report on their system were focused on how they did performance management, because it was so crucial to the problem. Ultimately AIxCC, like many other things in software development and many other things that tie into software security, is an optimization problem. How do we optimize all of these different resources? How do we optimize all of these different agents? And they took several key approaches, such as being able to reason over multiple challenges, or challenge projects as they were called,

concurrently. They designed a system that was as failsafe as possible, to ensure that even if there were issues in some components, the entire system would be able to continue moving forward. They, like I talked about, fully utilized their resource budget, and they also collected meaningful logs, which we'll actually get to dive into later, because this was again a requirement of the game design, and it allowed the competitors to better tune their system to utilize all of these resources. So they have their five modules for vulnerability discovery and program repair, and if we look at which modules

were using the most resources, they again decided to focus the most resources on their multilang, general-purpose system, which is really interesting if you have a history in vulnerability discovery, because a lot of vulnerability discovery up to this point was heavily tailored for the idiosyncrasies of various languages. So the fact that they found that their multi-language component made the most sense to put resources on was a really interesting takeaway. They also introduced rate limits for their agents, in order to ensure that different parts of their system weren't bogarting all of the resources. So they spent a ton of time thinking about how to create this system, and they used Kubernetes, they

used a lot of existing resource management frameworks and other kinds of multi-node instrumentation frameworks, traditional software engineering, to build this system. And so we're going to talk a little bit about the multilang system. The proof of vulnerability, that thing I talked about earlier, is an input into a program that triggers the vulnerability. The way we designed the game to work was that you're triggering something like ASAN or Jazzer. So you're triggering essentially a sanitizer: your program is compiled with little bits of additional instructions that check whether you're writing out of bounds in memory. So if you've used anything like Valgrind or Dr. Memory to check

for something like this, that's going to be similar to a sanitizer. We built this on top of ASAN and UBSAN, sanitizers that are looking for out-of-bounds or uninitialized use of memory, and things like Jazzer for Java, which focuses on a different class of vulnerabilities, things like command injection. Jazzer will essentially instrument the code to see whether the user has been able to inject commands into, and I'm sorry, I have a cat on me again, into an executable piece of code that the user is not supposed to inject commands into. For instance, that

actually gets to be very complicated when using sanitizers, because you have to have a recognizable string there, and then the CRSes need to know what that string looks like, what that unauthorized command might look like. So this required a lot of effort on the part of the game designers, and it required specific tailoring by the teams. The reason we used things like ASAN and Jazzer is that they are commonly used in open-source projects today; Google's OSS-Fuzz actually uses them in its large-scale vulnerability discovery systems. So by building on top of existing work, we could make this as real-world as possible. So the AIxCC teams, recognizing this, combined traditional

approaches that leveraged ASAN, these sanitization approaches, and symbolic execution engines, along with existing static analysis engines like CodeQL, Joern, and Infer, and they combined them with AI. This ensured that the teams were building on top of the state of the art rather than recreating it, and they were also using AI to fill in the gaps between these tools, as well as building standalone AI systems. So this is what the multilang system looked like for Team Atlanta. And as you can see here, it looks like a giant fuzzer. There's a corpus manager, which is something that manages all of the seeds, the different program inputs that are being mutated and fed back into the

fuzzer. There are the mutation engines themselves, which are taking those seeds and mutating them in various ways, and then an executor, which is executing the code with these inputs and seeing if a crash occurs or a sanitizer has been triggered. And what's interesting is that in some cases they have just general-purpose fuzzers, general-purpose mutation engines that have nothing to do with AI, and in other cases they're using AI to actually mutate those seeds. So rather than just use AI as an out-of-the-box vulnerability discovery tool, they're combining it with existing program analysis tools to create something that's more performant than existing tools, something that takes it to that next level.

So like I talked about, their multilang system accounted for 70% of the proofs of vulnerability that they submitted during the competition, but the general-purpose fuzzer, the one that didn't use AI, itself accounted for 50% of the crashes found. We found this to be a really interesting takeaway, because on first glance this might seem to suggest that general-purpose fuzzing was just as effective, if not more effective, than using AI. But this is why really digging into the approaches matters; it's nuanced. We would have been more than happy to accept that as a conclusion of the competition; if the competition showed that AI really doesn't make a difference in

vulnerability discovery, that's completely fine. However, what this team found when they went in and looked at the results is that even though the general-purpose fuzzer might have produced the final input that triggered the sanitizer, that input had likely at some point been mutated by the LLM mutator. So LLMs were absolutely playing a role. The other thing we'll talk about in a second is that the LLM-aided vulnerability discovery tools ended up finding different kinds of vulnerabilities, more complicated ones that required weirder, more complex input structures to trigger. So the LLM was really helping to get deeper into code and create inputs that would

trigger very hard-to-reach code paths. The team concluded that you really need solid engineering, solid fundamentals, and traditional effective approaches before adding in LLMs to truly harness their potential. So we're going to talk about that LLM-aided mutator, the MLA, and how it worked, because I think it's an excellent example of agentic approaches in vulnerability discovery. Here we have the overall architecture of the MLA, and as you can see, there are different components all interacting with each other, essentially different agents, and each of these agents is responsible for a different task. Team Atlanta did an excellent job describing what each of these agents was responsible for. So there was an agent

responsible for parsing the call graph and being the expert on how to navigate through a large codebase, what is calling what, and so on. For large open-source projects, for large projects in general, this is essential, because you need to understand how all of these different files, all of these different pieces of the software, are interacting with each other. You then had a bug candidate detection agent, a detective-style agent that looks into potential vulnerabilities and investigates them; they tailored an agent to be an expert in bug candidate detection. They also had an agent that made the call graphs, that surveyed uncharted territory, things that you couldn't use traditional tools to

recover. If you've ever used something like IDA or other kinds of tools, even VS Code, they will sometimes fail to connect calls, especially indirect function calls and things like that. So you have to have an agent that is tailored to fill in those gaps and become an expert on that. Then you have an understanding agent that gets the lay of the land, understands where all of the entry points are, understands how the code project, the spec, and some of the idiosyncrasies of the challenge worked. And then you had an agent that, instead of just creating payloads, inputs that could trigger vulnerabilities, actually generated Python code that would not just create payloads, but

mutate and change those payloads live. This is far beyond what any general-purpose fuzzer is able to do, and it's an excellent example of asking: what are LLMs really good at? Code generation for Python scripts; let's apply that here. And so what Team Atlanta would say is that the real breakthrough in their MLA component is that it generates attack strategies, not just attack payloads. Each generator is essentially a function that condenses the security researcher's playbook, and this allows for using a generator with very complex pieces of software that have very complex input formats. The teams also spent a huge amount of time learning how

they could best do prompt engineering. They took existing prompt engineering approaches like system prompting, contextual prompting, role prompting, chain-of-thought reasoning, and so on, and adapted them for reasoning about code. This is something that I'd really encourage everyone to go check out in the open-source code projects, because you can get a great sense of how the teams were able to prompt different LLMs to produce effective results for code and software understanding. And I want to use this opportunity to compare some of the prompt engineering techniques between two of the different teams, to see how they approached prompt engineering differently. So, I'm going to see if I

can get creative with screen sharing and

So I'm hoping that everyone can see this. This is an example of the approach to prompt engineering taken by Theori, the third-place team. In this case, they give the LLM a set of instructions that's very well structured. They say: "A static analysis tool has identified a potential vulnerability in a modified version of FreeRDP. Your task is to analyze the code snippet and consider if the claimed vulnerability can be reached and triggered via malicious user input. You may need to make reasonable assumptions about a malicious input's control over the relevant data." In practice, this is basically saying you're going to need to assume that you don't know everything

about reachability over a large code project. If you'll recall, Team Atlanta actually outsourced call graph analysis to several different agents. The prompt goes on to give several such examples. For instance: a potential integer overflow computed from the actual length of the user input is unlikely to be triggerable in practice; however, if it is computed from decoded user input, it is likely to be triggerable. That is an example this prompt gives. "The reported vulnerability can likely be triggered via user input" is the kind of detail it's after. And then there are specific instructions, like: you must respond with one of the following options, with no other output before or after "likely" or

"unlikely." And you can actually scroll through, all of these interactions are available online, and you can see the kinds of responses the assistants were giving. Theori used this as an initial pass on all of the challenges, to triage for resources. So, I'm going to go back to the presentation

and talk through that FreeRDP example. But if we go and we look at all of these different kinds of prompt engineering techniques, you can see... Oh, Joe, I see you're raising a hand. >> Um, sorry, we've just a few minutes left. We didn't organize timing of warnings, but we've got a few more minutes. >> Got it. Thank you. We're about wrapping up. Um, you can see the different ways in which they've used prompt engineering approaches. So if we go through the FreeRDP example, this was a case where we had a backdoor, a vulnerability that was obfuscated within the code base,

and it required certain kinds of formatting, certain kinds of message formats in the inputs, to actually be triggerable. And it came in the form of a diff that looked like this. The initial LLM pass produces an assessment that there's a likely vulnerability, and the reason is that the function contains obfuscated code that allows the backdoor to be triggered. What I thought was the most interesting aspect of this interaction is that you can see exactly how much money it costs to do one of these messages back and forth. For instance, in this case, it cost 80 cents for one of Theori's additional

agents to analyze and then respond with a specific structured input that they expected could trigger the backdoor. Their agentic approach was to decompose the tasks, structure complex outputs, and adapt to the models. Because Theori took an LLM-forward approach, which we unfortunately don't have enough time to talk about, they adapted to the models as the models were coming out; they had two years in which they were constantly changing the ways in which they would interact with and prompt the model. So, I'm going to really quickly go through the rest of this to be sure that we have enough time. And a couple of the

things I'm going to hit on very quickly are: in some cases the teams ended up designing specific tools, modifying grep or cat or things like that, such that they could be easily used by LLMs. So they created LLM-specific tools. They also had varying approaches to combining static and dynamic analysis, and they structured this as part of their game strategy for how they wanted to account for the different accuracy weights. With Theori's approach, they actually had pretty low accuracy compared to Atlanta. But if you go and calculate their score based on all of these metrics, that ended up not having a significant impact, because

the accuracy multiplier had an exponent of four. So rather than being cubic, it was one degree beyond that, which I'm sure there is a word for. So I apologize to all the math people in the audience whom I'm offending. What that essentially meant, if we go all the way back to the beginning and look at how these were being scored, is that by raising accuracy to the fourth power, really bad accuracy was going to really kill you. But if you were getting around 50%, you would lose a significant chunk of points, but you would still be scoring relatively highly, as long as you were doing well across

other categories, especially in program repair. So, to wrap up: I'm not going to go into failing safely, but that was also another core component of these designs. What I'd say is that all of these tools are open source. You can click through all of those logs, all of the interactions that have been made available, and see the different ways that prompting these LLMs produced different results. There are really significant takeaways for how these prompts were engineered that can be applied to many kinds of tasks in software engineering, in creating agents that can reason over code and be force multipliers to existing software

engineers. There are some really interesting neurosymbolic approaches that combine existing program analysis tools with LLMs, and very thoughtful approaches to creating agent systems for the purposes of program analysis. But the other aspect of it is that, for the purposes of the competition, software vulnerability discovery and remediation is immediately applicable to the world of software defense, especially as AI coding becomes very prevalent. These kinds of tools can be integrated into the software development life cycle: as part of LLM-driven commit review, integration into static and dynamic test frameworks, and specializing code understanding agents on in-house software. So from a vulnerability researcher point of view, from an exploit development point of view, from

a reverse engineering point of view, there's a lot of work here in the open-source tools that you can learn from and adapt to your needs. But just from a software vulnerability discovery and remediation perspective, these projects, these CRSes, are now open source and available for use. And, this is the point I keep hammering on in this talk, I'd really encourage everyone to check out the tools. I will tweet out a link to all of these blog posts, but the primary source you can go to for all of this is aicyberchallenge.com. It's all available on Google. And with that, I think I'm wrapped up. Sorry, Joe, for

going a little over. >> No, no worries. I actually went over and started you late, so I apologize for that. Um, can you hear me? >> Yes. >> Oh, awesome. Thank you. Thank you very much. Um, and your cats got lots of love from Portland. >> I'm glad. >> And I would hand you a nice stuffed Sasquatch if you were here, but I will make sure that you get it. And actually, sorry, did you just say will these slides be up there as well? >> What? >> Yeah. >> Okay. Okay. And we'll post them on the schedule, link to that as well. So,

um, >> perfect. So, I can just Can I just send you the slides? >> Yeah, you can do that. >> Okay. >> Okay. So, I'll have the slides. I'll get them up there. Don't worry. Um, and yeah, thank you. And, uh, round of applause.

For those of you in the room, the CTF is now starting. Workshops do have some walk-in spaces if you just go and wait behind the people who registered.


All right, our next talk is Accidental Honeypot. It is my great pleasure to introduce Corey, who spent over a decade as a full-stack web developer before realizing that breaking things was even more fun than building them. During COVID, he made the jump to the dark side, legally of course, and has spent the past four years as a cybersecurity consultant hacking web apps, APIs, mobile apps, and the occasional thick client. When he's not poking at authentication logic or accidentally discovering new ways companies leak personal data, he's racing bikes, going on long walks with his awesome partner, or hacking random gadgets in his free time. He's passionate about digital privacy, human error, and making

security just a little bit more relatable and a lot more fun. Corey. >> Thank you, Magneto. Hey, can we get a round of applause for Perri's keynote? That was awesome.

All right, like Magneto said, my name's Corey, and this is my talk about how I accidentally created a honeypot. Magneto gave a pretty good overview of me. I also go by Interpunct; some people call me Corey. I'm a full-stack web developer, and I'm passionate about digital privacy and digital security. About five years ago, I accidentally, or inadvertently, created a honeypot. Today I'll tell you how that happened, what I discovered, and what I learned. All right, so back up here. Let's go back about 15 years, to 2010. I started really thinking about digital privacy, and there were a lot of data breaches going on. So I wanted to figure out

which one of these retailers were leaking my data, because at the time we didn't have Have I Been Pwned and some of these other resources. So I learned about Gmail plus addressing. Gmail is not the only one that does this, but for those of you that don't know, you can take your email address, say for example sasquatch@gmail.com, and add a plus sign and a random string to it. So, for example, sasquatch+shopping@gmail.com, or sasquatch+newsletter@gmail.com. Email sent to either of those will come to sasquatch@gmail.com. So I started using that. There's an issue with that, though: a lot of retailers' input fields would

invalidate the plus symbol. Some would allow it, some wouldn't, so it was kind of a crapshoot as to whether it actually worked. So I started thinking about other ways, and I started chatting with some people. In 2015 I was explaining my strategy and someone said, "Hey, have you thought about a catchall domain?" I was like, "What? What are you talking about? What's a catchall email?" So I registered notgettingmy.info in order to set up my catchall domain. How a catchall works is: you create a domain, you create a mailbox there, and then

you set it up to be a catchall. So one, you have to control the domain. Two, you usually have to set up some sort of mail service; in my instance I'm using Zoho, not a plug for Zoho at all. You can set up an inbox so that any email address at the domain comes to that single inbox. This was fun, because then I could set up bestbuy@notgettingmy.info, or I could set up wellsfargo@notgettingmy.info. The fun thing was whenever a rep gets on the phone, because they're trying to solve a problem for you, and they have to read your email address out to you: "wellsfargo@notgettingmy.info? What? Is that real?" And I'd have to assure them: yes, it's a real email address. So, as I walk through what a catchall email address is: essentially, anything at your domain comes to a single inbox. Super cool. Very useful if you're trying to not give everyone in the world your personal email. Now, notgettingmy.info is a little tricky, right? It's long. People ask questions about it. So I was thinking of something shorter. In 2020, I had some time, and I was dinking around on Porkbun, or I think Namecheap at the time, and came across noreply.us.
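(The plus-addressing trick he describes can be sketched in a few lines of Python. This is an illustration, not his tooling; the strict-validator regex is a stand-in for the kind of retailer input filter he ran into that rejects the plus sign.)

```python
import re
from typing import Optional

# Gmail-style plus addressing: anything between "+" and "@" is a tag,
# and mail still lands in the base mailbox.
PLUS_RE = re.compile(r"^([^+@]+)(\+[^@]*)?@(.+)$")

# A stand-in for the overly strict retailer validators, which reject
# "+" in the local part outright even though it's perfectly legal.
STRICT_RETAILER_RE = re.compile(r"^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+$")

def base_address(addr: str) -> str:
    """sasquatch+shopping@gmail.com -> sasquatch@gmail.com"""
    m = PLUS_RE.match(addr)
    if not m:
        raise ValueError(f"not an email address: {addr!r}")
    local, _tag, domain = m.groups()
    return f"{local}@{domain}"

def tag_of(addr: str) -> Optional[str]:
    """sasquatch+shopping@gmail.com -> 'shopping' (None if untagged)"""
    m = PLUS_RE.match(addr)
    tag = m.group(2) if m else None
    return tag[1:] if tag else None
```

Both sasquatch+shopping@gmail.com and sasquatch+newsletter@gmail.com normalize to the same inbox, but the strict validator rejects them, which is exactly the crapshoot he mentions.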

And it's available. And I'm like, what? This is like the perfect email domain. Why hasn't anyone grabbed this? So I registered it, and I thought I'd do it on leap day, because maybe their systems were misconfigured and would give me an extra year, or four. I don't know. It didn't work, but nonetheless, I set that up, sat on it, and started moving all my services over to use this new catchall. And eventually I forgot about it. When I came back to it, I noticed I'm getting mail that's not mine. The first thing that came in was a pizza order, with somebody's home address and their phone

number, and then later on a survey to fill out about how well they did or didn't do in service. I thought, that's weird. What the heck? I started getting job application confirmations. Weird. Someone signed up for dental job searches, and I started getting all these dental jobs coming to my inbox. That's really weird. So all of that was very strange, and then I started getting faxes from a city government. Included in the emails were attachments, and these attachments had things like a personal injury form for someone that got hurt on the job while working for the city. Then I got another fax maybe a week later, and it included

someone's direct deposit information. So I started looking at this a little bit harder, and I was like, this is not good. And also, what else is happening here that maybe I'm not tracking? Because I'm getting trickles of emails that just kind of fill up my inbox, and I'm not really paying much attention to it. So I started creating filters and trying to automate some of this to make it easier to sift through my emails. And at that time I had the realization: I created an accidental honeypot. This was completely unintentional. As I'm documenting and tracking all of these, I can kind of see who's sending me what, when, and

where. I started getting other things. Password resets. Lots and lots of school platforms like to use my email address for some reason. I'm getting logistics bills from some logistics company asking me to pay their invoices. I'm getting internal Jira tickets, lots of Jira tickets. I'm getting service orders, and I'm getting test platform credentials: someone, somewhere, is setting up test platforms and using my domain. What the hell? So I started running some stats. From February 29th, 2020 to today, well, actually yesterday, these are the numbers: about 2,000 days, and I've received almost 30,000 unsolicited emails, which averages out to about 14 a day.
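(His arithmetic checks out. A quick sanity check, assuming the count runs from leap day 2020 to the day of the talk:)

```python
from datetime import date

start = date(2020, 2, 29)        # the leap-day registration of noreply.us
talk = date(2025, 10, 24)        # BSidesPDX 2025, Friday
days = (talk - start).days       # roughly the "about 2,000 days" he quotes
per_day = 30_000 / days          # ~30,000 unsolicited emails overall
print(days, round(per_day, 1))   # 2064 days, about 14.5 a day
```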

That's a lot of emails, and that's just very surprising to me. So then I'm like, huh, maybe other domains can be useful for this. What other noreply domains are available? So I registered noreply.tv and noreply.pro, because they were available, and I didn't get it: almost nothing came to those. So, back to focusing on noreply.us. Then, about 10 months ago, someone let go of noreply.net, and I jumped on it. What do you guys think? How many emails a day? Shout out some numbers.

>> 350! >> I like the 350. That's cool. Yeah, that's a lot of email. All right, so I ran the stats. Total number of emails: as we saw, about 30,000 for noreply.us, about 14 a day. noreply.net: 240,000 unsolicited emails, which makes 812 a day. And just in the last day, I've gotten almost 800 emails. How the hell do you sort this? How do you even begin to go through this? Comparing the averages, you can see in column three almost an 8x multiplier in total mail, and a 58x per-day multiplier. In the last day it was 56x, in the last seven days 96x, and in the last 30 days

about 46x. And it's a 9x increase in attachments. There's a lot of good stuff there, guys. A lot. So I wrote this Python tool to go through and sort all of these and get stats on them. I'm not even showing you all the stats, and I don't feel comfortable showing you the domains I'm getting email from, so we're not going to go there. So what are some takeaways here? Apparently people think that no-reply is cool and a safe thing to send mail to. It's not. Somebody can be listening there, namely myself. Luckily I'm not a bad guy. I'm not going to do anything

nefarious with this; I just think it's interesting. Dev testers love to use my domain. If you're a developer or QA tester, please don't use noreply. Someone's going to be listening. And lots of systems come misconfigured just by default; they'll need a reply-to address. In the case of the faxes, it was actually a misconfigured fax machine: every time someone faxed some sort of PII to someone, an auto-reply would kick back the mail, and it would come to me with the attachment. So check your systems and make sure they're not sending mail to weird places. This is also interesting because this is

just a domain. This isn't anything sophisticated. Anyone can set this up; you just have to know kind of where to look. And I guarantee there are other default placeholder domains out there that people are using that haven't been found yet. If you have a quarter million dollars, you could go buy noreply.com. Just a suggestion. I don't have a quarter million dollars to spend on this fun, weird side project, but you could go buy it, and it would be very interesting. I did check: there's no mail server set up there. So whoever's sitting on it has no clue whatsoever about the gold mine they're sitting on. If you are setting up test servers,

please use internal domains, or domains that you can control. And please, please don't send faxes to me. Don't do that. I don't like having to coordinate and say, "Hey, your system's misconfigured." It took me three months to actually get a hold of the city and get someone to fix it, and in that time I'm still getting mail I just don't want. So: just because it says no reply doesn't mean no one's listening. That's the final thought I'd like to give you all. Do you guys want to see some of these emails? I've redacted most of the bad stuff.

We'll come back to this slide, but let me run you through a few, because we've got a few minutes still. All right. This is one of the pizza orders that I got. I love the email here. Can you guys read that email up top there? They used harryhadagiantsheep@noreply.us. I don't know why, but that was entertaining. I get a lot of undeliverable mail, so I'm not really sure what's going on there. This is some sort of system that's been set up to send automated alerts, which for some reason are coming to my inbox. Here's another one. This is some sort of piece of hardware on the edge.

I think it's some sort of Sophos box. You can see here: spam@sophos.pl. I have no idea why I'm getting these things. Not really sure. Here are some invoices. These usually have an attachment; this one did. I'm not showing it to you, but you can see, like, "Hey, what's the payment status on this?" And that's all I got. [Applause] So, what I've got set up right now is a Python script that connects over IMAP, pulls everything down including the attachments, runs stats on it, and stores everything in a SQLite database. I'm happy to

share that tool if anyone else wants to set up a domain sometime. Here's my email and my LinkedIn. Any questions? Do we have a questions mic somewhere?
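(The collector he describes, IMAP pull, attachment counting, stats in SQLite, might look roughly like this sketch. The schema, the mail host, and the credentials here are placeholders, not his actual tool.)

```python
import email
import email.policy
import sqlite3

def open_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the (assumed) stats table."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id INTEGER PRIMARY KEY,
               sender TEXT, sender_domain TEXT,
               subject TEXT, n_attachments INTEGER)"""
    )
    return conn

def store_message(conn: sqlite3.Connection, raw: bytes) -> None:
    """Parse one RFC 5322 message; record sender domain and attachment count."""
    msg = email.message_from_bytes(raw, policy=email.policy.default)
    sender = str(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].strip("<> ") if "@" in sender else ""
    n_att = sum(1 for part in msg.walk()
                if part.get_content_disposition() == "attachment")
    conn.execute(
        "INSERT INTO messages (sender, sender_domain, subject, n_attachments)"
        " VALUES (?, ?, ?, ?)",
        (sender, domain, str(msg.get("Subject", "")), n_att),
    )

def fetch_all(host: str, user: str, password: str,
              conn: sqlite3.Connection) -> None:
    """Pull every message in INBOX over IMAP. Host and creds are placeholders."""
    import imaplib
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, (ids,) = imap.search(None, "ALL")
        for num in ids.split():
            _, data = imap.fetch(num, "(RFC822)")
            store_message(conn, data[0][1])
    conn.commit()
```

The parsing half is the interesting part: sender domain and attachment count per message are enough to reproduce the per-domain and attachment stats he shows.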

>> All right. Very cool. I was thinking there are so many avenues you can go with this. Obviously bug bounties; you probably have a cache of bug bounty targets sitting there. You can look at DMARC; I don't know if you're set up with any DMARC. There are avenues to go down there. Have you looked at the dash variants, like no-dash-reply? I mean, where to go next? GDPR violations. The gamut is unlimited, right? So, just throwing out some ideas; I'm sure this audience has others. Also, you were talking about the plus. One thing I love is when you have to call customer

support and you have something vulgar after the plus and they have to say it. So that's another one if you want to be a little snarky. I love it. >> So, part of my script does pull down all the bug bounty domains. There's a cool GitHub repo, which I can also share on my GitHub, that pulls down all the bounty-scope domains, and then my script goes through and gives me a heads-up if any of the domains mailing me fall into the bug bounty list. Most of them, almost all of them, do not at the moment. I've had a couple, but it was, like,

WordPress, and it's somebody's legit misconfigured WordPress, so I can't really contact WordPress for that. Next question. Where's that microphone? This gentleman up here has been waiting.

>> Can we just email you at noreply? >> Yeah, you can send me mail at noreply. >> I was going to say, can we just email you at the noreply address just to >> Yeah, yeah. I don't really care. You're just going to fill up my inbox with other meaningless junk, but yeah, you could send me mail there.

>> Have you considered crowdfunding for the .com? >> Oh man, I love this. Yes. I'll set up a GoFundMe, and if you guys want to crowdfund this project, that would be so rad. [Laughter] >> No, I agree, it's legit. This is a problem, and someone good needs to jump on that domain before a bad guy does. >> Absolutely, I agree. >> What's been the most malicious or surprising entity you've gotten mail from? >> How do I answer this? Malicious, I don't know if I can go malicious. >> Or a vulnerability. >> I think it depends on what the risk is. Some examples: I'm getting emails from a Fortune 500

chip manufacturer, and that's the Jira tickets. They probably don't want that leaked out there: information about all the problems they're having with this new chip they're manufacturing. Also, full disclosure, I am not shorting the company. I'm also getting emails from a court system, a family court, and yeah, I've opened up a couple of those and I won't open any more. It's dark stuff. These court records are private, not public court records. This is family court; real awful stuff happens there, and I'm getting emails from them. Um,

trying to think of some other weird examples. Mostly, a lot of it is just stupid misconfigurations. Password resets, where I get a password reset sent to noreply and could legitimately go do the reset and log into some internal system somewhere that's only for employees. But those are the two that I can think of. Next question.

>> Do you have a bounceback email? Obviously it's automated, so nobody's going to read it, but you could send it to, like, webmaster at the domain: hey, fix your stuff. >> Yes, I've set up bouncebacks. They don't do anything. At a certain point it's just like you're screaming into the void. >> They're screaming. >> Yes, they are screaming into my void. Next question.

>> I sympathize with your problem here. I have a common email address that receives all kinds of unsolicited email. It's not a domain like you have, but it's communication between teachers and parents about their children, it's rental forms, it's all kinds of things. And it's very difficult to contact somebody to fix it. >> Yes, I feel your pain. It's an interesting problem, for sure. And you're not the

first person I've heard that from. Absolutely. >> All right. Time for a couple more questions. Anyone?

>> Do you consider this data to be a liability for you? >> That's a great question. The question is: am I concerned that this data is a liability to me? And my answer to that is maybe. I could see a world where I contact this Fortune 500 chip manufacturer and reach out and say, "Hey, your stuff's broken. You need to fix it." They don't have a public bug bounty program, interestingly enough. And I could see them coming back to me and going, "Holy

[ __ ], how much data do you have on us?" and then trying to bully me with their legal team. That is absolutely a concern of mine. But thinking about this from an implementation standpoint, things are coming to me; I'm not asking for these. And in my head I'd be standing on the argument that I'm doing you a service, because I'm legitimately not a bad guy. I work for a cybersecurity firm, and I'm on the side of good. I'm not being malicious with this. But I don't want to go to court; I don't want

to have to defend myself legally. >> I love it. All right, everyone. Thank you so much for coming.
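(The bug-bounty scope check he mentions in the Q&A, matching sender domains against a pulled list of in-scope domains, reduces to a suffix match on domain labels. A sketch, with a made-up scope set standing in for the real GitHub list:)

```python
def in_scope(sender_domain: str, scope: set) -> bool:
    """True if the domain, or any parent domain, is in the bounty scope,
    so mail from jira.example.com matches a scope entry of example.com."""
    parts = sender_domain.lower().rstrip(".").split(".")
    # Never match on a bare TLD, hence len(parts) - 1.
    return any(".".join(parts[i:]) in scope for i in range(len(parts) - 1))

# Hypothetical scope set; the real one would come from a bounty-scope repo.
SCOPE = {"example.com", "corp.example.org"}
```

Running this daily over the `sender_domain` column of the stats database gives exactly the heads-up list he describes.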


Electromagnetic spectrum operations, or EMSO, with extensive experience in drone-based red-air engagements. He currently serves as a security consultant at SpookSec and was previously the lead offensive security engineer at Phoenix Technologies. He holds several certifications, and I apologize for butchering them, including DESOC, DOCP, Cvisa Siva Suoa, Oispa, and FAA Part 107. Alec, take it away. >> Thank you, Magneto. Thank you, everybody. All right, I do have a couple questions for you all. Who here has flown drones? Who here wants to fly drones? Yeah. All right. Who here is scared of drones? You should all have your hands up. All right. So, good afternoon, BSides, and welcome to Drone Blind Spots. Today we're

going to be talking about pentesting the airspace above critical infrastructure. So we need to define critical infrastructure. CISA, the Cybersecurity and Infrastructure Security Agency, defines these 16 sectors as critical infrastructure. All of them are vulnerable to drones, some more than others. The ones here on the right are our supply chain, our water treatment facilities, our power grid, our transportation systems, chemical refinement, and our dams. Speaking of dams, who here has been to Henry Hagg Lake? Well, this is the perspective of a dam technician. The photo is quite beautiful, but I would ask you all to see if you can find anything strange about this photo.

It's rhetorical, but there is an anomaly here. So, what could that be? Is it a bird? Is it a plane? Well, if this dam had been equipped with the most basic of DTI systems (that stands for detection, tracking, and identification), it would know immediately that this is a Chinese-made DJI Mavic 3 Pro. And that drone is really there; I'm flying that drone. This is what the drone sees, from about 1,500 feet away. That's very far for where a drone would normally take photos. This is 7x zoom: we can now see our technician standing over there near the dam spillway. This is 28x zoom: our technician is now waving at us. Well, that's not a dam

technician. That's my coworker Noah. And I am Alec Hunter. I'm a cyber-physical security consultant for SpookSec. I specialize in IoT and physical pentesting. I'm a licensed locksmith, private investigator, and Part 107 pilot. I provide drone services to Oregon and beyond, and have been doing it for 10-plus years. I am pretty decent at following the rules, but I don't always like to. And the reason I talk about rules is because getting that photo was not easy; there are a lot of rules around drones. So, that blue X is where my drone was when I took the photo. There are some problems with our LLZ; that stands for launch and landing zone. I

can't launch environmentally, because I'm under trees: if the drone loses connection, it will try to come back to me, and it will land on the canopy. How am I going to get my drone? I can't. So I have to find a new LLZ. In Washington County, I can't fly a drone at any of the parks: can't fly over, launch, or land. I also can't set up an LLZ on private property without permission, and that's really difficult to get sometimes. I can't stage my drone on a shoulderless roadway, which means I can't take off on this stretch of land in the forest. In Oregon, you cannot fly a drone over critical infrastructure. Federally, you cannot fly a drone near a

dam: kind of the same restriction. That's 400 feet on the ceiling and on the sides. However, if you look at the bottom right there, you'll see a little blue triangle. I was able to take off there. That is okay as long as I'm following the Federal Aviation Administration's rule sets. Let's talk about more rules. If you're going to fly a drone, you have to be flying under one of these rule sets: TRUST or Part 107. Part 107 requires an exam; it's a very rigorous and arduous process. But for the most part they have the same rules, and everything up there is very basic. We need to register the drone with the government,

mark it with something like a license plate, and the drone has to have Remote ID. That is basically a Bluetooth transmission that tells you where the drone is and who's flying it; anyone around who has a receiver knows there's a drone flying, and who is flying it. There are pre-flight checklists you have to do, and you always have to maintain visual line of sight. You can only fly up to 400 feet, with the exception that under Part 107 you can do it over a structure: 400 feet plus the structure's height. You have to respect temporary flight restrictions. But the cool part about Part 107 is you can make money. You also get access to waivers.

The money part is how you pentest with a drone. So, there are a lot of rules here. Who doesn't follow rules? Adversaries don't follow rules. Why would you? There are so many; it's kind of ridiculous. So, this is kind of where critical infrastructure sits with defenses. There are laws and rules. There's premitigation, there's access control, there's detection and monitoring, there's triage, and there's response. Mitigations exist for both a ground and an aerial threat. The difference is that an aerial threat is the capability of the drone times the intent of the pilot. It's a little different than doing physical pentesting on the ground. Now, here's the gap. That red is something none of these facilities have, for the most part,

unless it's something like a military defense base. Some zones were smart enough to get a geofence, which you can apply for, and if you're flying a DJI drone or any of these other name-brand commercial drones, they won't let you enter the no-fly zone; their firmware doesn't allow it. But for the most part, your risk as someone on the ground is extremely high: you're probably going to get caught, you're probably going to go to jail. With a drone, the pilot's risk is virtually zero. You're not going to get caught. And I mean, almost no one gets in trouble for flying drones in general. So even if you're breaking all those rules I just taught you: okay,

prove it. So CISA has said that we should probably start considering implementing drone detection; it's part of their air-aware guidance, and that's what we're going to talk about next. This is the aerial defense program life cycle. The first thing we have to do as operators is establish legitimacy. It is much easier to do this when you work for a company that's already providing physical pentesting services, because they can just add aerial assessments as something to offer right now. Getting buy-in is its own talk; I had to cut it from this one because it's not long enough. But this is the definition: get the right

people to say yes to your plan, your terms, and your resources, in writing. If we can get past that, we're in the site threat modeling and pentesting stage. That requires an initial assessment: we perform some scenarios and we give them the pentest report. If they like what they see, if your recommendations are good, they might implement DTI or C-drone. C-drone stands for counter-drone. We have to tune those systems, and then we have to retest all the scenarios we already did; we'll talk about that. But the end of the life cycle is basically constant red-air operations. This is the reality of someone who works as a W-2 employee versus someone like me, who

started as a 1099 trying to get buy-in from a lot of different companies to hire me to do this. It is endlessly difficult to be a 1099. I will tell you right now that if you work as a defender or a pentester, you're going to have a much easier time convincing your leadership to let you be the drone guy than being a drone guy and trying to get a company to hire you to do pentesting. So, the first thing you'd have to do is form an LLC. Then you have to get general liability insurance. You have to take on the liability of how you

store data. You have to do quoting, negotiation, contracts; the liability is really on you. Then you have to coordinate your travel logistics. You have to go out of pocket on getting the Part 107. And the authorization, again, is the hardest part. But at the end, you need a drone. You need a lot of drones; there are different drones for different missions. Let's talk about some of those drones. But first, we're going to define what a drone is. "Drone" is an informal term for a UA, an unmanned aircraft. Basically, a small unmanned aircraft is under 55 pounds. And that last S that we append on the

end means system. The system is these: these are what are colloquially known as drones, the DJI and all those other ones we see. This is what it looks like. I made it color-coded so you can understand the parts a little easier: an airframe, a power system, a propulsion system, guidance, navigation and control, payloads, and a companion computer. These are the ones I recommend to anyone in this room who wants to try to get into this. If you are a physical pentester, you should own a drone like this: the DJI Mavic, the Autel EVO, the Skydio 2, and the Parrot Anafi. I want to warn you, there's a little

warning symbol next to the Chinese-made drones. It is very likely that in the United States of America, in December, Chinese drones will be banned. I don't know how, but some sort of airspace restrictions will apply. These are the drones that counter those drones. These are three examples; there are so many counter-drones now. But the Skydio X10: if you're a defender and you're trying to create an aerial defense plan, this is the one you want to get your company to buy you. This is the one you want to pilot. It's American-made, it's going to be approved, it's compliant, and it's really cool. The other two are not really going to be mentioned, but I'll explain what

soft-kill and hard-kill mean. Soft kill is: I want to capture the drone flying in my airspace and do forensics on it. Hard kill is: I don't like that drone in my airspace; I'm going to destroy it. So, Anduril's Anvil here is actually a drone designed to crash into a drone in the sky. It does it very accurately; it's quite cool. Every defended site should have one. And, something I should have said back on the counter-drone slide: sUAS is the small unmanned aircraft system, and you should staff at least one response pilot. So, if you're a defender, you should be the guy at your infrastructure site that flies a drone

because you have only one thing you can do as a defender against a drone in your airspace, and that is follow it back to whoever is in your airspace. The cool thing about that is that when a drone enters your airspace, it already has limited battery life left. So if you launch your drone from your site, you can basically follow it back to wherever it goes. And that guy wants his drone back; these are expensive devices, right? So you can take photos of them. You can get the car that they drove in. It's really important. This is really the only defense you have. You can't hijack them, you can't hack them, you can't jam them; that violates a lot of other

rules. But it's very important. These are called do-it-yourself drones. Now, who here has ever built a drone? Great. So, a Whoop is an indoor drone. I mean, it flies outside too, but they're primarily used for indoor stuff when you're doing red air. The racers are what you would expect to see in Ukraine and Russia. And the heavy-lift drones are your heavy-payload drones; you can do a lot of cool stuff with those. And I have three drones I'd like to show you today that I've built. This is a vulnerable drone. I'm prefacing the dog-and-pony show here because this is a huge part of getting buy-in: when they do agree to

meet with you, you need to prove to them that you are the guy, and you have cool things to show them. This is sort of a wow-factor thing. The vulnerable drone here is in an RF chamber at Novas Labs; they're local to here. It's running ArduPilot and PX4 vulnerabilities, and it has a companion computer that you can learn hacking on, and you can learn how to reverse, jam, and hijack the drone from within these chambers. So it's a really unique drone for that; it's a training drone. The next one's also a training drone: a threat-effects training drone. This does not bomb anybody; I'm avidly against warfare drones. But this

one is going to simulate what it feels like to be someone out in the field and have this chase you. Being hunted by a drone that can go 90 miles an hour is not fun. So, it never collides with anybody or anything, and it's used just to teach people that psychological effect of being chased by a drone. This is an AI recon drone. This drone flies itself. It also collects all the signals, processes them, and triages anything that's interesting. And it acts as a mothership: on the bottom there is a drop mechanism that can drop a small Whoop, and the drone acts as a relay. So the control link can be

shooting to it through a Yagi from miles away, and you can actually get the feedback in your FPV goggles and fly the small drone until the battery is out. So you can do a lot of things with drones. That's kind of the point. When you do get buy-in, the first thing you're going to do is an authorized initial assessment. This is what we call a site threat model. You're going to be looking for observable and actionable items on the site and anything in between. I'm not going to read them all to you; take a second to read over them while I explain what a threat profile is. Really, what we're trying to do here is

we're taking that recon drone, that COTS drone that you all purchased as physical security practitioners, and you're going to go and take photos of all the critical assets on the site. You're doing this from a competitor or an espionage perspective. The reason we're doing that is because governments want espionage, and sometimes companies do too; competitor espionage is a real thing. China flies drones over our critical manufacturing sites all the time to see how we do things. So its capability is prosumer-class, like the DJI, and the intent is structured. This is a classification by CISA. The scenario is always going to be an ISR, an

intelligence, surveillance, and reconnaissance mission. And then we're going to talk about the profiles here. This is the matrix. It is non-exhaustive. But basically, your Temu toy class actually serves a purpose in drone pentesting. They can be used as a canary to see if the site has any detection or counter systems. And in the case of a sophisticated threat, they would just use them as decoy swarms, because a swarm of cheap drones is very effective. A prosumer might be a journalist or a competitor. A sophisticated actor would use it as overwatch for filming while their custom-made drones do the work. I'm sure we've all seen the horrific videos from

Ukraine. DIY is going to be, you know, maybe just a guy racing his drone at some site. That happens all the time. Or organized crime delivering contraband into a jail. It could be an APT or state actor. This is a scenario platter. Think of it like this: I'm holding a platter, and I'm going to figure out what threat profiles apply to your site, what I'm going to pick from these scenario test cases, and what I'm going to say you should test for. You're going to pick them, and again, I'm not going to go through all of these, but there are three kinds: intelligence, transport, and chaos. These are the most effective types

of tests you can do with drones, and when they approve any of them, you do them, and then you deliver a pentest report. Your pentest report looks very standard. There are two things that stand out, and that's a site threat model, which is usually aerial images from the drone with a bunch of markings. I put a very simple one over here from the Scoggins Dam at Hagg Lake. Basically, the placement of these systems is based on findings, the site threat model, and the scenario outcomes. I'm recommending for this site three RF detection nodes and one acoustic. The acoustic would sit on top of the spillway, because if a drone flies near that, then we know,

and the RF will also get bearing, so we'll know roughly where the drone is positioned over the dam with three nodes. This is how DTI, detection, tracking, and identification, works. You layer them out from the site. This green circle is outside of our perimeter. We still want to know whenever an aircraft is going near us or over us. Radar detection is an expensive unit that I would not recommend for most places, but in rural areas it's definitely effective. The next one would be a radio frequency DTI. This is the most effective one. This is absolutely the one that should be recommended basically everywhere, and they're very cheap. They

are tuned to detect COTS drone signals from the control link or the video feed going back, and it identifies the protocols very specifically for you. This is what we call first opportunity detection. These are called primary detection systems; it's the earliest moment your system could have detected a drone. We're going to talk about secondary now. You're also going to want an optical DTI that's going to identify the drone that's flying into your airspace. So we've detected it way out here. It's coming closer. Our radio frequency picked it up. It goes, "Hey, that's probably a drone, but I don't know. The threshold's not totally

confident." Then it flies into your airspace, where this optical camera can confirm that this is a drone. It looks like a drone; it's definitely classified as a drone. The last one is acoustic. They're not too effective, honestly. They're not great, but they are good for rural areas. They basically compare sound profiles, so the propeller noises and things like that. So tuning these systems is really interesting, and it's simple. If you buy this COTS drone, it comes with automated flight capability. They all do. Basically, you open up the app and you create the nodes or the shape that you want to make. You can do squares, you can do circles, you can do

triangles, whatever. The point is it needs to look granular in the C2. It needs to look like the shape on their end. And we want to focus on the approach corridors. An approach corridor is where you're flying in from a place they wouldn't expect. A lot of the time, when you take off, you want to fly around the perimeter and come in from the other direction; these long-range systems can detect that, which kind of screws up your plan. And the final thing we'll do is drone skywriting. We're not really skywriting with any smoke. We are writing a word with an automated flight

demo, test, whatever. If you can read the word, then you know the system is tuned. The next thing we have to do is retest the scenarios. This is really important. Scenarios are supposed to be completely repeatable. They are basically a drone classification, a pilot classification, and all of the other environmental effects that were in place during the initial test, including the wind, the light, all that stuff. Blue team needs to be able to detect the drone, track it, classify the platform correctly, produce a succinct timeline of events that matches up with your drone's flight log, and not pop any false alarms. Optionally, if they have someone who's already

employed or deployed as a counter-drone responder to find you, the operator, they would have that person deploy during these scenarios, because it's really important to make sure that all of it works. You detect the drone, your guy deploys, you go find the operator, you report it to law enforcement. Red needs to execute the profile exactly, maintain signature discipline, stay within the rules of engagement, not cause any accidents or safety hazards, and provide clean artifacts. Artifacts are things like the video feeds and whatever payloads were on the drone. If you do all that, you're now prepared to begin red team operations. Red team operations are wargaming. If

anyone's ever done tabletop exercises on defending assets, that's it: action, reaction, and counteraction. When we do that, usually people disagree with you. They say, "No, that would never happen." And then you make an argument. That argument leads to establishing rules of engagement for this particular test. You go out and you make your red team. Your team is a specialized group of four to five people; it could be more. Under Part 107, it's usually three or four. You're going to develop custom kits, hardware, payloads, and drones. And you're going to emulate structured and sophisticated TTPs. The goal is to pentest and evade the tuned DTI and the counter-drone

systems. And that's my speech. [Applause] So, I'm not going to go on about what's really going on here. There's a little bit of shilling, of course. But the main thing that I want to tell you about today is getting your Part 107. In the middle here is a code that I was given by Drone Pilot Ground School. They are the people I got my Part 107 training through 10 years ago. I highly recommend them. It's super cheap; it's like 150 bucks, and you get 50 bucks off with this code. I get no kickback. This is all for you. There are 25 of them. There's no fear of missing out. Do it if you want. Anyway,

Q&A time. Anyone have questions? >> How do you find all the prior? >> It's a nightmare. And that's kind of part of it: you need to be good at puzzles. You have to have a red team mindset. You need to be able to go enumerate all the regulations, and you have to have the right context to figure out what you're not seeing, because there is a lot of that too. It also depends on what type of site you're pentesting, because commercial facilities also exist. Things like Intel, things like Microsoft's campus. They're not as regulatory heavy. But I have never pentested a military base, so I don't know what that looks like.

>> I have never done that. And they are allowed to use counter-drone systems, the kinetic kind, where they shoot it down with a laser or have a drone that explodes on it. Nuclear facilities are one of the only places that are allowed to do that. Anything else?

>> How effective are the geofences that, like, DJI puts out? >> They're extremely effective for DJI. They are not effective at all for anything else. Even the Temu drones don't get knocked out. >> Yeah. How responsive are they to putting a geofence around something? I mean, do they have certain criteria for a government agency? >> Almost no one does it. Even if they have the guidance from CISA, no one does it. >> Yeah. >> That's why this presentation is a call to action, guys. I want you guys to help me do this. I need defenders and pentesters to start doing this. Like, I

don't want my power grid going down, you know? Like, this is a big deal. This is a future threat. >> Anything else? >> The DJI drones. Does it mean that if we happen to own one, we're not going to be able to fly it after December, or are you saying get it now? Because >> I don't know for sure. Here, let me explain it to you. So, last year, the Trump administration basically said, or the FAA too, it was like a joint thing, they're like, "Hey, we want to have American-made products as our drones, right?" So, there's a bunch of American companies. And is one of

them. And what they were saying is that Chinese drones are a threat. You know, they send data back to China. That's the claim. So the US government gave them one year to test the DJI drone systems for these issues. They still haven't done it. So it's by default just going to get banned, and there's not enough time to do the test. It takes multiple months for this type of testing. >> Yeah. Well, that was my follow-up. Do you have any evidence of anybody who's ever done reverse engineering on the firmware and the equipment on the drone to see if they have phone-home equipment

like has been found in solar inverters? >> The thing about DJI drones is they are the most locked-down drones ever. I have tried my best to hack them. I've effectively hacked everything before the Phantom 4, and I own a lot of DJI drones. Over 86 of them. >> Thank you.

>> I think I have time for one more. Anybody? Oh, sorry. Yes, over here. It's extremely effective in rural areas where there are no civilian noises, no droning. You know, animals don't make drone noises. It's not a natural noise.
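The propeller sound-profile comparison the acoustic nodes perform can be sketched with the Goertzel algorithm, which measures the energy at a single frequency. This is a toy illustration, not any real acoustic DTI product: the 190 Hz blade-pass tone, the reference bin, and the threshold are all assumed values.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Energy at one frequency bin via the Goertzel algorithm, the
    cheap building block an acoustic node could use to score
    propeller tones against background noise."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def looks_like_drone(samples, sample_rate, blade_hz=190.0, ratio=10.0):
    """Hypothetical classifier: drone-like if the assumed blade-pass
    tone carries far more energy than an off-target reference bin."""
    target = goertzel_power(samples, sample_rate, blade_hz)
    reference = goertzel_power(samples, sample_rate, blade_hz * 2.7)
    return target > ratio * (reference + 1e-12)
```

A real system would track several harmonics and adapt the threshold to ambient noise, which is why, as noted above, these sensors work best in quiet rural areas.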

>> All right. Thank you. [Applause] [Music]


All right, it is my great pleasure to introduce Darren, who is a cybersecurity architect with over 20 years of hands-on experience across network security, cloud security, offensive security, and zero trust architecture. His career spans roles in security engineering, penetration testing, and most recently leading secure access and zero trust initiatives for complex enterprise environments. Darren specializes in secure access service edge, or SASE, deployments, ZTNA validation, and building adversary-informed testing frameworks that bridge the gap between marketing promises and real-world security enforcement. He's passionate about helping both defenders and assessors make evidence-based decisions in the face of growing vendor noise. Darren. [Applause] >> Thank you for that. Hello, everyone. Good afternoon. In cybersecurity, we don't get to trust by default.

So, why do we let vendors earn our trust by just slapping the label or the term zero trust on their products? That question stuck with me, and it's not just theoretical. It's the catalyst for a lot of the content that we're going to cover today. "How Zero Trusty Is Your Network Access?" was based on a research paper that I wrote a couple of months ago, and it's published as a white paper with SANS. The overall approach grew from a specific frustration that I had a while back. I was asked to recommend a zero trust network access solution for a large organization that was going through its own digital transformation. On paper, every vendor that they looked

at claimed that they supported zero trust. The marketing was strong. It was slick. Everyone was zero trust capable. But when I started to look deeper, I couldn't find a consistent way to measure whether these solutions actually enforced zero trust principles. Principles like least privilege, device trust, data inspection, and assumed breach. So it became clear to me very quickly that without a replicable framework to test and compare the products, it was nearly impossible to separate the meaningful capabilities from the marketing buzzwords. So that's what this talk is about. That's what we're going to cover today. I'm going to walk you through a hands-on testing framework that maps directly back to zero trust principles and show you how I used it to

essentially evaluate several zero trust network access providers in a controlled environment. And before we do that, I'll at least give you a little bit more context as to my background and why I'm here speaking. But first of all, Magneto, that was fantastic. I appreciate that. If you're available in your spare time, I may have to hire you as my intro guy. As mentioned, I am currently a security architect for a global reseller. A lot of my time is spent trying to solve customer security problems, prescribe the right solutions, and provide good architecture. As you can see here, though, a lot of my background is also in education. I thoroughly enjoy punishing myself by

going through a lot of industry certifications. I'm a glutton for punishment. And I'm also a huge flag hoarder when it comes to CTFs. I love CTFs. I am absolutely that guy that holds on to my flags, waits till the last 30 minutes for the scoreboard to go down, and then submits everything and rises a little bit up. So feel free to throw all that hate at me. And like a number of us, I'm also a big motorcycle enthusiast. I love anything that can go fast. I love being able to go and race. I would say anything that can go fast, get me hurt, or get me in trouble with the law, I'm generally going to be

a fan of, which is probably why I'm a natural fit in cybersecurity. So, with that additional context, let's talk about what we're going to cover. The overall agenda for today is we'll start off by reiterating the zero trust marketing problem space, which will set the stage for understanding how we need to focus this. We'll talk about and help define what zero trust actually is, and I'm just going to do it at a quick-and-dirty high level. We don't have the time to dive deep into zero trust, but I want to make sure we're all on the same page, we're speaking the same language, and you understand some of the terms and the

concepts that I used to build my framework, my methodology, and my approach to doing this. We'll of course go into the lab architecture. What did I actually build for the lab? How did I test, and what did the environment look like? We'll break down what each one of the tests was and talk about how they were relevant, how they mapped back to the zero trust architecture. And ultimately, we'll get to the point of why you're all really here. You just want to see the results, right? You want to see how these players stacked up, where they were strong, and maybe where they were deficient. We'll try to round things out at a quick high level: how

can we mitigate residual risk if we have it, and what are some recommended mitigation controls? And then of course we'll close things out with key takeaways, and I'll touch on areas where this can still be expanded and how you can leverage this for your purposes in your environment. Hopefully that sounds like a good plan. Before we dive in deep, I would say put your seatbelts on, put your helmets on, because this is going to be a lot. But I do want to get a quick poll. I want to tailor this a little bit to how familiar everyone is. Just a show of

hands. How many folks are confident and comfortable with the concepts of zero trust? Oh, I love it. Okay. How many folks are pretty confident and comfortable with zero trust network access, or have zero trust network access in their environments today? Great. Okay. So, I will breeze over some of these components. I'm not going to try to bore you with knowledge that you already know, but I want to make sure anyone that's not familiar at least understands what we're talking about. First and foremost, for those in the back, I apologize. I wish there were more screens in the back. There are going to be some charts and some pictures that come up. You're not going to hurt my feelings if you get up and

want to come closer. Fundamentally, zero trust network access: when I talk about that, there are a lot of components that can go into it, but what I'm describing, and what's purposeful for this conversation, is a hybrid workforce or remote workers, and replacing that traditional remote access VPN. What I'm going to be discussing is all going to be focused around remote workers on their laptops with an agent installed that connects up to the zero trust network access provider's cloud service. From there, the traffic is tunneled. It's proxied. Ideally, it's inspected. It's compared against policies. And assuming it's allowed, it's going to travel back down to our data center or to our

location where our applications reside. Almost everybody in this space will provide virtual machines for you to deploy within your environment, which should be adjacent to the applications that your users are accessing. So, it's quick and high level, but it paints a picture of what we're going to be describing. Then we get into the marketing problem that I was describing. I took some excerpts from the web pages of the vendors that I selected and tested, and they're all advertising the same thing. "Achieve true zero trust security." "Built on zero trust principles." "Zero trust is built in." "Seamless zero trust connectivity." So, if you look at these and you look at their data sheets and their white papers, it makes it very

difficult to determine what the differentiators are. And if they come out and they're pitching to you and going through their pitch deck, I would give you a bet: take their decks, remove the logos, remove the colors, and they're all pitching you the same thing. They're making the same promises. They're saying that they can deliver the same capabilities. They're all basically synonymous. And that's why I felt like there needed to be some comparisons, there needed to be some evaluations. So I started with zero trust as a whole. A lot of you are familiar, so I won't spend too much time going down this rabbit

hole, but John Kindervag from Forrester coined the term back in 2009-2010. From there, I would say a more recent architecture and reference is NIST. I'm sure most of us are fans of NIST; we like standards. They built out Special Publication 800-207. It's a long, dry read. If you like technical material, knock yourself out. But ultimately, it breaks down zero trust and says here are seven tenets and principles that should guide you as you go down this journey, and you should be applying that to the different information technology pillars. You should be doing these zero trust things in identity. You should be doing it in devices, in networks,

applications, and workloads. And of course data around all of that. We should be getting visibility. We should have analytics to understand what's going on. We should be automating our manual tasks, and we should be orchestrating communications and capabilities between disparate products and technologies. Ideally, we have governance wrapped around all of that to help give that guiding light on where we need to go. Ultimately, without going into the tenets, this is all about never trust, always verify. But I needed to build upon that, and this is where I started to pull some test criteria and what I wanted to use for measurements. The Cybersecurity and Infrastructure Security Agency, CISA, it's a mouthful, published the Zero

Trust Maturity Model. The most recent one you can see here is version 2.0. The document itself goes into a lot more detail, but this starts to paint a picture: okay, you say you're doing zero trust and you're mapping back to NIST, but how do you measure your progress in your journey? How do you measure where you're successful? How do you measure where you still have room to improve? And so they provided some categories that you can fall into for each one of the pillars. Are you in a traditional state? Are you still stuck in the '90s? Are you starting your journey and in the initial phase? Are you more advanced, having started to

implement a lot of zero trust capabilities? And of course, are you in an optimal state? Because in security, as you all know, we're never done; it's just, are we optimized? So I started to handpick components from these. And what we're going to talk about when we get to the tests is that I looked at things in the identity pillar like differentiated access and multifactor authentication. I pulled items from the devices side: okay, can we measure device trust and device compliance? What about networks? Can we do some network differentiated access? Can we provide some network-based protections? From the application side, how are we segmenting? How are we providing application visibility and protection? And then data. I took some standard

bait and tackle: let's just try some data loss prevention capabilities and some data inspection. Once I had picked out a sampling of test criteria, I then needed to decide who I wanted to measure this against. There are a number of players in this space. Everybody and their mom is advertising some of these capabilities or offers a solution. So, I wanted to test my hypothesis: could a framework actually provide metrics and results that we can use to make decisions? Can it be effective? I did anonymize the vendors that I chose because I didn't want anyone here to have bias in the results. I also didn't want to have anyone's name dragged through the mud. These are

constantly, quickly changing environments. All of this data was from just two months ago. I would say it's still very accurate and relevant for where things are at today, but if you were to look at this a year from now, results may change. And this is why I think it's overall more important to have a methodology and an approach that you all can take and leverage in your environments as you start to look at these solutions. So there are a few sources, and I have my opinions on who the leaders in the space are. I grabbed Gartner, because ultimately we have Gartner to blame for the secure access service edge terminology and for zero trust network access, and I

would say they're directionally accurate when it comes to looking at the leaders in the space. I think it's reasonable for you all to assume I probably picked some of the leaders that they like to call out. So I grabbed three of those, but I wanted to have a variety. So I also grabbed a niche player, someone that may or may not be based in the Pacific Northwest, may or may not be a developer of operating systems that we're all intimately familiar with, and is not necessarily a huge player, may not even show up on some of these charts, but has good market share. And then I wanted to also compare to somebody at the other end of the

spectrum. So I grabbed a player that's more home user, lab user, small-to-medium business oriented. And let's see how all five of them stacked up and how well they could show whether this hypothesis, this framework, was effective. So let's talk about the lab environment and what was actually built. Hopefully everyone can read some of this, but I will summarize it. I had to create two environments to test all of my criteria. One was an untrusted environment, where I essentially built out two Windows 11 laptops, called client01 and client02, with two users, user1 and user2, on the corresponding machines. Client01 I used as more of a compliant, secure device. Let's

turn all the security controls on. Client02, I said, no, let's work under assumed breach. Let's reduce some of the security capabilities that it has; let's maybe equate that to a compromised machine. In the untrusted scenario, I was replicating a coffee shop, or a home situation where we're working from home, or where you're at BSides and you really can't trust the other people around you, right? They are absolutely people that could potentially attack you. That is untrusted. On the data center side, I built out three servers. One was that ZTNA proxy, which I would change out for each one of the vendors, and that was basically just built with their custom images. I'd also built out a Windows

Server 2016 to act as a file server, and then I'd built out a Linux Ubuntu server running DVWA, or Damn Vulnerable Web App, to emulate a vulnerable web application. And I called those file01 for the Windows server and web01 for DVWA, all sitting behind a firewall. In the trusted environment, the data center stayed the same. I didn't change anything there. But for emulating a branch office, a remote office, a headquarters, I said, "Okay, we're sitting behind a firewall. We're in a trusted network. And ideally, we're going to have other local applications and services we need to access." So I built out another Ubuntu server, threw Apache on it, and called that web02, to basically be a localized resource in a

trusted environment. So this paints a picture. This gives you an idea of what my basic testing environment looked like. Now we needed to actually look at what we were going to test. I started with the identity pillar, and I tried to start with something that I felt everyone should be able to accomplish, which was just differentiated user access. If I have user1, who is a more highly privileged user, and I have user2, who maybe is somebody I don't trust as much, can we provide different levels of access? Specifically, my success and failure criteria were: user1 should be able to access both applications, the file server and the DVWA web server.
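As a sketch, this differentiated-access test, together with the step-up MFA check described next, can be written as a tiny policy table. The user and server names come from the lab; the default-deny behavior and the "step-up" verdict are my assumptions about how a generic ZTNA policy engine behaves, not any vendor's actual API.

```python
# Hypothetical policy table mirroring the lab: user1 reaches both apps
# (the web app only after MFA step-up), user2 reaches only the file server.
POLICY = {
    ("user1", "file01"): {"allow": True,  "step_up_mfa": False},
    ("user1", "web01"):  {"allow": True,  "step_up_mfa": True},
    ("user2", "file01"): {"allow": True,  "step_up_mfa": False},
    ("user2", "web01"):  {"allow": False, "step_up_mfa": False},
}

def evaluate(user, app, mfa_passed=False):
    """Return 'allow', 'deny', or 'step-up' for an access request."""
    rule = POLICY.get((user, app))
    if rule is None or not rule["allow"]:
        return "deny"            # default-deny: no rule means no access
    if rule["step_up_mfa"] and not mfa_passed:
        return "step-up"         # challenge before granting access
    return "allow"
```

Note the default-deny on unknown requests: a broker that fails open on apps with no matching rule would itself flunk a zero trust evaluation.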

User2 should not be able to access the web application; they only get access to the file server. So pretty fundamental, pretty basic. Everybody should be able to accomplish it. And then I added another identity factor: let's integrate with MFA. In this case, I used Microsoft Authenticator. Pretty standard; almost everybody leverages it at some point or another. And let's do step-up authentication. We know that we have a vulnerable or more sensitive web app, so when user1 attempts to access DVWA, we should be able to prompt them for step-up authentication and say, you know what, we want to trust that this is a legitimate request. Let's have you go through Microsoft Authenticator and

validate that request. Then you get access. If user1 was able to access the web server without passing multifactor authentication, or the solution lacked the ability to do that, that's going to be marked as a failure. Then we shifted over to the devices category. Again, I tried to start out with some relatively easy things that everybody should be able to accomplish, keyword there being "should." I used BitLocker to configure disk encryption on client01 and kept it disabled on client02, and said, okay, let's determine if compliance and device trust can be established by the ZTNA solution validating whether or not disk encryption is configured and in action. Then we shifted to endpoint protection. Again, I just leveraged Windows Defender,

pretty straightforward, very common. And let's start out with the basics: can you even detect and evaluate compliance status based on whether Defender is present? But then let's build on that. Let's look to see, okay, Defender is there, but is it actually running? Is the process in memory? Is it enabled for real-time protection, real-time enforcement? Then I expanded one step deeper and said, okay, let's start to do some adversarial work. So Defender is there, and we've validated that its real-time protection is enabled. In some situations, and I'll just use a basic one, let's say I have some malicious tooling that I need to leverage, and for whatever reason I can't or I choose not

to obfuscate it. Another approach, aside from disabling real-time protection, because I don't want to throw up huge red alarms for the SOC; I want to at least try to fly a little bit under the radar, even though I'm not necessarily using all of my tactics, is to just remove the Defender signatures. So we keep it running, it's doing real-time protection, but it doesn't have signatures to match anything against. So I can load malicious tooling on there and fly under the radar a little bit. I just leveraged the MpCmdRun executable for that and removed the definitions from client02. Then, aside from determining device compliance status and evaluating

successes there, we moved on to networks. And this is why I had to build out both an untrusted and a trusted network. Starting with the basics: okay, if you're in an untrusted environment, if you're at BSides in Portland, we should be able to give you access to your private apps, but we should also be able to block. We'll detect that you're in an untrusted environment, and we should be able to block any local area network traffic, so that people at BSides can't attack you, and you're not going to respond to any of those attacks. The solution should be able to identify that and ideally should be able to enforce that. If not, we're going to mark

that as a fail. But then let's flip that and say, okay, now you're at a trusted location. You are at a headquarters or a remote or branch office. You do have local resources that you need to access. So I'm going to expect that ZTNA solution to be able to identify that and provide you access accordingly. So let's test both of those sides. And then going back to some more adversarial work: okay, if I compromise someone — a remote user that's using zero trust network access for their apps — one of the first things I might do is some recon. What do I have access to? So in this case, kept it easy, kept it

simple. I threw nmap on the Windows laptops and I said, okay, let's start scanning the services across the ZTNA solution. Ideally, if you're successful, I'm not going to be able to enumerate ports that are open, and I'm not going to be able to fingerprint those services and the versions that they're running. If I can, that's going to be a failure. Then we moved into the application side. Here I expanded a little bit: okay, let's just start with some fundamentals. I expect these solutions in today's world to have some layer 7 capabilities. We should be able to see application protocols. In this case I picked out SMB, since we have the file

server. So I expect a success is going to be that we can log and get visibility into the SMB protocol and the traffic itself. If it can't, and it doesn't recognize SMB — if it's limited to layer 4 and can only see the port and call that SMB — that's not good enough. But then let's build on that. Let's say, okay, you can see layer 7. Can I start to build policies around layer 7? Can I build a rule that says user one and user two, or those groups, are allowed to access the file server using SMB, and I don't care what port it's running on? So the success criteria there is that we can actually build that

policy with real layer 7 conditions, not just based upon port mappings. And after that I started to have a little bit more fun. I specifically picked out Server 2016 because it's still vulnerable. It is still vulnerable to MS17-010, commonly referred to as the EternalBlue exploit. Let's test fundamental security inspections and vulnerability protection. And I threw Metasploit on the Windows machines. Super common. I know we all love running Metasploit on Windows. And I used the MS17-010 psexec module to essentially try and exploit the Windows server. In this case, I wasn't going to go for a reverse shell. I just wanted to see: could I issue commands and have them executed

as SYSTEM on the Windows box and create a local file on the C drive. We can use that to test whether or not there is protection in place, whether or not the exploit is successful, and whether or not it can be alerted on and detected. Then we got into the web attacks. I threw DVWA in there for a purpose. I want to see what kind of web protections we have against the OWASP Top 10. I just grabbed a sampling and said, okay, let's test for some local file inclusion. A good solution should be able to pick up some very basic, fundamental items — easy, let's just check for /etc/passwd and look at what accounts are on there. And then I put one

more test in there: okay, I create a local text file — in this case I specifically put it under the /var/www directory — that still wasn't directly accessible from the web application, but with local file inclusion I would be able to access it. This is similar to somebody trying to grab something like the web.config, which shouldn't be accessible. Then we moved on to command injection. Again, interesting to see how things would turn out, but let's throw some commands at it and see if we can exploit those on a vulnerable web app. So I threw id in there. Very basic, but let's just check to see if we can

determine the identity of the user running that service. And then for variety, I tried a few other commands, but I standardized on pwd, right? Let's just check to see what working directory we're in. Should be table stakes. Similarly, I said, okay, let's do the hello world of SQL injection. Let's just do 1=1 and see if we can dump a table. For variety, as well as for some testing methodology, I said, okay, let's expand that. Let's also try dumping the table with a different expression. Let's just throw in 8 is less than 9. Everybody should be able to pick up on these, right? And then finally, on the data pillar:

basic testing, but also looking for basic data inspection. If it's capable, I wanted to look for data loss prevention and have that in place. So I grabbed a text file and a PDF that had PII — personally identifiable information — in there: full names, addresses, phone numbers. It's a success if we can detect and block that. It's a failure if I can download that file, or files that were staged on the Windows server, to the remote Windows machines. Similarly, I had a text file and a PDF with PCI data in there. Build out some PCI rules, standard dictionaries. Let's just look for credit cards, CVV numbers, full names. Should be able to block that.
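A card-number rule like the PCI test above usually pairs a digit pattern with a Luhn checksum. Here is a minimal Python sketch of that idea — purely illustrative of how such a DLP rule works, not any vendor's actual engine; the regex and length limits are assumptions chosen for clarity:

```python
import re

# Candidate runs of 13-19 digits, optionally separated by spaces or dashes
# (assumption: a deliberately simple pattern for illustration).
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum over the digits of a candidate card number."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return 13 <= len(digits) <= 19 and total % 10 == 0

def contains_card_number(text: str) -> bool:
    """True if the text holds a digit run that passes the Luhn check."""
    return any(luhn_valid(m.group()) for m in CARD_RE.finditer(text))
```

In the test described above, the success criterion was simply that a rule of this kind fires on the staged PCI files before the transfer completes.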

And then for fun, I added in some malicious file detection, reversed the data transfer, and said, okay, if I'm on a compromised endpoint, what happens if I leverage some malicious payloads and just try to upload those to the Windows file server for staging? So I grabbed a couple of EICAR files — one was the text file, another was the COM file — and since I already had Metasploit on the Windows machines, let's use msfvenom. Let's create a very simplistic, vanilla reverse shell executable — 64-bit Meterpreter. That should be very easy to pick up signature-based, nothing fancy. I can always expand. I can always build more. I can always do other tradecraft and

tactics, but these should all be easily testable things. So let's see how things resulted — what you're actually here for. How did things turn out? I categorized these into the market leaders being A, B, and C, the niche vendor being vendor D, and then the small/medium business vendor as E. As I hoped and expected — hopefully everybody can see this — the user-based differentiated access was actually pretty straightforward. Everybody was able to do that, just as I had hoped, because I was like, okay, you're not even a ZTNA provider if you can't do the most fundamental thing. This all goes back to identity. But where I was a little bit surprised was when it came to the multifactor step-up

authentication, which should be table stakes — easily part of a zero trust approach. But that's where vendor C, being a market leader, failed. The reason they failed wasn't that they didn't support MFA. It was that the only configuration they supported was doing multifactor authentication upon logging in or authenticating to the ZTNA service — so the front door — but they did not have the ability to configure any kind of multifactor step-up auth for specific applications or services that you're trying to access. For a market leader, I was a bit disappointed in that. Everyone else was able to achieve it. Then we got to the devices side. Again,

some interesting results, but not necessarily for the reasons that you're expecting. So, disk encryption; endpoint protection, is it there; endpoint protection, is it running, is it doing real-time enforcement; and even the endpoint definitions — vendor D, the niche player, failed all of those. It wasn't that they couldn't do it. It was that this is a large platform player, and if you want to have these capabilities, oh, you've got to pay more money. That's separate licensing. That's a separate product. That's a separate dashboard where you have to go and configure those things. It's not natively or inherently part of their zero trust network access solution. I'm not playing those games.
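As a side note on the definitions check: the signature-removal step from the lab setup leans on Defender's own maintenance tool, MpCmdRun.exe. A small sketch of how that invocation could be wrapped — the install path assumes a default Defender location, and the flags should be verified against your Windows build before relying on them:

```python
import subprocess

# Default Defender install path (assumption: standard Windows layout).
MPCMDRUN = r"C:\Program Files\Windows Defender\MpCmdRun.exe"

def remove_definitions_cmd() -> list[str]:
    """Build the MpCmdRun invocation that strips all signature definitions."""
    return [MPCMDRUN, "-RemoveDefinitions", "-All"]

def remove_definitions() -> int:
    """Run it (lab machines only); returns the process exit code."""
    return subprocess.run(remove_definitions_cmd()).returncode
```

A posture check that only confirms the Defender process is present and real-time protection is enabled will still pass after this runs — which is exactly the gap the definitions test probes.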

That's a fail in my book. These are inherent components that a ZTNA solution should be able to provide. Everyone else was able to pass those checks except for the small/medium business vendor, which ultimately could detect Defender being present and could detect real-time protection being enabled and enforced, but fell short on definitions. So I could remove the Defender definitions and play around to my heart's content. It just didn't have the ability to recognize that and change compliance status based upon it. From there, things started to get interesting. So, the network segmentation: being able to detect, am I in a trusted location where I do allow local network access, or am I

in an untrusted location where I need to restrict and block local area network access. Vendor D once again failed — ultimately not because of a separate product or licensing issue. They just don't have a big network security play. They're not a networking vendor and haven't been. So they just did not have that capability. Vendor E failed both of those network location checks because they are just not a big enough player. They don't have those capabilities. They will treat you the same regardless of which location you're in. They just don't have any network recognition. They cannot block or restrict local network access. Then we got to the service cloaking. I came in with no real expectations, but I

was still kind of surprised by the results. So with the service cloaking, again, I'm trying to identify: can I move laterally, or at least start to do some recon laterally, from the remote endpoint to the applications in the data center? Vendor A, who I thought was going to be able to do this, didn't do it at all. I was able to leverage nmap. I could scan and see what ports are open, and I could do version fingerprinting and determine exactly what applications were running over there. That was a big miss, and one that I didn't necessarily expect. Vendor B was able to prevent me from fingerprinting. So, I couldn't see what

versions of applications, or specifically what applications, were running, but I was able to identify what ports are open, and I could use some manual effort from there to do some version fingerprinting. So I gave them a partial win, partial failure — glass half full, glass half empty, take your pick. But they were able to do a little bit of it. Vendor C, I actually didn't expect to be able to do this. They were the best. They were the only one that could say, "Oh, you get nothing, Darren. You don't get to see what ports are open. You don't get to do any version fingerprinting. But if you

open up Explorer and you navigate to that file share, here you go. If you open up a web browser and you want to go browse the web application, here you go. Those are legitimate requests." So that was the only vendor that I found was actually successful at doing proper service cloaking. And vendor D also surprised me, because again, they're not a strong network player, not a strong network security player. They also still stopped me from fingerprinting — I could see what ports are open, but they were just like vendor B. So I gave them a partial. So that was where things started to become interesting, but then I got to the applications pillar. And

this was where we started to see that divide between the players get a lot bigger. When it came to layer 7 capabilities — who can actually identify the SMB application, and who's going to allow me to build policies like that — it was only vendor A. Regardless of what everyone else advertised, marketed, promised, they really were still doing static mappings between ports and applications, and they were not able to build out policies around that. They weren't able to actually look at the SMB protocol itself. So vendor A was the only one that had that capability. And then we got to exploiting MS17-010. Again, I expected a little bit more from the market leaders, but vendor

A was also the only one that was capable of detecting my potential exploit, blocking it, and alerting on it, saying, "Hey, big red flag. Someone's actually trying to exploit a vulnerable app over here, and we stopped it." So I was happy that they did that, but I was disappointed that everyone else said, "You want to exploit it? Feel free, go ahead, have fun." And then things continued to grow in my interest. So, the local file inclusion attacks: vendors A and B were both able to block my LFI attempts. Great. But when we got to command injection, this surprised me, and I wasn't expecting it. Vendor A blocked both of my command injections with basically

just issuing id and pwd. I played with other commands — ls, etc. — and they picked them all up. Great. Vendor B only picked up the id command. I could run pwd. I could run a lot of other arbitrary commands. It just let them fly right through. Didn't pick up on it. Didn't alert on it. Didn't block it. Which tells me — and should tell all of you — they're essentially working with a very small dictionary or signature set. So there are plenty of ways to bypass that, plenty of ways to perform command injection against a vulnerable web app. Just don't do the obvious ones. But then things flipped on me when it

came to SQL injection. Vendor A, that was doing so strong in all of these areas, blocked the 1=1. Great. Everybody should. But then when I said 8 is less than 9 — here you go, here's your table. Totally didn't pick that up. And again, it's like, okay, you're telling me that you basically just programmed this with a tiny little dictionary. You're only going to catch the hello worlds of the space, and it's easily obfuscated, easily bypassed with anything else. And vendor B, because of the command injection issue, I expected them to fail this either partially or fully. Nope. They caught both of them. They caught all the other

SQL injection attempts that I made. It's like, oh, okay. So you're better at SQL injection than you are at command injection. Interesting. Everybody else, including vendor C, failed all of these attempts. Not even close. And then finally, we got to the data pillar. Also some fun nuances here. So, vendor B was the only one that was able to do PII and PCI detection and blocking. That was fantastic. I actually wasn't sure if they'd be able to do it. Vendor A, I configured for DLP. I said, "Please look and detect. Use your PII and PCI dictionaries." I could copy those files down from the server to my remote machine as I wanted. And it really confused me. I'm like, why

am I able to exfil this data when you're so strong everywhere else? When I dug into it, come to find out their DLP inspection is limited in what protocols it supports. If I was doing this over HTTP or HTTPS, okay — but it doesn't support SMB. Total blind spot, as well as some other protocols that it totally doesn't care about. That was a big miss for me. Now, if you look into their whole portfolio, they have a solution that they can sell you that'll do that, but it's a separate product, a separate purchase, and a separate agent that you have to install if you want to cover those other protocols like SMB. So that was a huge miss I didn't see coming. And

then lastly, taking my EICAR files and my Meterpreter reverse shell binary and uploading those from the remote machine up to the Windows file server: everybody had a bit of a miss here. Vendor B said, "Yeah, go ahead. Pass those files all along." Part of it, I think, is that they're lacking some good malware detection, and another part is I found that they only really inspect traffic for malicious content — and arguably DLP — one way. They don't actually look bidirectionally to do these inspections, which was a big gap. So I could pass malicious content up to my heart's content. Vendor A I marked as a partial because — oh, was there a

question? Do you have time for a question? >> I should at the end, if I land the plane on time. >> Okay. All right. If not, you can easily pull me aside. So, vendor A, I gave a partial because, again, they detected the EICAR files. Great. Easy. The binary executable that I had built with msfvenom — again, I didn't do any obfuscation, didn't do any XOR encoding. Nothing fancy here. Super vanilla. But it didn't match any signatures. So this solution would allow me to do the first upload. It would sandbox it, but — obviously leaning towards operability — it wouldn't get a response back from the sandbox in

time, and it would allow me to upload that file the first time. If I did subsequent uploads — try two, three, four, five — it would block them. It has signatures for it by then. But weirdly enough, if I just regenerated the binary with msfvenom — every time I generated a new one, same thing. It would try to sandbox it and it would allow the first upload. In a real-world situation, I think you can tweak the configuration, especially if you're willing to impact end user behavior and operations and say you're going to wait. I think you could turn that into a full block. But with a lot of the basic and default config, and the fact that most

environments will lean towards user operability, that was not a big win. So that was a partial, but I was surprised that nobody else picked this up. So as a whole, here's your full table, and it really told a story and proved my hypothesis, even with just basic testing. I was like, okay, should I get more advanced? Should I do some stronger adversarial emulation? I didn't need to. Just fundamental tests like we talked about here showed that the small/medium business vendor fell right off as soon as we got into the network, applications, and data pillars. The niche vendor could do a lot more than what's shown here, but it's going to require

more of your money, more admin work, and it's going to require you to go into more dashboards and do more config. Vendor C did pretty strong right up until we got to the applications and data side, and then they fell off. Even being a market leader, one of the top three, they still couldn't achieve these capabilities. And then A and B, those guys were duking it out. They were going at it. They were exchanging blows. Both of them had pros and cons, but neither one's perfect. So it really showed that when you're down-selecting which of these providers is going to make the most sense for you, you really should be testing these

things out, because, as you can see, the results will vary and may not be up to your expectations. Now, I did mention I'll touch on risk reduction, and I'll probably breeze through this a little bit quickly, just because we don't have a lot of time left and I want to be respectful of your time. But I had been asked by another customer in a situation like this: what's an easy win? Obviously I can do a lot of identity controls, device controls, but what's the biggest, easiest win I could get to reduce this residual risk from the ZTNA provider? And these folks will come out and tell you, hey, here's our deployment architecture:

just drop this virtual machine in where your applications are. You don't need firewalls. Don't worry about that. We can even replace them. Please, please, please don't. If you test this out, one of the first things you will learn — and what I absolutely, strongly recommend — is don't get rid of your next-gen firewalls. Right? A lot of the deficiencies and risks that these guys still have, and that you will be owning, can be mitigated just by deploying a true next-gen firewall, configuring it appropriately, and placing it as a boundary between their virtual machines and the actual applications that your end users need to access. And then lastly, if there's anything to take away from

this, it's: do not trust. Always, always verify, and test, test, test. Use a lot of the framework that I started here and build out your own testing methodology. Put some of these use cases in your environment when you're doing a proof of concept. Build out applications in your local segment and subnet that are relevant for your business and your environment, and test all of these concepts, broken down into these pillars, to see which player is going to fit best for you. A year from now, use some of this to start to evaluate where they're at, because I guarantee you things are going to change — but the question is how much will they change,

and what are some new or still-remaining risks that they have. Lastly, I did want to give thanks — I'm not the only person doing research in this space. I'm not the only one that likes to break things. That's why most of you are here, I'm sure. But if you were at DEF CON this year, I highly recommend this talk. It's also publicly available. They even recorded it — I think the video is on Vimeo, and the PowerPoint is available as a PDF. These guys from AmberWolf, David Cash and Rich Warren, did some phenomenal research and took a totally different approach than I did. I'm looking at data in transit and a lot of the network side

of things. They said, "No, let's see if we can break the authentication. Let's see if we can reverse engineer the client and the binaries. What does that look like?" We all assume that these security companies are doing good, secure development practices. In reality, they're making some really boneheaded mistakes and doing some things that are just shocking, and that's a lot of the content that AmberWolf provided. So I highly recommend going and taking a look at that, because it may give you additional use cases and test criteria, especially if you're in a more high-security environment, or if you don't want your endpoints and the zero trust network access provider

software that you're installing to become a potential point of compromise. And with that, I do want to thank everyone for their time today. I am always available. You can grab me in the hallways. You can reach out to me on email or LinkedIn. If you like dry reading material and some other crappy pictures, you can always reference my white paper and download that. And with that, I will open things up for questions. [Applause] >> Yep. By all means, go ahead. >> Hi. The tests that you were showing that had to do with layer 7 injection and all that — in your testing, was all the enforcement done on the endpoint, or were you also

using the proxy or server-side components of enforcement and detection? >> Yeah, solid question. So all of the enforcement and inspection was being done in the cloud. The data had to actually transfer from the remote user up to the cloud service, and that's where the inspections occurred. It wasn't at the endpoint. >> So those solutions were not doing even basic layer 7 firewalling in their cloud? >> Correct. Yeah. Very surprising, and shocking even for me. >> Thank you. >> Yeah, absolutely. Any other questions? >> Oh yeah, by all means. Thank you. >> So, I'm not too familiar with zero trust — or wasn't until your talk. Great

explanation. But seeing as there were deficiencies in some of your test categories across all these different vendors, and also since a lot of these capabilities seem to be, like you said, duplicative of the next-gen firewall — or I was thinking about web app firewalls, other security solutions — does this kind of call into question the value proposition of one unified zero trust network access solution? Like, why buy this extra tool that's doing it all when it's not really doing all of it well? >> No, that's a solid one. So a lot of it came down to: they are promising all of these things and these capabilities, but they're just not delivering on them.

So I think there are a couple of reasons why folks would still want to transition over. Part of it is moving from a capital expense model to an operational expense model, and being able to have fewer on-prem requirements and less on-prem equipment. Although, as you saw, you still have these virtual machines that you may be responsible for, but you're not necessarily responsible for their upgrades or updates. So ideally you're picking a ZTNA provider that will be operationally easier and free up employee time that can be used elsewhere. But yeah, this was a bit of a reality check to say, yeah, they're promising you can do all these things. And like I

said, some of these vendors will come in and say, "Oh, you can get rid of those firewalls. We've got all the security. Don't worry about it. Trust us." But this proved that it's not actually there. And you definitely need to identify what those residual risks are and adjust accordingly. And yeah, you're not getting rid of firewalls. At best, what you're doing is getting rid of VPN concentrators. So, great question. Thank you. >> Yeah. Thank you for this — very informative. I was just wondering if you repeated the same testing matrix against the other vendors that you had listed in the Gartner Magic Quadrant. My assumption is you did — the Magic Quadrant, and then

Microsoft. >> So I guess I will dance delicately on that subject. Microsoft may or may not have been one of the vendors that I tested, and I'll be happy to have that conversation with you on the side, because I felt like they may be a worthwhile contender. I get a lot of those conversations, especially as companies go Microsoft all-in: I'm already paying for all the licensing, so why not — especially from the C-levels. So yeah, it would be a safe assumption that I probably included them in that matrix. Solid. Anybody else? Oh, yep. He's going to throw the microphone at you. >> Thank you. Um, from a career pathing perspective, I feel

like a lot of what you described was beyond identity access management and also deep into networking. What advice would you give in terms of skills to pursue a similar path? >> So that is a tough one. Oftentimes — and I take this for granted — as you start out in information security or cyber, you'll probably want to pick a path, pick a vertical where you can get deep, and then expand from there. I started out doing a lot of sysadmin work, doing a lot of Windows and Active Directory. I expanded into network engineering and then more into dedicated cybersecurity, especially as I started

to see how complex and difficult security was. I was like, "Ooh, I like that. That's hard." But generally, going down the zero trust pathway is difficult. There are folks that will solely stay in one of those disciplines. I always assume that everybody has some kind of networking background and that we all understand what subnets are and some fundamental network architecture. But some of my co-workers live in identity. They do identity access management, privileged access management. That's all they know. They can write down an IP address, but they have no idea what subnetting is, or a CIDR mask, or anything like that. And they're absolutely phenomenal at what they do.
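For anyone starting from that spot, the subnet and CIDR-mask idea mentioned here is quick to make concrete with Python's standard ipaddress module (the addresses below are arbitrary examples):

```python
import ipaddress

# /24 means the first 24 bits of the address are the network portion.
net = ipaddress.ip_network("192.168.10.0/24")

print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256 addresses in the block
print(ipaddress.ip_address("192.168.10.57") in net)  # True
```

Reading a prefix length as "how many leading bits identify the network" is most of what day-to-day CIDR work requires; the rest is practice.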

So I think it's a matter of, if you want to start going down that zero trust route, you should be decently deep in at least one of those disciplines — like identity or networks or data protection — and then expand out and start to learn more about the other disciplines. Myself, I don't see myself as an expert in most of those disciplines. I lean on my co-workers and my colleagues, but I at least know enough to be dangerous, and I enjoy learning, I enjoy breaking things, and I'm always humbled by the research that other folks are doing as well. So hopefully that makes sense. >> Yeah. So you're essentially picking one of those pillars that

you just mentioned in your testing, going deep in that, and then — >> Absolutely. And expand from there. Yeah. But don't — and I see some of this — if you try to be super deep on all of those pillars, you will not have any free time in your life. Your family will give up on you and say, "You know what? That guy's too focused." So, thank you. Any other last questions?

All right, thank you all for your time. Greatly appreciated.


Actually, let me pause, because I know that the AV guys want to give me the high sign first. All right. Corey has been in the information security space for over 20 years, and building software applications even longer. He spent years on the east coast as a principal security consultant with the Intrepidus Group before joining the in-house security teams at places like Etsy and Simple. Pour one out for Simple. He spent six years at a unicorn tech company, becoming their director of product security. Currently living on the Oregon coast, he enjoys tinkering with PCB designs in KiCad, singing off-key punk songs with his son — you'll have to give us a demonstration — and trying to convince people that video

games can be art. Take it away, Corey. >> All right. Thank you, Magneto, and everyone here. There we go. Welcome to Securing GraphQL from Design to Production. It's a real pleasure for me — I've been able to come to BSides here the last couple of years, and it's really nice to get the opportunity to present for all of you. So at the end of this talk, I'm hoping you'll understand how these small little requests, that we got once every five minutes coming into our work environment, caused a denial of service attack for our back-end systems. Hopefully those of you that are pen testers and coming across GraphQL will have a few more

ideas of things to try and do for testing. And for those of us helping to build secure applications with GraphQL, some more things to keep in mind at the development stages. So, a quick little bit about me. This is how my three-year-old thinks Daddy works, which is not completely wrong, but I do have slightly better typing skills. Where are we — coming back? Let's see. Okay, good enough. So for this particular talk: I was a former director of product security at one of those tech unicorn companies that made a lot of use of GraphQL. So the stories that I have here for you today are from my experience with them. I've recreated

a lot of the services, the requests, and the responses. So these aren't the actual services anymore, but it captures the idea of things that we actually had reported to us through our bug bounty programs, through our pentests and our other reviews. So, I'm going to assume you're all a little bit familiar with REST APIs. And if you got to Corey Ball's talk earlier in track 2, that was a great fundamental foundation for what me and the team understood about security for REST APIs and endpoints. But as I started with this company in 2019, they were already starting to use GraphQL. And my whole security team, we kind of had

to come up to speed and learn the differences between securing REST endpoints versus GraphQL. Over my time there, there were about a dozen or so different GraphQL services in place, but they pretty much fell into three categories. We had internal services; GraphQL gateways (we ended up developing a GraphQL gateway for all of our mobile apps to use); and then finally, at the end of my time there, a partner API that we developed to help our external partners integrate with our systems. It was really our development team that was driving this adoption of GraphQL: it created a useful abstraction over their microservices, being able to

continue to develop and change things without breaking things for the clients that relied on them. So they were really the ones that helped drive the adoption of GraphQL. We weren't using MCP at the time or any LLMs, so we didn't have GraphQL integrated with that, but it's kind of the new chocolate-and-peanut-butter, works-really-well-together technology. So if you haven't hit GraphQL yet, I've got a feeling you might be experiencing it more and more with the current hotness. And if you know or have heard anything about GraphQL versus REST API security, you've probably heard about introspection. Introspection is

a set of special queries that can be sent to your GraphQL server; the response contains the whole schema for the GraphQL service: any queries and mutations it supports, all of the objects, all of the field names. Anything your developer needs to know to make a request to that service and get the data back that they want. Obviously, as an attacker, that's great information to have as well. It's like somebody publishing their API documents online for you. And if you've ever come across this sort of interface before, this UI is GraphQL Playground. It's really popular. Off to the side was

usually a schema-and-data section that you could expand and start poking around in, and all of that comes from parsing the response to one of those introspection queries. Now, if you ever get your GraphQL service pentested and you have introspection enabled, expect that you're going to get that as a finding. So we disabled introspection in production, but as we were doing our own internal reviews, we'd either test against a staging system with it enabled or get the schema from our development team and use that as part of our security reviews. And what we found after a little while is that a really helpful way for us to understand that

schema a bit more was to actually visualize it. This is GraphQL Voyager. It's not meant to be a security tool necessarily, but our team found it extremely helpful for understanding the relationships within the graph; you can quickly see things visually. Similar to GraphQL Playground, it gets all this data by sending an introspection query and parsing the response, but in this case it parses that response into an interactive web page that you can use to drill down on the different objects and relationships within your graph. If you're using Burp Suite as your testing tool, there's the InQL plugin. Thank you to Xavier for pointing that out to me.
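The introspection queries these tools send can be sketched roughly like this. This is a trimmed illustration; real tooling such as GraphQL Playground sends a much fuller query that also walks directives, interfaces, enums, and input types:

```graphql
# Trimmed introspection probe: enough to enumerate every type and its fields.
query IntrospectionProbe {
  __schema {
    queryType { name }
    mutationType { name }
    types {
      name
      kind
      fields {
        name
        type { name kind ofType { name kind } }
      }
    }
  }
}
```

Disabling introspection means the server refuses to resolve the `__schema` meta-field, which is exactly why tools like Playground and Voyager go blank against a hardened production endpoint.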

It's been a great tool and resource to use. It makes it really handy if you're actually testing a GraphQL interface and formulating your queries, and it now has GraphQL Voyager built right in. So if you want a real simple way to just pop that open and take a look visually, it's a really handy tool. Like most things in Burp, if it's a really large schema that's taking up a lot of memory, you might have to walk away for a while, let it load, and come back after a day, but it's still really handy. So, let me show you how these relationships ended up impacting us

at the place I previously worked at. I realize this might be a little hard to see, so I'll talk through it. On the left-hand side, we have the GraphQL query that our mobile app was sending to the service, and over on the right-hand side is the response data that was coming back. (I apologize; more of these down the line are hopefully clearer to read in the back.) Basically, our mobile application was searching our locations over a certain time period and asking for any bookings already taking place at that location. And so

that was the data you're getting back on the right-hand side: UUIDs, timestamps, publicly available information. Our security team was perfectly fine with this information being returned to the clients. But if we took our graph and actually visualized a simplified version of the schema around how we were handling those bookings, this is what it would have looked like. We have our booking object there in the middle, with our booking UUID and our start dates, and a relationship over to the location object that had all the building location information. But what's probably jumping out at you already is that there's a relationship to a user object as well, which made logical

sense; it was the users of the system that were making these bookings, so they'd have to own those bookings in some sense. Our client side wasn't requesting this. But since we can see that it's there in the graph, it's something we could form valid GraphQL syntax for and see if that data comes back along with it. And GraphQL makes that really easy to do. You just say, "Hey, can I have that user object? I think I saw there was some PII around there. First name, last name, email address. Great. Return that to me." And oh, also, wasn't there a payment object related to this? There was, and all that data did come

back. So this was an eye-opener for us in understanding these graph relationships. While I talked about this in the context of introspection, having that schema, and being able to visualize those relationships and see how to ask for the data, really the issue here is that we didn't have the proper auth checks around when you're allowed to access that user object, and that's what our fix was. In fact, we had introspection disabled on this service. I mentioned it served our mobile app; the person who reported this to our bug bounty

actually did it by decompiling our mobile app, looking at all the different GraphQL requests we were sending, building their own schema, and understanding from that that this relationship existed. So this was a wake-up event for our security team: use Voyager more, understand those relationships in the graph, and start looking for them. And in fact, this almost happened to us again when we developed our partner API service. Our partner graph was quickly becoming this everything GraphQL API. It had low-risk operations, like what hours a building was open, but also really high-risk queries and mutations, like getting invoices and doing billing

and payments. And so this caused a lot of those relationships to exist between high-risk and low-risk objects. To some degree we tried to separate that out. At one point we tried having a public user object and then a "my user" object for the more internal things somebody would set for their account, but that ended up causing a lot of confusion and being a lot more complexity than our developers wanted. So, like I was saying, this partner API turned into an everything app, and just because you can add everything together into a single graph, it's not always the best idea. We definitely experienced our graphs

growing larger and larger. But we did look to Shopify as a good example: they split their admin GraphQL API from their user API, the Storefront API as they call it. We referred back to this again and again as we were figuring out how to separate our services and not lump everything into one single graph. One last introspection-related item I want to talk about: as I mentioned, we would disable introspection on our graphs, but we had a number of times where the graph was still trying to be a lot more helpful when it came to any errors or

typos. So if you didn't know, as in this example, that it was a user object and you just guessed: maybe it's called username, do you have one of those? The error message over here was trying to be helpful: "I don't have username, but we do have user." Disabling introspection and disabling these hints or suggestions are often two separate flags in most GraphQL implementations, so you'll want to check for that. Now, if you're like me and were used to leaning on your web server request logs to understand how your users were interacting with your different REST API endpoints, those logs are going to be

probably a lot less useful for you when you come to GraphQL. A big reason for this is that all your GraphQL requests, introspection, queries, and mutations alike, are POST requests to the same endpoint. So if you want to understand what your users are actually asking for and doing with your graph, you're going to need a lot more logging on your application side to get that data. And if you're starting to do that, one of the things we learned is that operation names are optional, and users can alter those and choose whatever they'd like in most cases. So if you are going to tag or log those operation names,

make sure you do that with something you have tagged on your server side. Also, we had instrumented our logging system to alert on a lot of 500 error messages, but in a lot of cases for GraphQL, if there are errors, it's actually going to return a 200 OK and package those errors in its response to the client. I'll show an example of how this got us a little later on, but be aware that you'll probably see a lot fewer of those 500 errors in your logs once you move to GraphQL. The other logging feature that

we found we needed to implement was a consistent request ID associated with each request that came out of our GraphQL service. Typically one query would come in to our GraphQL service and spawn maybe a dozen or so different requests to our backend services. Having a request ID that would tie all those backend requests to the same client-side request was something we ended up having built into our GraphQL system. Back to those error messages I mentioned: GraphQL will typically return a 200 OK and package the errors in the

JSON response, as long as you have valid GraphQL syntax. Originally our GraphQL developers really liked this approach, because their team was responsible for the graph and not most of the backend services. The graph was running fine: here's the error message from the backend service, go talk to that other team, that's where your issue lies. And on the security side, we got our GraphQL system pentested, we were looking for verbose error messages, we checked the configuration for that, and it wasn't enabled. But there were more and more backend services being connected to the graph. Most of those had verbose error messages disabled, but one day something changed,

one of those connections changed. And it was a pretty subtle error message; we didn't get a full stack trace, just a URL that was part of that response. But tacked onto that URL was also the API key we were using to communicate with that third-party service. Yeah. So basically, anybody who saw that error and understood what it was could completely bypass our front end, skip our WAF and all of our other security controls, and connect directly to that third-party service. Obviously we had issues around how we were authenticating with that service that we could improve,

but this event was our wake-up call that we wanted a better way of handling error messages with GraphQL. Fortunately, most GraphQL implementations do offer a very simple catch-and-format hook for error messages. This was a great way for us to say: we know these error messages are helpful to our clients, they're coming from systems we've worked with before and have good validation around, but we're going to return a default generic error message in all other cases. That ended up being really useful and helpful for us, but it was also something we needed to help our

developers understand and build in early in the GraphQL development process, and not have it be a last-minute thing that came back to them from the pentest. One last big topic I wanted to touch on was rate limiting for GraphQL. As you can already imagine from the differences between GraphQL and your REST endpoints, if you have rate-limiting rules set up in your WAF or gateway firewall around REST requests, those aren't really going to translate well into protecting you from different GraphQL attacks. In fact, it's normally a feature of GraphQL to take what used to be multiple REST requests and package

them into one GraphQL request, so your client is actually interacting with you less. But this can lead to two types of attacks we frequently see with GraphQL; they're kind of query-stuffing attacks: the depth attack and the breadth attack. A depth attack is a recursive GraphQL query that's really compact and usually simple for clients to figure out and send to your service, but can cost a lot of resources and time for your server to actually process and handle. In this very simple generic example, you can imagine you have a store webfront that has products in it; those

products belong to a store that has products in it that belong to a store that has products in it, and so on and so forth. And this is all perfectly valid GraphQL that you can ask for. Most GraphQL implementations do have a simple configuration setting to say: this is the maximum level of depth that I want. It may take a little trial and error to figure out the right setting for your environment, but that's usually a pretty easy problem to solve. Where it gets more complex is breadth, where somebody can ask for a very large query with lots of different objects. Some of those may be very simple for

your system to gather and return, and others might require a lot of resources in your backend to piece the data together. And that's basically what led to this issue with our services. Here are the Splunk logs. Again, this is a single request, spaced out every five minutes. And when this hit in the evening, our internal service would be DoSed for 15 or 20 minutes trying to handle it. At first, we didn't even think what was causing the problem was coming through our graph service; there wasn't a whole lot going on in these logs, not a whole lot of traffic or other activity. We had

some of those logging issues I mentioned earlier. But when we dug in, we realized that some of these request and response sizes were really huge, and that was our tip-off that something weird was happening with these queries and the service. As we dug further, we realized this was actually one of our external partner teams sending a request that was basically scraping our locations for all that booking information, like we showed earlier. They actually thought they were being very efficient with how they formatted their query, which is true to a certain degree; they were. But they had no visibility into what

that was causing in our backend systems. So in our case, the DoS was coming from inside the house. We could just reach out to that team and say: hey, we've got to restructure this query and how you're accessing the data. We added some more data loading to make these systems more performant, and we also shrank our overall POST request buffer, because it didn't need to be as large as what they were sending. So we had some pretty quick fixes for our issue, but it really started us down the path to a long-term fix, which is being able to calculate the

cost of these GraphQL queries and requests coming into our system. Basically, you need to figure out cost at two different points: first, how much does this query or mutation actually cost our systems; and second, a way to look at a GraphQL statement before we run it, figure out how much it's going to cost, and decide what a reasonable limit is. As with most rate-limiting tools, a generic one-size-fits-all out-of-the-box solution is probably not going to give you the security you need. There are a few tools that let you go as deep with this as you want, and typically it means adding

annotations to your schema saying how much each of these operations is going to cost you. You'll need a way to figure that out and decide what makes sense. We really looked to GitHub and Shopify again; they have some great blog posts about how they've set up their rate-limiting rules. And if you're using Apollo, which is really popular, they point to an IBM study about going through GraphQL and figuring out what costs should be in most environments. So, I know I'm running a little long here, but I did want to talk a little bit about what we learned from

doing pentests against all these GraphQL services: what worked well for us and what didn't. What didn't work well is just saying, here's the website, introspection's enabled, go for it. We definitely found our pentests went better when we included as much documentation for our pentesters as possible. While we couldn't share the actual code for our projects, we would have them test in a staging environment with introspection enabled, and we'd also share any of the saved queries and mutations we were using in our CI/CD pipeline for developers to test things, as well as any queries that we would save on the security team.

And a lot of times we had a Postman collection that we would let the pentesters use. Seeing what some of that default data looked like, and what a valid response should be, was a huge timesaver and really made the pentests a lot more efficient. Also, during the kickoff calls, we would demo GraphQL Playground and Voyager for them. We would talk about which queries and mutations were higher risk or more costly for us, and if we did have any rate limits in place, explain what those rate limits were and whether they could bypass them. So with that: I know I couldn't cover everything, all the security

features of GraphQL. Obviously you want to keep the whole OWASP Top 10 in mind, and there's a great OWASP cheat sheet specifically for GraphQL nowadays, which is really handy. And with that, thank you all for coming. I'm local, from around here; I'll be in the hallways, and I'd love to chat with you about this or anything else afterwards. Thanks very much. The website has the slides on it, and if you're in the CTFs, there might be a little something extra on there for you, as long as it stays online. [Applause] >> Two minutes. All right. Yes. Yeah. >> So, I understand this is a little bit of a business decision, so I don't expect a

discrete answer. >> But sure. All right. >> So the question is (and again, I understand this is likely a business decision, so very case-specific), from somebody who doesn't do a lot with GraphQL: where do you draw the line between implementing controls at the GraphQL endpoint, like rate limiting and cost, versus data structure design and SQL queries and whatever is doing the work on the backend? Because it seems like they're kind of in contention, right? You always want to be more efficient so your GraphQL endpoint can do more. So how did you draw that line in the end? >> Yeah, definitely that was difficult, and something our

development teams went back and forth on. In some cases it was easy, like when we were designing the mobile client that talked to the GraphQL gateway. But for the ones that were supposed to be publicly accessible, we didn't really have a good sense of what queries might be coming to our system, so it was really a back-and-forth. Typically, as with most security sides, we were fighting for tighter control around the rate limits and the cost, and our developers would want it more toward the looser end. So in a lot of cases that was really where

tracking performance metrics for the GraphQL service and the backends was something we needed to look into. And to be honest, only some of that decision belonged to the security team; a lot more of it had to be made by the engineering team. >> Fair. Okay. So you use OTel data or something to tell you what's happening on the backend for all that. I suppose money talks, right? Whatever's cheaper to implement. >> Whatever's cheaper to implement. I mean, we did pay more to have Splunk at certain points, and that's what we were using for a lot of our metrics before we moved

to APM. But yeah, I wouldn't say you could have all your controls just in the front end. There were definitely a lot of times where we needed to rely on the backend API to have the proper rate-limit controls around how many requests, or how much time, we'd allow something to take. >> Okay, fair enough. Thank you. >> Yeah.
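The depth limits and query-cost budgeting discussed in the talk and in this exchange can be sketched as follows. This is a minimal illustration, not any real library's API: the dict-based query representation, the field names, the cost weights, and the fan-out numbers are all invented for the example. Real servers work on the parsed GraphQL AST, and schemes like Apollo's cost directives or the IBM cost study the talk mentions are considerably more nuanced.

```python
# Hypothetical static analysis of a query before execution.
# A query is modeled as {field_name: sub_selection_dict}; a real server
# would walk the parsed GraphQL AST instead.

# (base cost, fan-out) per schema field -- invented annotation values.
# fan-out > 1 marks list-returning fields, which multiply child costs;
# that is what makes "breadth" queries expensive even at shallow depth.
FIELD_COSTS = {
    "location": (2, 1),
    "bookings": (10, 50),  # assume a page of up to 50 bookings
    "user": (5, 1),
    "payment": (25, 1),
}

def query_depth(selection):
    """Nesting depth of a selection; guards against recursive depth attacks."""
    if not selection:
        return 0
    return 1 + max(query_depth(sub) for sub in selection.values())

def estimate_cost(selection, multiplier=1):
    """Sum annotated field costs, multiplying through list fan-outs."""
    total = 0
    for field, sub in selection.items():
        cost, fanout = FIELD_COSTS.get(field, (1, 1))
        total += multiplier * cost
        if sub:
            total += estimate_cost(sub, multiplier * fanout)
    return total

def admit(selection, max_depth=10, budget=1000):
    """Reject a query before running it if it is too deep or too costly."""
    return query_depth(selection) <= max_depth and estimate_cost(selection) <= budget
```

With these invented weights, a query that pulls user and payment data for every booking at a location costs 2 + 10 + 50*5 + 50*25 = 1512 and would be rejected under a budget of 1000, while the bookings-only query the mobile client actually sent costs 12 and passes.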

Hi, thanks for the talk. I missed the first part of your talk. So you were on a security team at a business, or... >> I was the director of product security at a company using GraphQL. >> Have you had much success, or made any effort, in controlling developer API allow lists, like allowed endpoints? It sounded like the rate limiting was kind of along the same lines. We've looked at it with Swagger docs through the WAF: this isn't in the Swagger docs, so the request isn't going to work. But we always backed off on it

because it seemed really sticky. So I was wondering if you had any experience with that. >> Sorry, which part exactly? >> Oh, with API endpoint allow listing, or with GraphQL rate limiting: passing that down to the dev teams. Have you had any experience or success with that, and is it as sticky as it sounds? >> Yeah, to a certain degree, like I was mentioning with the rate limiting, it's going to work differently with GraphQL. You really are going to need to calculate these costs to go

along with it, beyond just setting the number of requests allowed into my service, or the amount of requests allowed from our GraphQL gateway to the endpoints. We basically had to make an allow list for the GraphQL service to bypass the regular REST API rate limits we had in place, because it was our gateway and was funneling traffic for multiple users in a lot of cases. So we did that. And then you were asking a little about an allow list: it was really part of our GraphQL implementation itself that determined which API endpoints were getting queried from GraphQL. So it

did sort of work as a gateway for us that limited access to only the API endpoints we built into the graph, if that makes sense. >> Yeah, that's really impressive. >> All right, well, I think that's time. Thank you, everybody. [Applause]


Come on in. Grab a seat. There's plenty of room.

It's my great pleasure to introduce Garrett, who is an offensive security researcher with over six years of experience in information technology. He has conducted successful engagements against organizations in the finance, healthcare, and energy sectors. Garrett enjoys researching Active Directory and developing offensive security tools. His background also includes roles as a security operations center analyst and systems administrator. Please welcome Garrett.

Definitely not that tall. Awesome, thanks for the introduction. I know I'm weird because I like AD security, but just bear with me. So, right, my name is Garrett Foster. I'm a senior security researcher on the team at SpecterOps; I've been there a little over two years. My day-to-day responsibilities there are to do a bit of tradecraft discovery and red team enablement. So if a team is at a stuck point, my goal is to help them get through that stuck point, and then whatever the results of that research are, we try to take those results and implement them in our product, the

BloodHound graph. I don't have a ton of time, so I'm just going to press on with the agenda. What I hope to share with you today is a bit of a recap on what SCCM is, a very quick explanation of where it came from, and then some of the security research that has been done by us and by the community as a whole. From there we'll talk a bit about co-management: if you have an Active Directory environment that is hybrid and is using Entra as well for some type of identity management, co-management might be

relevant. And then we'll talk about the administration service API, which is an on-prem API that grants administrative access to SCCM. It's just another way for admins to administer that service, but it's going to be relevant to the co-management piece. And then we'll finish things off by taking advantage of all of that co-management to go from a low-privilege user context to completely taking over all of SCCM, and therefore all of the managed clients in that service. So, SCCM stands for System Center Configuration... I lost it. Where'd it go? >> Oh. >> It's back. Okay. I'm just not going to touch anything; I get too close.

Okay. So, System Center Configuration Manager, SCCM. It's gone through a bunch of different name changes; it's about 30 years old, so it's been recycled a bit. I'm a little more partial to this beautiful piece of artwork by my co-worker Craig, because it's now known as Microsoft Configuration Manager. Just for the quick recap: this graph was made by Carson Sanker, who made a blog post a couple of years ago detailing how we got to where we are now. SCCM was first introduced in about 1994, and it didn't really get much offensive attention until DEF CON 20, when Dave Kennedy was

presenting on how he was able to leverage that service to basically pop shells on every client. At its core, SCCM is just endpoint management software. It has an agent running on whatever hosts are being managed, and that agent has system-level access because it's responsible for driver updates, Windows updates, application deployment, and policy deployment. So Dave took advantage of that and realized that if you could just take over SCCM, you'd therefore control every client. And the research kind of continued from there. Matt Nelson, one of our co-workers, picked it up and ran with it a bit, had a few blog posts and tools, and then

it died down for a number of years, until Chris Thompson and Duane Michael picked up the baton and kept it going, and in the last few years it's kind of exploded from an offensive point of view. It's exploded so much that myself, Duane Michael, and Chris Thompson developed this living document called Misconfiguration Manager. It's a GitHub repository that tries to take all the known offensive tradecraft and give it a taxonomy. So if it's a takeover, or you're able to grab credentials from it, we label those, and we outline how the attacker can take advantage of it while also supplying defensive

measures that the defensive team can take. So it's good for both sides, both red and blue. Since then there have been a number of community contributions, and probably the most notable has been from the Synacktiv team over in France. They found an unauthenticated SQL injection that led to complete compromise: from an unauthenticated perspective to control of all of SCCM, and therefore every client. And that's not all of it. There have been a number of other community contributions, whether blog posts or tweets or tools, or just conversations about how people have used this service to accomplish some goal or objective during their assessments. But the point I'm trying to make is that

we have focused so much of that effort only on what's on-prem, what's in the enterprise network. It was time to explore how this works when you're integrating your on-prem deployment, when you have a hybrid deployment, with the cloud, particularly with Entra, or Azure, or whatever they're going to name it next. So there are three ways that SCCM can have this hybrid or co-managed deployment. The first is tenant attach. This is less of an expansion and more like DLC for SCCM: there are a couple of extra pieces you can do from Entra. But the two that are most relevant are the

cloud management gateway and then co-management. These are two different deployments, but both solve the same problem of managing a remote workforce, which over the last few years has exploded. With the cloud management gateway, or CMG, the way the process works is you have a virtual machine hosted up in Azure that your remote clients connect to, and it basically proxies that connection into your physical network. This is used when clients can't have VPN access, they're not in the office, they're across the globe; they can connect to this endpoint to have policies applied, and so on and so forth. So on the other side of that is

co-management, which is a little bit different. This actually integrates with Microsoft Intune, and Intune is Microsoft's cloud endpoint management solution. The way this works is the two integrate and you pass off workflows between them, depending on what your requirements are. But the lines do cross a bit, and it's due to the configuration requirements for setting up both of those solutions. You'll do so from within the Configuration Manager console, via what's called Configure Azure Services, and the piece you'll be setting up is called cloud management. The wizard is relatively easy and streamlined; there's not a whole lot of

nuance to it, so we don't have to spend much time on it. But the key thing you're doing is setting up two Entra applications: a client app and a web application. These work together to create an OAuth flow for users to sign in to this specific scope. If you scroll down a bit, there's kind of a key thing you might overlook: there is an option to disable Microsoft Entra ID authentication. If you were just skimming through and weren't paying attention, you might have missed it, but it implies that Microsoft Entra authentication is enabled by default for the tenant, regardless of how you're setting things up.
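To make that concrete, the relationship between the two applications can be sketched as an ordinary Entra authorization-code request. This is illustrative Python, not anything the wizard emits: the tenant and app IDs below are placeholders, and the scope string follows the standard Entra convention of `api://<server-app-id>/<permission>` for a delegated permission.

```python
from urllib.parse import urlencode

# Placeholder IDs -- the real values are generated when the wizard
# registers the client app and web (server) app in Entra.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_APP_ID = "11111111-1111-1111-1111-111111111111"
SERVER_APP_ID = "22222222-2222-2222-2222-222222222222"

def build_authorize_url(tenant_id, client_id, server_app_id):
    """Build the authorization-code request the client app would send
    to obtain a delegated token scoped to the server application."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        # Delegated permission exposed by the server app; because the
        # wizard pre-consents it, no approval prompt is shown.
        "scope": f"api://{server_app_id}/user_impersonation",
        "redirect_uri": "https://login.microsoftonline.com/common/oauth2/nativeclient",
    }
    return (f"https://login.microsoftonline.com/{tenant_id}"
            f"/oauth2/v2.0/authorize?" + urlencode(params))
```

Because the user_impersonation permission is pre-consented, a token minted through this flow is available for any Entra identity without a consent prompt.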

Once that setup completes, we don't have a lot of visibility into what those applications even are, or what permissions we're granting them, because we're setting them up with Global Administrator or Cloud Application Administrator level permissions. So let's take a look at what's actually happening. We have the client app, and it's being delegated user impersonation to the web app and whatever that's scoped towards. This is pre-consented, so the user being impersonated does not have to approve it, and this works for any identity that's in Entra. From the app side, what we're actually accessing, what's being

granted to us is this API URI, but we didn't actually set up any type of backend for it. This is where the admin service comes into the picture. The administration service is an on-prem OData REST API hosted on the SMS Provider role. The SMS Provider, if I can get this right, provides API interoperability access over HTTPS. The SMS Provider actually contacts the database through WMI, so to grant you API access, what they did is just put an API in front of WMI. You get another layer of abstraction, but it gives you administrative access from this perspective, and that's how it's implemented. If we take a

look at the service binaries, we can see this URI exists; that admin service URI is present. This is relatively well known if you've ever messed with SCCM: there's tradecraft around it, and there's post-exploitation that's taken advantage of it. But when you look at the source code, and because it's all .NET we can reverse it, there's a completely different URI path that I really wasn't aware of, and it seems relevant to Entra because of the token auth string included in it. If we keep digging into that source code to see what's actually being implemented by the service, it handles any type of CRUD operation submitted to that API.
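As a rough sketch of what a client of that API looks like, the request below targets the documented `/AdminService/wmi/` OData route on the SMS Provider; the hostname, WMI class, and token are placeholders.

```python
def build_adminservice_request(sms_provider_host, wmi_class, bearer_token):
    """Sketch of a REST call to the SCCM administration service, which
    fronts the WMI provider with an OData API over HTTPS.
    'sms_provider_host' stands in for your SMS Provider FQDN."""
    url = f"https://{sms_provider_host}/AdminService/wmi/{wmi_class}"
    headers = {
        # The access token rides in a standard Bearer header; the
        # service's token-parsing logic picks it up from here.
        "Authorization": f"Bearer {bearer_token}",
        "Accept": "application/json",
    }
    return url, headers
```

Note that the only identity material in the request is the bearer token itself.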

If you're trying to make some type of change, or you're trying to retrieve data, it's going to process that request and pass it off to the WMI endpoint. There are a few checks along the way. It instantiates a new WindowsIdentity, a C# class that just represents a Windows user, so we'll have that ready to go and usable. From there, it checks: is Azure AD enabled, or Entra, now that the name has changed? And we know this has been enabled by default, so we can press on through that logic

and hit this method called ParseBearerToken. This method takes whatever was in the authentication headers for the API request, passes that token off, and that's where the fun really begins. It seems like parsing this bearer token might be a simple process, but you'd be mistaken; it's about fifteen layers deep of poking and prodding to make sure everything's in line. The end result is that it's doing two things. First, it validates the token signature, making sure that the token we're providing comes from where it says it comes from and hasn't been manipulated in any

way, that we didn't try to change any strings or values in it, so the signature is still good. Second, it validates the token's issuer and audience. Those two strings are the big piece of the puzzle, and arguably the only interesting part of the whole validation flow. That logic lives in the GetAuthenticationInfo method, where we build a list of issuer, audience, metadata, and signing information. So how do we get that data? Where is it stored? In SCCM, essentially everything lives in the site database.

You can consider the site database the brain of the entire operation, and the site server itself the body for everything else. The way the service pulls that data from the site database is by running a stored procedure, which you can think of as a script that runs SQL queries, and that stored procedure is called Get Token Validation Info. Since it's a test environment, we can take that syntax to the site database and run the stored procedure ourselves to see the data being returned during standard operation. We get an issuer, an audience, the metadata, and then the

signing certs. The pieces relevant to validating the token are the string value of the issuer and the string value of the audience. You take your bearer token, deserialize it, and then it's just doing a one-to-one string match for both. Are they equal? Is it a 100% match? Those are the validation steps. That's it: make sure the token is signed, and make sure these two strings match. So, cool, we've made it back up to the ParseBearerToken method, which says: hey, here's your validated token, go ahead and press on with processing the request, transform it to WMI, and keep going. And that's when we get back to our WindowsIdentity.
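A minimal Python sketch of that comparison, with hypothetical helper names, and signature verification deliberately omitted since the point here is just the literal string match:

```python
import base64, json

def _decode_jwt_payload(token):
    """Decode the claims segment of a JWT (no signature check here --
    this sketch only illustrates the issuer/audience comparison)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def validate_issuer_audience(token, stored_issuer, stored_audience):
    """Mirror of the service's check: a one-to-one string match of the
    token's 'iss' and 'aud' claims against the values returned by the
    Get Token Validation Info stored procedure."""
    claims = _decode_jwt_payload(token)
    return claims.get("iss") == stored_issuer and claims.get("aud") == stored_audience
```

Any signed token carrying the expected tenant issuer and audience passes, no matter which user it represents.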

So we instantiate the WindowsIdentity, store it in an identity variable, and pass it the user principal name value from that validated Entra token. The UPN is a claim included in the token, so we can pull it out and extract the domain user's UPN. We've now created this class as that user. If you look at how the WindowsIdentity class operates when you overload it with just a user principal name value, it does one thing: it constructs itself by calling into the KERB_S4U_LOGON structure.
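In Python terms, that hand-off looks like the sketch below, a stand-in for the C# path: it pulls the `upn` claim out of the already-validated token, which is the value the real code passes to `new WindowsIdentity(upn)`. The helper name is illustrative.

```python
import base64, json

def upn_from_token(bearer_token):
    """Extract the 'upn' claim that the service feeds to
    `new WindowsIdentity(upn)`; that constructor then performs an S4U
    logon, yielding an impersonation token for whatever user is named."""
    payload_b64 = bearer_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    # Any UPN in a token that passed the issuer/audience match becomes
    # the identity the request is processed as.
    return claims["upn"]
```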

If you're unfamiliar, S4U is a Kerberos extension implemented by Microsoft called Service for User, or Service for User to Self, and it creates an impersonation token on that endpoint as the requested user. So the domain user is now impersonated, and the service says: this is the identity I'm going to process the request for. I stepped back from it and was kind of curious: okay, so it's just doing this S4U logon, which kind of makes sense, but what's the actual inherent risk here? This is my experience of how this went down, and the conclusion is that the admin

service impersonates any UPN from a validated token. There is no authorizati