
>> Okay, everyone, just settle down and we'll kick things off. Thanks, Annie. Wow, that went quiet super quick. Does anyone mind if we grab a selfie? Is there anyone that doesn't want to be in a picture? All right, cover your face or something.
>> Perfect, thank you very much. We've filled the room again. Welcome to 2025. We've had a year's break and we are back. Where do we start? It's an awesome turnout, as always, and we've got two full days of awesomeness in front of you. We'll run through a few housekeeping bits before we kick off, to try to keep us on time. Red shirts, as always, are your friends. If it's a red shirt or a red badge and you need anything at all, come and find one of us and give us a shout. We've got pretty much everything here: we can point you to the bathrooms, we've got mental health first aiders, we've got all kinds of bits around, so give us a shout if you need those things. The emergency evacuation procedure is posted on most of the doors. It's really just an exit out through the doors, and the assembly point is on the grass out to that side. Bathrooms and breakout: if you haven't found them already, just out the door, turn left down the stairs. There are some there, some further ones down that side to the right, and some behind us next to the cafe area. Margaret River Roasting Co. is providing the coffee, so say thank you, say hello, grab a coffee. Lunch: we've got the usual pizzas coming.
Please be patient with us. Two years ago this was a pain point because they came in two batches. Please don't be sad if you've got to wait for a pizza slice; they will appear. Don't grab an enormous box and go and sit under a tree with it as soon as the first ones arrive. It's not cool. Have a slice or two and wait for the next batch. They are awesome pizzas, and the usual mix of vegetarian, vegan, gluten-free and everything will be in there. The schedule is online; it's pretty dynamic on a mobile screen. Just go to bsidesperth.com.au, hit the schedule button, hit the link, and it will pop up on screen. We've got it here as well, but don't come and stand at the front for it. Sponsors: this has been a super tough year to get organized. Tech has been hit a little bit this year; businesses have got smaller, people have been laid off, and everyone's chanting about AI taking all our jobs. It made it super difficult to get sponsorship, so for those that did, say thank you. Most of those people are around here today.
Literally within the last few weeks we've been running around, because as people came in and gave us bits of cash we've been able to go and get things like lanyards and shirts and speakers and all kinds of things. So say thank you; we appreciate it. UWA, as always, thank you. They sort us out with the venue, and we always get an internal sponsor as well, which makes the venue a little bit cheaper and makes this happen. For those that have never run an event in Perth, and I've had this conversation a few times this morning: if you want to go from this size to the next size, you are looking at an exhibition centre or Optus Stadium or something crazy. It takes the cost from $10,000 to $15,000 up to $80,000 plus, and you have to have their coffee. It's not very nice coffee, and it puts it up by about another $30 to $40k just for coffees and bits over the weekend. So we're kind of happy here for the moment. The wait list is huge, so we'll revisit whether we get a bigger venue in the future; we'll see how that pans out. What else have we got? InfoSect, thank you so much. A silver sponsor coming on board really kicked things off, and they got in early as well, which really helps us get things moving. For those that have run conferences before: if you want to do things like the t-shirts and the badges, you need cash early. You need to be able to lock in a venue so that it's actually real and people believe it's going to happen. So anybody that comes in is amazing. Horizon3 came in, I think, about a month ago and really helped us get some momentum behind things. Securo as well; I know they've got a booth outside, and they've come in from Brisbane and Victoria.
So say hi to everyone and pass on your thanks. And a heap of supporters came in, some of them in the last week or so, just to help us sort out things like lunches and coffees. It is not easy to get a good coffee or a lunch around here; we're a little bit distant. All of these people really helped to make things happen and they've got our deepest thank yous. The badge: Tim's going to talk to you about the badge. It's flat at the minute. Everyone's got commentary on the badges; previously you put components on things and they stick out a bit and scratch people. And there are component packs for those that want to go away and build the badges. Tim, tell us about the badge.
>> Yeah. So, for those of you who are here in person, you've got your physical badge. There are component kits that you can collect at the registration desk. They come with an ESP32 clone, an LCD screen, and a handful of passive components. It does take a little bit of soldering to put the physical badge together, probably about 10 to 15 minutes, and it's very easy. If you've never soldered before, give it a go, because there's not much that can go wrong. Famous last words, right? If you do have any trouble, come see me; I'll be around. For those who are watching on the live stream, or aren't ambitious enough to put the badge together, there is a virtual version. In fact, it's the same badge, but hosted in a web browser, and you can go to badge.bsidesperth.com.au to access it. It's basically a game, and there are all sorts of things in it: hidden cheats, Easter eggs, and a few bits and pieces. So yeah, check it out. Thanks.
>> Cool, thanks Tim. Tim has spent months putting these together, prototyping them and getting the boards. So if you're happy just with the badge around your neck, that's all good. We've got a couple of hundred of the component packs for people that want to tinker with them, and cables as well; they'll power off your laptop, probably off your phone. So have a play. Like he says, we didn't use micro components, because soldering those is no fun; it's all bigger pieces. So have a bash. We'll set some solder stations up in a little
while, and now the rush has died down we can probably start to give out the component packs at rego as well. Physical lock sport: CCX helped out this year. Sorry, CyberCX, to give them their full name. Physical security challenges, lock challenges, and I think there's a bit of a competition as well. Speak to Evil D or have a chat with the folks on the desks; they are set up in the foyer at the bottom of the stairs. And the CTF: ASD's ACSC have helped out with some CTFd-based challenges for this year. There are a couple of challenges on there with several steps, but if you go to bsidesperth25.ctfd.io, you will find them. The registration code, obviously, is westside bestside. That was their choice, not mine. Have a play with that; if you've got any questions, Dave will be around for a while, or hit me up at some point. And the wildlife: every year we have a bit of fun with the badges and choose an animal. We started off in 2017 with the cow hidden away. The 2018 shark was pretty popular. 2019 brought the black cockatoo, then the Evil Quokka in 2021. The emu was pretty popular in '23, but I've got to say my favourite this year is the Cyber Echidna, who's looking pretty cool. The good thing about these is they make a pretty awesome sticker, and everyone knows we love stickers. So grab your badge, grab your stickers, grab your t-shirts and bits. Rego: I know there were people hitting us up about previous con shirts. I think we've got some of the cockatoo ones and the emu ones; we'll sell those off at the rego desk later on when it quietens down. In the meantime, welcome again. Thank you for all your support through the year and as we set up, and for all the emails and queries and offers of support. We really do appreciate it; it makes it happen. Myself and...
>> Yes, Adam will just do the physical CTF.
So this was very intentionally hidden away so no one could figure it out ahead of time. There is a physical CTF. It starts at 11. It is objective-based: there are twelve challenges, and the solutions are "figure it out." I've not made a very specific solution, and I've left it incredibly vague how you solve them. This is intentional; I want to inspire as much creativity with this stuff as I physically can. There is one 500- or 600-point challenge that is actually really hard technically. Good luck. It will all be revealed at 11; there'll be a QR code at the rego desk, and you can register from there. And yeah, I expect creativity more than anything.
>> Thanks, Adam. Appreciate it. There you go. That's pretty much it; thanks, everyone. We'll get ourselves warmed up. Everything is being live streamed as well, so feel free to drop out to the breakout rooms. If you want to do the CTF or the physical side, plug in your headphones or something; it's live streamed on the YouTube channel, so don't feel that you have to sit in here. We are sometimes a bit space conscious. We will get set up; there'll be small bits of handover while we clip mics on speakers, but we will take it away in two minutes with Drew. So Drew, if you want to come and set up. Thanks again, and catch you all in a bit.
>> All right, if you grab a seat. One thing I missed on housekeeping: if you've got a mobile phone switched on, throw it on silent, or the world will hear your ringtone. And I will hand over to Drew with What's the Frequency.
>> Thanks, man.
>> Thank you, sir.
Thank you, everybody. I'm Drew Hamilton. Very happy to see all of you, and very happy to kick off BSides Perth 2025. Let's set the tone. My talk today is What's the Frequency, Kenneth? Now with X-band. This is basically about using software-defined radios to listen to weather satellites and get interesting and fun data from them. The picture you can see in that photo is NOAA-19 before it launched. Come to find out, bolts are important; sadly, this satellite is no longer active, but we'll get to that. So, moving forward: who am I? As I said, I'm Drew. I used to work in higher education out here in WA, and then in 2019 I moved into penetration testing. I was at Asterisk for maybe six weeks before we got bought by CyberCX, and not too long ago I moved from CyberCX to Bunnings, where I'm now an internal penetration tester. I'm originally from North Carolina, and I've been in Perth for over 25 years now. I'm very interested in anything related to rocketry and space. As you can see, there's a photo of me from over a decade ago, nearly 15 years ago, when I got my level three in high-powered rocketry. And on the right is a rocket that I flew to Mach 2.64 and 28,000 feet, over 8 kilometres up. That was pretty fun. But today we're not here to talk about rockets; we're here to talk about satellites. This is a hobby I picked up during COVID. During lockdowns, rocket launches weren't happening that often, and there's only so much time you can spend in the shed sanding before you get bored. So I started looking at software-defined radio satellite reception. What got me started? Actually, I should probably run through the outline for today's talk first. Basically, we're going to start with L-band, where I'm going to give an update from my previous talk at BSides 2023.
After that, we're going to do a little bit of S-band weirdness, and from there we're going to move straight into X-band. I'll try to walk you through how to stand up your own station if you wanted to listen to X-band transmissions. Hopefully it'll be fun. So, to start off: why did I get involved in this? Realistically, a very reasonable question. Back during the COVID times, these videos appeared on social media; basically, I saw them on Twitter. The thing that shocked me about them is that, as you can see, there's no SpaceX branding, no time counter, no flight-log overlay showing the current part of the launch. And the reason why is that this was received in Europe, with that grid dish and the setup to the right that you see right there. SpaceX was not encrypting the video downlink streams for their second-stage Falcon rockets. We even got really interesting things like inside-the-tank video that they normally didn't show in the live stream, because they didn't want to show it. Once I saw this, and then saw the setup people were using to do it, I thought to myself: hey, I can get a grid dish, I can buy a $50 SDR. So I decided to start playing with satellites. The first step most people take when they start doing satellites is the 2-metre ham band, roughly 145 MHz. There used to be two constellations. There was the NOAA POES constellation, the polar-orbiting constellation of NOAA-15, -18 and -19, which did what was known as automatic picture transmission (APT): black-and-white, plus an infrared band. Earlier this year the NOAA constellation was decommissioned, which was a pretty disappointing thing in the hobby. NOAA-15 had been up for nearly 10,000 days, and a lot of people in the hobby were looking forward to hitting that milestone. They were also still producing really good imagery on the 2-metre band, on L-band, and even on S-band. So it was a bit heartbreaking, but that's the situation we live in. One thing you can count on with satellites is that, over time, things are definitely going to change. That being said, there are still targets in the 2-metre band. This is from a Russian weather satellite called Meteor-M. There is more than one Meteor bird currently up, with more planned to be launched by Roscosmos, and the planned launches are still meant to carry all three of the more amateur-friendly band offerings. Long story short, you can receive these images, which are one-square-kilometre-per-pixel, three-band images that you can build a natural-colour composite out of, with nothing more than a three-element 2-metre Yagi or something like a QFH (quadrifilar helix) antenna, which is one of those weird antennas that looks like an egg beater. So if you want to get started in the hobby, this is a great place to start. I, for one, have a really soft spot for low-rate picture transmission (LRPT), because for the amount of investment you put in, basically a $50 SDR and a Yagi or another compatible antenna, you can start producing imagery like this. You can also build a station to collect all this stuff automatically: if you had a QFH antenna sitting on your roof, or a V-dipole antenna installed, you could automate this and have the program in the top left, SatDump, pull all this data. Like I said, I've got a really soft spot for 2-metre band LRPT transmissions, because the colour is great for what they are, as is the resolution. Unfortunately, only Meteor is doing this now, so it's a bit limited in terms of the number of passes you get per day and the amount of data you can get. Moving on from there, I want to give a quick update on L-band. As I said, the NOAA POES constellation got pulled down; it was also doing L-band, and it was great at the time. But we still have offerings: MetOp's AHRPT (advanced high-resolution picture transmission), and Meteor also does an AHRPT service. These are basically the same Meteor satellites that Russia flies, just on a different band with higher resolution. MetOp is a European Space
Agency satellite constellation; they've flown MetOp A, B and C. A is now decommissioned; B and C are still active. Up in the top left you can see a satellite dish. I picked that up from Scitec, up north of here, about a 20-minute drive; it's a terrestrial satellite dish company, mostly video-satellite installs. That was an $82 dish, brand new. They're incredibly cheap, and if you want dishes, Foxtel dishes are pretty much everywhere. If you're keen on one, come and talk to me and I'll bring you one tomorrow. I have a shed full of satellite dishes now, as you might imagine.
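As a rough back-of-the-envelope sketch of what dish size buys you at these frequencies (my numbers from the standard parabolic-antenna approximations, not figures from the talk), gain and beamwidth for a given aperture can be estimated like this:

```python
import math

C = 299_792_458  # speed of light, m/s

def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.55):
    """Approximate parabolic dish gain: G = eta * (pi * D / lambda)^2, in dBi."""
    wavelength = C / freq_hz
    gain_linear = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain_linear)

def beamwidth_deg(diameter_m, freq_hz):
    """Rule-of-thumb -3 dB beamwidth: roughly 70 * lambda / D degrees."""
    wavelength = C / freq_hz
    return 70 * wavelength / diameter_m

# L-band weather-satellite downlinks sit around 1.7 GHz
for d in (0.6, 0.9, 1.8):
    print(f"{d} m dish @ 1.7 GHz: "
          f"{dish_gain_dbi(d, 1.7e9):.1f} dBi, "
          f"beamwidth ~{beamwidth_deg(d, 1.7e9):.1f} deg")
```

The roughly 14-20 degree beamwidth of a 60-90 cm dish at 1.7 GHz is part of why hand-tracking L-band passes is as forgiving as described later; a 1.8 m dish's beam of about 7 degrees needs noticeably more careful pointing.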
For L-band, basically, you need a dish of reasonable aperture; like I said, 60 to 90 cm is more than adequate. I've got a helix feed on it that I built off a 3D-printed scaffold, a NooElec SAWbird GOES low-noise amplifier on that, and then an Airspy Mini plugged into my laptop. So to do L-band, you basically need a low-noise amplifier, a software-defined radio, a computer, and a helix feed, and you'll be able to play these games. On the right is an image I got from MetOp B, if I'm not mistaken. Yes, MetOp B. I wanted to show you this image because it's a more representative image of a low-Earth-orbit weather satellite transmission in terms of the length and width of the image. As you can imagine, these satellites sit roughly 600 to 800 kilometres above Earth in low Earth orbit, so they can only see a certain amount of the planet at any given time, given their altitude. This gives you an idea of both the width the imager can cover and the length. This one was about 4,200 lines long; if you had a clear line of sight from horizon to horizon during a pass, you could get over 5,000 lines, maybe up to 5,200 to 5,500. So you can get quite long images. This is a natural-colour composite that I put together with SatDump, and as you can see, I also did the city overlays. SatDump has a facility where you tick a box and it puts all the cities on the image. Because we're here in Western Australia, I have to pull the city slider to nine to show everything on the map whatsoever, just to get it to slightly fill up the space, because we don't have much stuff; not a lot of people out here, is what I should probably say. So, yeah. MetOp and Meteor AHRPT are both very good services, really fun to get and a little bit more technically challenging, but actually not that hard. Hand tracking on L-band is pretty forgiving: I hold that dish from behind, with my hands on either side of the outside of the dish, and just twist it by hand during the pass to keep it locked. I'll just step over here real quick and show you. You can see that we've got a constellation here; this is a quadrature phase-shift keying (QPSK) transmission, so you've got the four clearly defined dots, and then the signal-to-noise measurements for the satellite. Basically, if everything's in green, you're good in terms of signal-to-noise ratio. And I happened to screenshot that just in time, where you can see all the bumpy stuff on the hump of the satellite transmission. That's actually filler, if I'm not mistaken: the satellite radios are constantly transmitting, but sometimes there's no data to transmit, so at those times they have to send filler, and quite often you can see it, because it makes all those bumpy lines in the transmission. So that's currently where we are with L-band low-Earth-orbiting weather satellites. Like I said, things
used to be better. Unfortunately, L-band is probably not long for this world: beyond the Meteor constellation satellites that Roscosmos is going to launch, ESA and NOAA both have no plans to put L-band transponders on any of their new satellites moving forward. So whilst this is fun, it will be time-limited. And you can 100% count on ESA to de-orbit or decommission a MetOp satellite while it's still fully working, because that's exactly what they did with MetOp A. Moving on from there, I want to give a quick update on L-band geostationary stuff. This is actually a slide from my last talk, but I liked it because, up in the top left, it gives you a perspective of the Earth and all the various low-Earth-orbit weather satellites from a distance perspective, and then shows you how far away geostationary orbit is in comparison. Geostationary is basically what it says on the tin: from our perspective on the planet, these satellites are stationary at a single point in the sky. In actuality, that's not quite the case; often they perturb around a point in a figure-eight pattern, depending on how much fuel they have, how much station keeping they can do, and so on. Some geostationary satellites are better than others: with some, you cannot point, leave your dish in the same spot, and expect to still have sync twelve hours later, because it will have moved that much even though it's "geostationary". Geo is good because you can just leave the dish set up; once you point and get a good signal-to-noise ratio, you don't have to track anything. The downside of geostationary is that because things are further away, you don't have as much signal strength, so your margin for getting lock is lower. But what I would say is that here in Australia, and especially here in Western Australia, I feel like we're kind of in the Goldilocks zone for geostationary, if I'm going to be honest with you. So there
are currently two Russian geostationary weather satellites within view of us, Elektro-L3 and Elektro-L4. There are also Fengyun-2G and 2H, Chinese geostationary weather satellites of an older generation, and Fengyun-4A and 4B, Chinese geostationary satellites of a newer generation. We also have what's now known as EWS-G2. NOAA basically takes old GOES satellites that they used over the continental United States; they run GOES-East and GOES-West and keep a hot spare in orbit in case one of them fails, and every time they send up a new satellite, they end up with a second spare. So what NOAA has done is hand that satellite over to the United States Space Force, and Space Force pushes it over, basically over the Middle East and North Africa, where it becomes a United States Space Force geostationary weather satellite. It's still the same GOES satellite, still unencrypted, still running the same services it did historically; it's just now owned by the United States Space Force and is a military asset. Beyond EWS-G2, there's also GEO-KOMPSAT-2A, or GK2A, a South Korean geostationary weather satellite. And finally there's Himawari-9, which is Japanese. You would probably recognize it, because all of the geostationary imagery that the Bureau of Meteorology uses comes from Himawari-9; if you go to their website and look at the current satellite photo of Australia, that's where the data is coming from. So we're very lucky out here, in terms of both the number of geostationary targets we have and the amount of data we can get from them. Since my talk in 2023, a couple of things have changed with Elektro-L. Firstly, on the left-hand side, that's an Elektro-L3 image. In 2023 it never did a fully illuminated full-disc image; for some reason, Roscosmos turned this on earlier this year. I don't know why; nobody in the hobby knows why. I can tell you now that it's also off again, so they're no longer doing that time slot on a daily basis. It's a bit weird. It was one of my biggest gripes about the satellite: it only does images every three hours, but it would never historically do the fully illuminated full-disc image. Now it kind of does. And on the right, this is Elektro-L4, the newest Elektro-L satellite deployed by Roscosmos. As you can tell, it's further east than Elektro-L3; when New Zealand's not covered in cloud, you can actually see New Zealand pretty well. That's an interesting satellite as well, just because it's nice to have options. Elektro-L3 is going to be replaced by L5; well, it was meant to have already happened this month, but that launch has not occurred. Roscosmos will get L5 up; when they do, they will drift it over to replace L3, then take L3 and move it over to replace Elektro-L2, which we can't see. That one basically sits over Western Europe, and beyond its X-band service it's basically completely broken. So I can tell you right now, all the European hobbyists are very excited about the opportunity to start receiving that weather satellite. Moving forward from here, I want to
quickly show you a few photos of stations that I've got. I'm not running all of these anymore, but at the time I was. On the left, that was a GK2A low-rate information transmission (LRIT) station; that's the South Korean satellite I mentioned earlier. Every 10 minutes it sends down a single infrared (IR105 band) image. It's not great, because it's infrared only, but because it transmits every 10 minutes, you can build very nice-looking animations out of the imagery. In the middle, the cream-coloured dish is my Elektro-L3 station; that's still running now. I've had it up since April of 2022, so I've collected a fair amount of Elektro-L geostationary imagery by now; enough to know that that satellite is interesting because of its lack of reliability, in my opinion. Things are constantly changing with that thing, and often not for the better. On the right is a new GOES dish setup that I brought back from the States years ago. On the two dishes on the left, you can see that I've got helical antennas on the front; helical feeds, I should say. On the right, that dish has a linear feed, so it's optimized for linear rather than circularly polarized transmissions, and because of that, those are very suitable for the Fengyun-2 series satellites, of which two are still available. So, moving on from here: after the BSides 2023 conference, that October was basically when EWS-G1, which was originally GOES-13, was being replaced by EWS-G2, which for us sits fairly low in elevation to our west. Still plenty enough altitude for you to receive it with a good setup. Unfortunately, because I live up in the Perth Hills and I'm surrounded by trees, I can't get a 20-degree elevation lock on any satellite; about 30 degrees is roughly the lowest I can receive from. That's a 1.8-metre prime-focus dish, and because it's a prime focus, I threw a cantenna in front of it. Because it's an L-band cantenna, the dimensions are quite big, and it was pretty weird throwing it together, because I finally realized my best bet was to go down to the local IGA and buy two of the
very large Milo tins. Basically, I took one Milo tin, cut the bottom off the second one, wedged that onto the first to get it to the proper length, then took an SMA flange, as they're called, which is basically an SMA connector with a metal rod attached to it, cut that to length, and that was my feed. So you might look at this and think, "Oh, that's incredibly technical," and it's like, no, it's literally two Milo tins jammed together with an AliExpress SMA that costs maybe a dollar. And this is what I was getting off it. Bit of a funny story: we don't drink Milo in our house, but my son has a friend who does. So we had the really awkward situation where we showed up to school one morning and handed over two Ziploc bags full of a brown powder to another parent while we were dropping the kids off. In hindsight, I thought maybe I shouldn't have done that; maybe that wasn't the best of ideas. But hey, if I started telling people why I was doing it, I think they'd be even more concerned. Okay. So, moving on from here: once I started investigating EWS-G2, I pretty quickly realized that, okay, this is a geostationary
satellite operated by the US Space Force. They they've got to have a ground station for this thing, but we're on the other side of the world from America. Where are they going to put this ground station? Um, and the obvious answer is Western Australia. Um so up north from here um Swedish Space Corporation operates what's known as the Western Australian Space Center. Um SSC is a uh private corporation that basically operates um ground stations throughout the world for commercial services and governments to basically have satellite uplink and down link capability. Um it it kind of blew me away. I reached out to them out of nowhere and said, "Hey, I'm a weird dude in the Perth Hills
receiving satellite imagery. Y'all want to catch up?" And pretty quickly they go, "Oh yeah, the whole NOAA ground station team is coming out in late October to commission the new satellite at the ground station. Why don't you come up and meet them?" And I said, "Yeah, that would be really cool." So I got to drive about five hours up north. You can see up there is Geraldton, and then quite a bit inland is where the space centre is. There's a lot of stuff up there in general; there's also a listening station up there. One of the guys I met who was working at
SSC had actually just resigned, because he was going to the listening station: he had just accepted a job there starting in three weeks. That was a fun conversation. It's quite a large facility, with multiple dishes across the site. It also does some search and rescue work, most of which bounces off satellites as well. Very interesting to go up there. So, while I was there: on the left, those are the NOAA / US Space Force dishes. These actually came from Wallops Island in Virginia, where NOAA has their ground station for, if I'm not mistaken,
the GOES-East satellite. They fully disassembled these dishes, shipped them to Western Australia, and then rebuilt them on the spot. When I was trying to organize the visit, it was an interesting conversation, because obviously this is a United States Space Force asset. That's US military, and I wasn't sure what, if anything, I would be able to see. I drove up on the day they told me to come, and when I got there, I was informed that that very morning word had come through from Washington that they could show me around. Up until then, they didn't know, which was pretty
interesting. Honestly, I didn't care either way, because there's more to the site than just the fenced-off NOAA / US Space Force ground station. But it was very good, because I ended up being able to go into the shelters, which are the little buildings you can see there. No photography in any of the shelters, but that's basically where all the ground station hardware runs that receives from and transmits to the satellite. Very interesting time, and very nice to meet the NOAA team; they were lovely people. I'd like to go back to the States sometime in the future, drop into Wallops, and hopefully catch up with them
again if I get the opportunity. While I was up there as well: in the middle of this image, you can see one dish pointing straight up. That's actually a SpaceX-owned asset. SpaceX uses that dish for Crew Dragon video downlink. So any time SpaceX launches a Dragon capsule with people in it, whenever it's over the Indian Ocean region, all the video downlink from that capsule is received by this thing. You can't see it well because my car is in the way, but just above the windscreen you can see the Starlink "Dishy" terminal. That's how they're
backhauling, because as you can imagine, the Telstra network connectivity up there is worse than garbage. So they just run a cable out of that little shelter there to the Dishy sitting on the ground, and that's how they backhaul all the video to the United States over the Starlink network. On the far right, you've got a JAXA-owned dish. If I'm not mistaken, this is both C-band and X-band. JAXA uses it for launch telemetry and for satellite station keeping. When I was there, there was actually a JAXA engineer working in the
shelter while I was being shown around. And it was awesome: shoes outside the shelter door, you know? I wish I could have gone in, because I reckon it would have been the cleanest and best shelter in the entire place. But unfortunately, because they were actually doing work at the time, the SSC people said, "No, sorry, we can't go in there." I'm hoping to go back up and visit again in the near future, maybe in the next few months; I've reached out and touched base with them again, so fingers crossed. ISRO, the Indian space agency, also
utilizes the site. That was really interesting. In the main building there, they have some HP thin clients that we could probably go down to Ross's auctions and buy at 30 bucks a pop, as many as we want. That's what they're running their tracking software on. I love ISRO: they run on a shoestring budget and do amazing things in space with it. So, moving forward from there, I want to jump over to a little bit of S-band weirdness, as I mentioned earlier. There's a guy online called Scott Tilley. He's Canadian; I'm pretty sure he's up in British
Columbia. His claim to fame is that Scott found a NASA satellite roughly a decade after they lost it; they had never managed to recover it, and he found it transmitting. He sent them an email on the weekend saying, "Hey guys, I think I found your satellite," and on Monday his inbox got swarmed with messages from researchers and other people who were over the moon, because the satellite was functional. They were able to get data off it, and everybody was like, "We thought this was dead years ago." Greatest Christmas present of all,
right? So, in December of 2023, Scott reached out to me. It's not that I'm anything special, or even very good at these things, but I have dishes, I have hardware, and I can listen on multiple bands. Combine that with the fact that I'm out here in Western Australia, where there aren't many other people doing this and we're pretty far away from everyone else, and it makes this a useful place to look for things you obviously can't see from North America, because there's a planet in the way. So Scott reached out to me in December of 2023 and said, "Oh, you know, the
Chinese are launching a space plane." Basically, what we were trying to do was find emissions from either the space plane or the objects deployed alongside it, so we could get an idea of the orbital parameters it was flying. There's a tool in the hobby called STRF, a software suite where you can take a baseband recording, combine it with accurate time and location via GPS, compare that against the NORAD TLEs of all the objects currently orbiting the planet, and fit the transmission to a known object. It allows you to map
who's transmitting what above you, and when. So I'm sitting out in the backyard with my 1.2-meter dish with an S-band helix feed on it, looking for objects from the space plane. This is an image of SatDump; over here I had Gpredict up, and you can see that I imported all the two-line elements for the space plane objects we were looking at, to see what transmissions they were potentially making, to try to figure out what they were doing with the space plane and all these objects. Quite long into the pass, as you can see here, it was
starting to get quite low; it was just above 30° elevation. And I got this stonking signal, and I'm just like, wow. That just turned on out of nowhere, it's really big and quite powerful, and it's on the frequency we're looking for. I've found either the Chinese space plane or one of its objects! So I sent the baseband recording to Scott Tilley in Canada, and Scott used the STRF tools to try to fit it. Come to find out, Ubuntu's NTP sync isn't very good, is the long and the short of it. At the time, I was running Ubuntu on my laptop, because it was a fairly new laptop.
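To give a feel for why clock accuracy matters to a Doppler fit (these numbers are my own illustration, not from the talk): near closest approach, the Doppler curve of a LEO satellite swings fastest, so even a fraction-of-a-second timestamp error shifts the whole fitted curve by hundreds of hertz at S-band.

```python
# Rough illustration (not STRF itself): how a receiver clock offset
# skews a Doppler fit. All numbers below are hypothetical.
C = 299_792_458.0          # speed of light, m/s

def doppler_shift(f_tx_hz: float, range_rate_ms: float) -> float:
    """Received-minus-transmitted frequency for a given range rate
    (positive range rate = satellite receding)."""
    return -f_tx_hz * range_rate_ms / C

def doppler_rate_at_tca(f_tx_hz: float, v_ms: float, slant_m: float) -> float:
    """Approximate df/dt at time of closest approach: the range rate
    there changes at roughly v^2 / d, for orbital speed v and closest
    slant range d."""
    return -f_tx_hz * (v_ms ** 2 / slant_m) / C

f_tx = 2.25e9              # hypothetical S-band downlink, Hz
v = 7600.0                 # typical LEO orbital speed, m/s
d = 500e3                  # closest slant range for an overhead pass, m

max_doppler = doppler_shift(f_tx, -v)        # approaching at full speed
rate = doppler_rate_at_tca(f_tx, v, d)       # Hz per second at closest approach
clock_error_s = 0.5                          # a half-second NTP error
apparent_offset = rate * clock_error_s       # frequency error the fit must absorb

print(f"max Doppler: {max_doppler/1e3:.1f} kHz")
print(f"Doppler rate at closest approach: {rate:.0f} Hz/s")
print(f"apparent error from a {clock_error_s} s clock offset: "
      f"{abs(apparent_offset):.0f} Hz")
```

With these made-up but plausible numbers, a half-second clock error looks like a ~400 Hz frequency error, which is easily enough to make the fit point at the wrong object.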
Scott tried to massage the data as much as he could, and he's like, "I think I've got a reasonable fit, and it looks like that was object D." So Scott starts posting about this on Twitter, other space news websites start picking up on it, and I had to send a weird email to my mother saying, "Hey, there are screenshots of my Twitter on the internet now, in the news, just so you're aware." Come to find out, my brother-in-law had already told her, which was very confusing. But moving on: we eventually realized that it wasn't the Chinese space plane or any of the
space plane's companion objects. After a bit more work on timing, and after I started doing recordings on a Debian laptop instead of the Ubuntu one, we quickly realized that this transmission was from Yaogan 30X. Yaogan is the name of the Chinese spy satellite networks. There are multiple: 30, 31, I think 34, 35, 36, 38, 39, and they're into the 40s now. There are a lot of these birds up, and they're all transmitting over Australia. The interesting thing about this, the thing that put Scott Tilley a bit on the back foot, and the reason we sort
of got confused initially, is that none of this happens over North America. None of these satellites ever transmit on these frequencies above Canada. But here in Australia, all the time, horizon to horizon, these things are loud. So it was a pretty interesting thing. As you can see here, Scott went back, redid some STRF analysis, and identified it as Yaogan 30X. I then did more recordings and more STRF matching after that, just to confirm 100% that it's the Chinese spy satellite networks. And yeah, it's all encrypted. Up in the top left, you can see the post where we provided
the baseband recording to Alan, who's in his 20s and is the lead developer of SatDump; he's in France. The best way I can describe him is that he sort of looks at satellite pictures, crosses his eyes like it's one of those 3D images, and pulls the imagery out of it. He can basically decode this stuff on the fly; it's a rare talent. He built a decoder, we had a look, and all the data is encrypted. And that's where I'm going to leave it, because I don't really know what else to tell you, other than: they're
doing this stuff above us, but over North America they're not. All right, now we're going to move over to X-band. This is my current X-band setup, still set up in my backyard (well, in my driveway). I'm going to quickly take you through it. That's a 1.2-meter offset dish. In front of it, I've got what's known as a waveguide feed; that's what takes the transmissions that get reflected off the dish. You might look at that waveguide and think, "Oh gosh, what is that?" It's a 28 mm to 42 mm copper pipe reducer for plumbing, plus a 26 mm ID / 28 mm OD copper pipe with an
end cap on the back of it, with a single SMA flange cut to a length of 7.5 millimeters. This stuff is not rocket surgery at all; it's quite simple in theory. It's basically a cantenna. Above that, you can kind of see a box, and that's what this is, though this one is a different model. It's a low-noise amplifier for X-band. I got it from Down East Microwave; they're in Florida, in the United States. Initially, and I'll show you this in a couple of slides, I was using a single LNA, and I wasn't able to get a strong enough signal-to-noise ratio. About six months into X-band
receiving, I started chaining LNAs, and that's when everything went good. So I now run two LNAs. You always run the first stage with the lowest noise figure. The way RF chains work, the first device essentially sets the noise floor for the entire chain (the Friis cascade formula: F_total = F1 + (F2 - 1)/G1 + ..., so later stages barely matter once the first gain is applied). So you use a very-low-noise amplifier as your first stage. For the second stage, you can basically put any microwave amp there; it doesn't need a low noise figure, because that's already been set by your first component, and it just boosts the signal further. So as you can see, I've got one sticking up, and then it goes to a second one, which
is this little guy right here. I picked this up off eBay a while back (ah, should have unwrapped that beforehand). This is a Narda-MITEQ LNA, used, off eBay. All of this componentry is going to run you at least $100 US used, plus shipping. A new LNA from the States, from DEMI, is going to run you $150 to close to $200 US, excluding shipping. This is the first problem with X-band: it's not cheap. There's no wide hobbyist market to bring prices down. So yeah, it's a bit of a disappointment, but what are you going to do? Regardless, moving on: once you
get through the low-noise amplifier stages, you then need a downconverter. Basically, all the software-defined radios we use, unless you're going to buy one that costs what a car costs, max out at 6 GHz. The X-band frequencies sit roughly between 7.7 GHz and 8.4 GHz. So I need a device that takes X-band frequencies and downshifts them to a lower frequency my SDR can listen to. The LO of this downconverter, which is hobbyist-made by a guy named Arvd, is 7,000 MHz. So, effectively, whatever I'm listening to at any given time, the frequency
gets shifted down by 7,000 MHz, and that's what I listen to (an 8,070 MHz downlink, for example, comes out at 1,070 MHz). But first, why do X-band? The answer is: so many more channels, so much higher resolution, and so many more instruments. These are all the current generation weather satellites from their respective countries. This is MERSI-2, the Medium Resolution Spectral Imager, which flies on the Fengyun-3 Chinese satellites. As you can see, there's a ton of bands with very good resolution: the visible bands are 250 meters per pixel, and most of the infrared bands are 1 kilometer per pixel. So, moving forward: I started doing X-band in February 2024 and immediately hit some problems. Top left, that's an
image from Aqua, which is a United States X-band weather satellite. Aqua has no error correction; there's no FEC enabled on the transmission. So once you drop below 10 dB signal-to-noise ratio, you start getting drops from the satellite. As you can see, there are multiple lines across the image where I dropped in SNR and couldn't maintain lock. On the bottom right is an image from Fengyun 3D. This was sampled at 45 million samples per second, which is quite grunty, let's be honest. Once you start sampling at that rate, you start realizing that SSD and NVMe drives have buffers that fill
and then start stuttering, and then you start getting drops. And it's not fun. So even though I should have had ample signal-to-noise ratio the entire way through that reception, in actuality I didn't, because I was having drops from my hardware. So, moving... whoops, I just pushed past one. Once I started using the second-stage LNA, I started getting much better imagery. This is from November 28th, 2024, from Aqua, the satellite that was top left on the last slide. As you can see, I was able to maintain a high signal-to-noise ratio for the most part. You can see a couple of blocks up
there, and a little band up there, where I dropped. But this is sort of what you can expect from Aqua, which is part of the Earth Observing System constellation. There was Aqua; Terra, which has unfortunately now failed; and then there's also Aura, but Aura doesn't do any visible-band imagery, so I don't have anything to show you from it. Moving on from here, this is Suomi NPP. Suomi was the first satellite in what's known as the JPSS constellation, the Joint Polar Satellite System, I believe it's called. This is the current generation NOAA low Earth orbit weather satellite system.
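A quick aside on those sample rates, with my own back-of-the-envelope numbers (the 4-bytes-per-sample figure assumes interleaved 16-bit I/Q, a common SDR recording format; exact sizes depend on your software): recording raw baseband at tens of megasamples per second demands serious sustained disk throughput, which is exactly where consumer SSD write caches run out and the drive stalls.

```python
# Sustained write rate needed to stream raw baseband to disk.
# Assumes complex samples stored as interleaved 16-bit I and Q,
# i.e. 4 bytes per complex sample (a common, but not universal, format).

def recording_rate_mb_s(samples_per_sec: float, bytes_per_sample: int = 4) -> float:
    """Sustained MB/s required to record baseband at the given rate."""
    return samples_per_sec * bytes_per_sample / 1e6

for msps in (25e6, 40e6, 45e6):
    rate = recording_rate_mb_s(msps)
    gb_per_pass = rate * 900 / 1000   # a LEO pass is on the order of 15 minutes
    print(f"{msps/1e6:.0f} Msps -> {rate:.0f} MB/s sustained, "
          f"~{gb_per_pass:.0f} GB per 15-minute pass")
```

At 45 Msps that works out to 180 MB/s sustained and on the order of 160 GB for a single pass, which is why the drive's buffer filling up turns directly into dropped samples.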
Suomi and NOAA-20, or JPSS-1 as it's known, both have severe nulls in their transmissions. What that means is that at certain elevations throughout the pass, the antenna on the satellite doesn't transmit as well as one would like: it encounters a null, and the result is a drop in signal-to-noise ratio while you're receiving. Once it passes the null, the signal-to-noise ratio increases again. You could have a perfectly fine lock; it doesn't matter, the null is going to get you. So you need ample amplification to overcome the nulls. This is NOAA-21, or JPSS-2. When they launched
it, it basically solved a lot of the Suomi and NOAA-20 null issues. It's also a bit higher resolution. For Suomi and NOAA-20, I sample at roughly 25 million samples per second, because the signal is roughly 15 MHz wide; NOAA-21, for instance, is 25 million symbols per second, so I sample at 40 million samples per second. It does a true-color image, which you can see on the bottom left, but that's a lower-resolution image in terms of the visible bands. On the right is a 321221 composite, which is
actually much higher resolution in terms of pixels per square kilometer. Moving on from there: as I said, more instruments and more channels are good. This is an example from NOAA-21, the fire temperature RGB composite. In SatDump, when you've received the imagery and you go to process it, there are multiple preconfigured composites available, and you can say, "All right, give me the fire temperature RGB composite." As you can see in this middle image, there are spots on the image: those are the hot spots from the satellite's imager, and they correspond with the fire currently taking place in the
visible imagery. These satellites are what the BOM and bushfire.io use for fire prediction. If you're looking at a fire on bushfire.io and you see a bunch of small dots, those small dots are sample measurements from NOAA-20, NOAA-21, and Suomi. The larger dots are from Himawari, the geostationary satellite. The reason is that the low Earth orbit satellites have higher resolution, so the dots are smaller: they can see roughly 250 meters per pixel, whereas Himawari, up in geostationary orbit, can only resolve about one square kilometer per pixel. So next time you look at a fire, you can actually click
on the individual dots, and it will tell you, "Oh, this one came from NOAA-20; this one came from Suomi," with the date and the time. Or you could just stand in your driveway and do it in real time like a normal person. Now we're going to move on to the Chinese low Earth orbit X-band weather satellites, the Fengyun-3 series. I've got to be honest with you: China is flexing. These satellites are phenomenal, both from a visible-band perspective and from a multiple-instruments perspective. As you can see here, that's Fengyun 3D from August 1st, 2025.
Here's a bit of a zoom-in from down south. It's just absolutely phenomenal imagery: 250 meters squared per pixel resolution, and you can get full true-color composites off it. Fengyun 3E was the next one launched. It has X-EUVI on it, which is a solar imager, so you can download images of the Sun in near real time. You can also build animations, because it sends multiple images; at the end of this, I'll probably throw up a quick animation to show you what that looks like. I didn't want to embed it, because I tried to and
I'm using LibreOffice and it basically crashed my entire computer over an 11-megabyte GIF. It's 2025; it's cool. FY-3F is another of the 3 series; it does morning passes. 3D is basically the afternoon satellite, coming over anywhere between 2:30 and 3:30; this one comes anywhere between about 9:30 and 10:30. It has the MERSI-3 imager on it, which is slightly better than the MERSI-2 instrument that Fengyun 3D runs. The imagery is just phenomenal, and the morning passes are quite good. Fengyun 3G is weird. It's basically a sideways sat: instead of a polar orbit, this thing is in a
drifting orbit. It requires reboosting as well, because it's quite low, only roughly 400 km above the planet, and every time it reboosts, you lose transmissions for that period. But it's a really interesting satellite. I also wanted to quickly show: on the left, that's an uncorrected image; on the right is a corrected image. The left is basically what the imager sees, one to one, but because of the curvature of the Earth, you can correct the image to make it look like a proper map, and that's what I've done on the right. Basically all the images I've shown so far, bar the one on the left, have
been corrected images. All right, Fengyun 3H. This just launched on September 26th, and a little over two weeks later, on October 12th, people in the hobby started getting images from it. I got this on the 14th. It's a really, really good satellite. This is the replacement for Fengyun 3D, the first of the Fengyuns I showed you, with that gorgeous imagery, because that thing's been up for eight years. China's not playing. 3H is very crisp, really pretty, with great resolution imagery. Okay, X-band geo. I've got a dish problem, as I mentioned earlier. It was not fun driving this 2.3-meter
prime dish home in the back of a trailer down the highway; I don't want to do that again. But I needed a big dish, because GEO-KOMPSAT-2A has an X-band Ultra High Rate Information Transmission service. This is what's known as a cast: it's not raw data from the satellite. The data has been transmitted down to a ground station, transmitted back up to the satellite, and then rebroadcast back down again. This is basically their commercial service, which I'm not paying for, and it's the best service they offer from the satellite. The transmission is roughly just over 15 MHz in width, so I sample it at 31 million samples a second. Here's a
bit of an image of what it looks like when I'm recording it; it's on 8,070 MHz. The right gives you an idea of one transmission in terms of the data we're dealing with. It does this every 10 minutes, and this is just the full-disc imagery; it also carries additional data that the SatDump team hasn't even started decoding yet. So yeah, lots of bands: four visible, twelve infrared, sixteen channels in total. The visible images are anywhere from half a kilometer to 1 kilometer per pixel, and then 2 kilometers for the infrared channels. So on Monday, I was chatting with some
of the developers on the SatDump team, and I asked them for a true-color composite from the satellite. This is what I got. It basically uses the visible number six image, which is the half-kilometer-per-pixel one, but that's a monochromatic, or panchromatic, image, I believe it's called. Looking at it alone, even though it's very high resolution, isn't that interesting. But you can take the RGB channels, the other channels, and combine them with the panchromatic image in a process called panchromatic sharpening, and you can build absolutely massive composites. This one PNG file is 1.13 gigabytes in size, from one 10-minute time slot transmission
from GK-2A. There's a crop of just Australia. Zoom in: looks pretty good to me. And then, yeah, it's a full disc of the planet, so it's quite good resolution. So, basically, I'm going to throw this up and say: here are some places to learn more. There are Discord servers, there are a couple of really good YouTube channels, and there are some guides on the web. I'm here all day today and tomorrow, and as you can probably tell by now, I'm very comfortable talking about this stuff. I would love to talk to you about it, if you're willing to put yourself through it. So I guess from
there, I'll quickly say what's next. I got a new toy. This is called an RFNM; "RF Not Magic" is basically what it stands for. This can do 128 million samples per second. ESA just launched MetOp-SG A1. I've tried it a couple of times; I'm going to have to tune my setup a bit, but fairly soon I'll hopefully have the new MetOp ESA satellite as well. I think we're running pretty close to time, but we might have enough time for a question or two, if that's okay. Yeah. So, anybody have any questions? Yes.
>> Oh, okay. You know, that's a great question. Basically, there are two rough designs for dishes: a prime-focus dish and an offset dish. A prime dish is designed to reflect everything it receives to a focal point in line with the center of the dish. An offset dish, instead of reflecting to the center, is designed to reflect to an offset point. The beauty of an offset dish is that your feed and all your other receiving equipment are not in line, blocking any of the radio waves coming in.
I find that's better for smaller setups. But when it comes to geostationary, I find primes a little bit better, just because you can use a prime feed. Your feeds also have to be designed for the dish. It's complex; come and talk to me. Other questions?
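As a footnote to the prime-versus-offset answer: for a prime-focus dish, you can estimate where that central focal point sits from just the diameter and the depth of the dish, using the standard parabola result f = D² / (16c). The numbers below are made up for illustration, not measurements from the talk.

```python
# Focal length of a prime-focus parabolic dish from its diameter D
# and its depth c (how deep the curve is at the center):
#     f = D^2 / (16 * c)
# Example dimensions are hypothetical.

def focal_length_m(diameter_m: float, depth_m: float) -> float:
    """Focal length of a parabolic reflector of the given diameter and depth."""
    return diameter_m ** 2 / (16.0 * depth_m)

D = 1.8    # dish diameter, meters
c = 0.30   # dish depth at the center, meters
f = focal_length_m(D, c)
print(f"focal length: {f:.3f} m, f/D ratio: {f / D:.3f}")
```

An offset dish is effectively a section cut from one side of a larger parabola, which is why its focal point, and therefore the feed, ends up off to the side and out of the incoming beam.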
>> Yes. A lot. >> Yes. >> Yeah.
>> Yes. So those big dishes up there for the US Space Force are hurricane-rated. Literally, a category 5 hurricane could be coming through that point, and that dish is designed to stay stationary and functional during the weather event. But it's a very good comment, because the person from SSC who took me through the station had spent the two days before giving the NOAA team a ton of grief: "Well, Drew's coming up, and he's doing what you're doing with a 1.5 in his backyard. Why do you need these huge dishes, guys? What are you doing?" So yeah, it was
good. They found it quite funny that I was basically receiving their stuff on hardware an order of magnitude smaller. Are we good? >> Good. >> Excellent. All right, thank you, everybody. Oh, just the coin. >> Ah, the challenge coin. Thank you. >> We can do the handovers now. Just a point of note, by the way: the livestream camera is just here, pointing across, so keep to that side if you don't want to be on the feed. It's better that way because it means we can get easy access at the front. So, just give us a second and we'll be back with you.
There's no guarantee. >> Yeah. Yeah. RTL-SDRs. Once you get to know what that means, you can buy one.
Don't worry.
We'll test it.
Is there
Yeah. Intro. Yeah.
Thank you.
So, it was mentioned earlier: AI. That's what this talk is about. Who am I? I've spent 20 years in IT, up to CTO, building systems around humans, for humans. I think that's pretty important: a lot of system implementations go in, and a lot of humans get in the way of systems. I've built software and infrastructure in healthcare, from tiny startups to emergency healthcare, and I've even worked for international software companies. So why am I here?
AI here mainly means large language models. So, a quick show of hands:
who uses it for research? Coding? Yeah, cool. Communication? Who's getting it to write your emails or your texts? Cool. Who's using agents? Okay, keep your hands up if you're running your own MCP server. There's a few here. That's cool. So, what happened in 2025 that was really big on the internet? Something significant happened just recently. Who wants to yell out a guess? >> No? Nobody? >> Internet search died. No longer does Google behave like the search engine we all grew up with and abused while we were learning how to use it. It's very different now. This is a massive change in how people access information and search. It's faster. Why wouldn't you use it?
So, for those people who didn't put their hand up because they don't use AI: when was the last time you did a Google search? It would be AI giving you that answer now. It's very different. >> The other thing they've done is remove the page numbers down the bottom, if anyone's noticed. They no longer crow about how many pages of index they've got; that's gone. The other thing that happened this year, and it makes me sad, and I reckon there'll be at least one person in this crowd who knows why: Jamie from the JRE podcast has been replaced by AI. No longer does he get to trawl through
Google while they're making the show. >> It's all Perplexity now. >> So even the best-known Googler in the world has been replaced. Well, maybe it just looks different; maybe it's still search. But what's happening to websites and internet traffic has changed as well. There's been a 25% decrease in click-through to sites, so traffic isn't getting there anymore. There are 96% fewer site links, 84% fewer videos, and now 58% of Google searches result in no clicks at all. People aren't searching on Google anymore. So, is that good or bad? It's faster, people are getting their answers quicker, but we're just taking what it says. I'd be really interested to know what
that's also done to their AdWords model, because they're not getting any throughput now. >> So, AI is here. Organizations are using it; everybody wants to put it in. Enterprises are actively advancing generative AI in their systems. The stats say that 92% of Fortune 500 companies are using it. There are millions of users and billions of requests; it's not going to go away. So what has that changed, with a very broad brush, for IT security? The stats are scary. AI ends up supercharging all the old tricks, and you end up with phishing attacks increasing by 1,265%. That number is crazy. The number of reported AI-embedded cyber attacks has risen.
Red-team public competitions of AI agents are crazy: PentestGPT gets something like 28 out of hundreds of capture-the-flags; it does really well. Breach volume is at record highs, and a lot of that is social engineering. One of the weakest points that AI, especially large language models, is being used to leverage is humans. The spam it writes is beautiful; it gets past all the old filters. So, think about what you're doing: is your toolkit full of boring tools? And what does it mean to be bored? The tools are there. Are you actually using them enough
to keep up with the adversaries, the guy sitting next to you, the guy going for your job, or the girl that's DMing people using ChatGPT? It's out there; I've seen it. In the past year, I can think of a bunch of times I've leaned on it: developing power simulation systems for battery setups, fixing slow DB queries, debugging e-commerce websites. It's great. Plus emails, chats, planning trips, all that kind of stuff. So, what are the bad guys doing? Well, they're automating everything: your recon and exploits. There's a tool called HexStrike AI, and it's scary. It's got a laundry list of tools. It does automated open-source
intelligence. It scrapes LinkedIn. You can tell it to look at GitHub, Shodan. It's amazing. So if you wanted to uh sit there and kind of surf the web looking for targets, that's a slow way of doing it. Now phishing is getting automated heavily. It has the ability to draft emails that mimic your CEO's tone, especially if it's got access to their mailbox. Like, if it's had some of the communications from your CEO, it'll be able to pick them apart. Translations? That's not a problem anymore. It's pretty hard to pick out the Nigerian prince now. And they're even using voice cloning and video cloning in some attacks. Not only that, the campaigns can be generated very quickly. Low amount of
effort. They can also be tested in the actual sandbox itself, so you can run your proper tests and see how it goes. And deployment in hours. Coding malware: this is crazy. I'm not a coder, I'm a hack with simple scripts, and it'll do it. Uh, and it can actually do things like hide the code. I was talking to my sister over here about one attack that she's seen. It wasn't encrypted or obfuscated in any way, but the tools are there to do it. Why aren't they doing it? So with all those tools, these hackers, they're getting some superhuman skills. Social engineering 2.0: hackers are using it to generate profiles and resumes to pose as developers, legit actors.
There's even, and I won't go into it too much because I know it's in a later talk, a cool story that involves North Koreans getting hired >> and staying hired. And I know there's another really interesting one with OpenAI, where they busted the Chinese using it to draft malicious attacks against US citizens. Um, that's a really interesting one where they're using their own tools, which are meant to be, you know, safeguarded against them. They're actually going, "Oh, I'll use that to generate the content." So, that kind of threat vector has changed. You've now got AI as like a social engineer, not in human form. It's going to be faster. It's going to
be quicker. And it erodes trust, and it creates an issue where your biggest problem, or what grows to be your biggest problem, is trust. Who's generating it? Who's doing it? The amount of information now being generated is crazy. How do you keep up with that? So, we've now seen the hacker toolkit side of the toolbox. What do you do from the other side of the fence? Well, you automate all the things for defense. Uh, AI can triage alerts, do summaries, you know, free up people to do that higher-order work. Um, you can hand it, you know, all those uh repetitive tasks with prompt engineering. When you build this, you probably should keep a human in the loop. Otherwise,
you know, you'll lose your job. Don't automate, augment. But automate everything you can, because you'll need to be defending at machine speed, because that's what they're attacking at or, you know, collecting information at. So, the future of work: it's going to be augmentation rather than automation. Uh, Jobs and Skills Australia put out a report where they think AI will automate 13% of tasks and augment 55%, reallocating labor rather than destroying it. Don't know why they'd use those words about destroying labor. Maybe it was a liberal read. Now, clerical, reception, and accounting roles are the ones that are going to see the biggest decline. That work of just moving information
from one system to another, answering the phone, reading emails, that's going to see a big pull. Cleaners, nurses, construction, hospitality workers, they reckon that's going to come up. So, I guess we're not getting robots anytime soon to do the cleaning. The biggest thing they point out is that AI literacy and adaptability are going to be the biggest things that matter, and jobs are likely to evolve rather than vanish. Um, I had a really interesting chat with a radiologist friend of mine, and he used ChatGPT to automate and write an interface for his home automation system. Now, he's a radiologist. He's not a coder, but he went to ChatGPT. He told it what he needed and it gave it to
him, and it worked. He gets on the phone to me and he's like, "It's coming." I'm like, "No, it's already here." And like, the conversation went to: what are his kids going to do? Cuz he'd been pushing his son down a path of do computer science, get there, do this, do engineering. Kid's not really engaged. But what are they going to do? So I tried to turn it into a light-hearted conversation and asked him, well, what would you still go to a human for over a computer? And he came back with the answer of food. So I think we should all become chefs. But, you know, probably use the tools available, use it to make recipes,
things like that, do your ordering, because who likes paperwork? >> So, agents and MCP servers, or you know, Model Context Protocol servers, are the next frontier. Collaborative AI is emerging, which is an interesting concept where the different models you use and chain together almost act like general contractors. When you build a house, you get a plumber, you get an electrician. You know, when you start looking at trying to build things, you go, "Okay, well, I'll use an illustration bot, or I'll use a text bot. Don't try and use the PowerPoint bots, though. They draw boxes everywhere. It's crap." So, in doing this talk, between submission and now, one of the most interesting things that I found
that happened was the amount of effort I had to put in early on to, like, do submissions and things like that. It's changed so much since March. What I had to take time to craft and put together, agent mode creams; it does it so fast, goes away, does it itself. It's not cheap, but it does it. But if you think about using AI in the workplace, getting it to build, you know, task lists for people, make it be a project manager if you want to. I wonder what we're going to see our work look like, what the pace is going to be, cuz you know, a project manager, a good one, he keeps you moving, throws you tickets, asks for
updates, moves the work around if he needs to. But if that gets automated, you're not looking at like a garden hose anymore. You're looking at like a SpaceX deluge system's worth of, you know, job requests, or what you need to do, or give-us-an-update. Because if it's automated, you know, if they're creating that pipeline to go bug the guy every 5 minutes, I guess we'll just have to automate back: send an update every time he asks for one. So, in conclusion: embrace the bored. And it got silly and chose the wrong kind of board. Prepare for the agents. The agents are going to be the big multiplier next. The individual models are running out of
improvements. They've been trained up fast, and they're running out of stuff to feed them. But the agents, chaining agents together and getting them to do the work for you, is going to be where you see the most mundane work fall away. You probably want to keep the humans in the loop so you keep a job, like I said, but also for the judgment and ethics; you know, these models can go a little bit astray. So, you've all been hacked. BSides has been hacked. Well, kind of.
It's an unauthorized access and privilege escalation. The vulnerability was identified when we were able to put in a fully generative-AI submission: all the content, all the structure, all the images, all the jokes. Not my fault. It was prompted to compose the submission and, later, the entire talk. But it's a proof of concept, and it's a success, because I'm standing here: an AI-authored talk could be presented live. You start asking questions about, what do we do? Which content do we trust? What stuff are we reading, watching, you know, is it AI slop? So if AI can be used to hack or engineer access like this, what else could it engineer access to? Especially if there's, you know,
a lot more resources, state-backed actors. What could happen? So, some final thoughts. Who's really holding the keyboard? If AI writes your code, drafts your, you know, emails, and plans your day, at what point does that authorship and accountability change hands? Do you blame the AI if it screws up, or is it you that was crafting the prompts? Can we outsource wisdom? Knowledge has kind of now become infinite and instant. You can find out anything really quick. But wisdom requires judgment, empathy, and restraint. You know, an AI just basically doesn't have feelings. So, you know, what if we hand over that thought leadership to AI, where we go, yeah, all right, that sounds good, let's do that?
You know, all those interactions in business and, you know, people to people, are they still meaningful? And what happens when there are no more real original thoughts, and it just becomes a regurgitation and a replication of "that's what that thing said, let's just try that"? The next concept is: can you patch human trust? Ever had somebody do something wrong to you, do you dirty? How easy is that to fix? Every exploit has been about trust. Someone believed the wrong input. Someone told you something, you believed it. The AI could tell us something, we'd believe it. And then the next question is, if it can gain root access to your attention, you know, like when you're sitting there
on a phone just scrolling, what does the AI install in us? You know, we might have sat there and firewalled our networks, put, uh, you know, AV on our machines, but we haven't really prepared our minds for this. And I don't think the general public's ready for it either. You know, how do you believe what's being put in front of you? So, you know, are we letting AI reprogram us, our workflows, our values? How do we decide which parts of ourselves we should automate? And it goes back to that comment I made earlier about a mate of mine. Look, we'll keep him innocent, so we'll call him Simon. Simon asked me the other day: hey, how
can I automate this DMing on these apps to try and date? You know, can I just, like, make it come up with really good remarks and put them in? And then, I don't know, phones must have been listening to us, but up came an advert for, I think it was Perplexity's Comet, the new browser. And it was actually two people sitting there talking about how to use that browser-based tool to do the DMing and even book the dates. >> Is that really something we should be handing to AI? So, thank you.
[Applause] >> Now, questions? >> A very good friend of mine, for the exact reason that you brought up. >> Yeah. >> She failed. The company that she represented at the time, 300,000 a month, and the management was like, shal.
>> Yeah. Another guy in DevOps was developing a tool to track people using clicks to generate revenue. >> Yeah, it's kind of not gone anywhere. >> Yeah, you're not going to get there anymore. Um, you are right. Um, I've had a very, very similar experience just recently with an e-commerce website that I'd been contracted to fix, and we fixed those performance issues; like, the owners are super happy, and we used AI to work out what to do, 100% transparency there. But the next thing he wanted to go ahead and fix was, he wants to now sell an SEO package to people who use his SaaS platform, like his SaaS e-commerce platform. And yeah, I'm trying to have
that conversation with him now: that it's dead. Don't worry, don't put the investment into it. You need to be making your websites ready so that, you know, you're providing information that AI agents are going to be able to pick up and serve to people, if you want to direct that traffic or make those sales. Because if you look at those click-through rates, that's changed. Yeah. Cool. Thank you very much. Thank you. [Applause] Thank you very much. Uh, we're heading into the morning break now, so we've got a scheduled set of time. The first batch of pizzas is due to arrive in a little under an hour. So we will set up in the meantime.
Do you want to introduce? >> Yeah, Angus. Got the mic on. >> Okay, grab a seat. Get comfy. We'll get ready to restart. So, you're with them, with Kylie and Sylvia? >> Yep. >> From Russians. >> Yeah. >> Not Russia. >> Bosses will judge me all day. >> No. A little bit nervous, but once I'm in the flow of it, it'll be good.
Almost at the top. I assume it's one of the radio mics, right?
Nobody wants to cut into your meeting time, although it gives us an opportunity to get them all in. Grab a seat, get comfy. Um, I will hand over to Angus for a walkthrough of an n-day Android GPU driver vulnerability. Uh, Angus, take it away. >> Thank you very much. Can everyone at the back hear me? Okay. Is the volume good? Thumbs up. Awesome. Um, so g'day everyone. My name is Angus. Um, I work at Infosect. Today I'm going to be telling you about um some bugs in the GPU driver for Mali, which is a type of GPU made by ARM. It's used in a lot of Android devices and stuff. Um, and yeah, so this talk uh
was, first of all, I want to be clear: um, the bugs that I'm going to be talking about in this presentation, I didn't find. Um, they were originally anonymously reported, and the way I came across them was this blog post that I got sent a link to by one of my co-workers. It was published by a Singaporean company called Star Labs. They publish a lot of really interesting research on a lot of stuff. Um, the vulnerability dates back to 2022. And when I was looking through this blog post, they did a really interesting job of describing how they exploited this vulnerability and turned it into a full Android exploit, but they didn't dive much into how it worked. And in my spare
time, I was just looking through it and trying to understand how it worked. And I learned a lot about the Linux kernel and the ARM Mali driver along the way. And I thought it would be really interesting to share what I learned with you guys so that we can all learn a bit about um Android GPU drivers and how they work and how they can be exploited. So, got a lot to cover this talk. We're going to go through the background of how drivers work, um, the Mali driver itself, how the vulnerability works, and how it can be exploited. Um, so let's start with a bit of background. So, as I mentioned, ARM Mali GPUs, they're uh one of the three
most common GPUs used in the Android market. Uh, the other two being Qualcomm Adreno GPUs and um Imagination Technologies PowerVR GPUs. Uh, it's used in lots of different phones. You'll see there's that big diagram on the side there; it's a screenshot from Wikipedia. There's hundreds of phones that contain it: your Google Pixel devices, your Samsung Galaxy devices, a bunch of Huawei devices, um, MediaTek chips used in a lot of lower-end Android phones. Um, also a bunch of embedded devices like Rock Pi, uh, STMicroelectronics, they use it in a bunch of MCUs as well. So Mali is everywhere, especially in the mobile and embedded market. Now, one thing to note about Mali GPUs and uh embedded GPUs like
it is that they use an integrated, shared memory model. So, it's not like if you're a gamer and you have a gaming desktop with a CPU and some RAM, and then a gaming GPU with like 16 GB of RAM that's separate to your main RAM for your computer. Uh, mobile GPUs, they normally share their memory with the same stuff that's being used by the CPU, and then it's up to the Linux kernel, or whatever you're running on your system, to manage that memory and share it between the CPU and the GPU. So, let's have a look at the Android uh driver stack for graphics. It's really complicated. When you
look through the docs, you'll see there's so many acronyms everywhere. This is a screenshot of the Android docs somewhere. It says the HAL is defined in the AIDL and the HAL. Like, what does that even mean? I don't know. There's, like, diagrams. They're confusing. There's arrows pointing absolutely everywhere. Um, look, I don't know about you guys, but this doesn't make much sense. So, I spent like a day digging through docs and have tried my best to summarize it down for you. So, there's sort of five main parts um that are important in the Android graphics driver stack. So, the first part is your application. Um, say you're developing the next Candy Crush or something. It's going to talk to your
graphics stack using standard APIs like OpenGL or Vulkan. If you're a gamer, you've probably seen DirectX on Windows. If you're on Apple, you might have seen Metal. Um, these are standard APIs that are used across all applications and all GPUs. These are going to talk to the next bit, which is called the loader. Now, the loader is responsible for working out which GPU driver to forward that API call to. Normally this is quite simple: there's only one GPU on your system, so it just forwards it directly to that. And that's the case here with our Mali driver. But in the case of, say, a gaming desktop with a separate graphics card and integrated graphics in your CPU, it might forward that
request to a different graphics driver depending on what your power settings are set to. The next thing it gets forwarded to is the user space driver. This is where most of the magic happens. It takes those OpenGL calls and translates them into instructions the GPU knows how to run. It works out how to translate all your instructions to, like, coordinates or triangles and stuff. I don't know, it's a bit complicated. This is where most of the complexity happens. Um, but we're not going to be focusing on it too much today, because the next step is, once the user space driver has sort of compiled your API calls down into something the
GPU understands, it's going to forward it to the kernel driver, which is the bit that actually talks to the GPU. Now, the GPU kernel driver is mainly responsible for memory management, direct communication with the GPU, power management, handling interrupts, all that sort of stuff. Um, and then it talks to the GPU, which will have its own firmware for forwarding those requests to the right parts of the GPU to do actually useful things. And so in this presentation, we're going to be focusing on the Mali kernel driver, which is a driver that runs inside the Linux kernel, which is what runs on Android phones. Uh, and the way we interact with it is using these ioctls, or I/O controls, um, which are a
way on Linux of communicating with device drivers. So why are kernel GPU drivers like the Mali driver subject to a lot of public research? Well, the answer is, and this is probably obvious, they're an attractive target for exploitation by attackers. But why is that the case? So, the thing about kernel GPU drivers is, if you go back and look through your Android security bulletins and all the CVE records, you'll see so many bugs in all these drivers. They're big, they're complex, there's a lot going on, lots of room to make mistakes. So, attackers love them because they're full of bugs. Uh, the other thing about GPU drivers, um, is they're quite a common attack surface
across multiple different phones. As I mentioned, there are only really three main GPU manufacturers for Android. And if you think about the commercial spyware vendors, like Cellebrite, Grayshift, that sort of thing, they want to write one or two or three exploits that work on all the phones they want to target. They don't want to write a million different exploits for a lot of different phones, because that's really complicated and expensive for them. So when they're finding bugs, they want to focus on areas that are common across lots of different Android phones. The other thing about kernel GPU drivers is, because most applications want to use the GPU to, I don't know, play your
Candy Crush game or something, they actually need to access the GPU, which means that the GPU is a very unprivileged resource. It can be accessed from anywhere, uh, almost anywhere, including untrusted_app, which is the standard um Android sandbox domain that normal apps run in. The other thing about um bugs that you find in kernel GPU drivers is that they often provide very flexible um exploitation primitives. So what's really common to see is page use-after-frees, um, which are a really powerful primitive that can be turned into a full exploit. Uh, and finally, because the kernel GPU driver is running in the kernel, if you find a bug in the kernel driver and you exploit it, you
get full access to everything the kernel has access to, which is everything. This is utterly catastrophic for the end user of a device, because it means that everything happening on that device is fully compromised. Kernel driver bugs are kernel bugs, which are really bad. So there's a lot of research on kernel GPU drivers from both attackers and defenders, precisely because they're really, really interesting attack surfaces for local privilege escalation exploits. So I'm going to give a bit of background on virtual memory and physical memory and how it works, because that's really important for understanding this presentation. So if you've ever taken a basic binary or C course or something like that, you've probably seen a diagram
that looks something like this, where you've got a process. It has a bunch of areas like the stack and the heap and stuff like that. It has its own addresses, and in this nice little diagram we think the process is the only thing running, and it all works and it's very nice. Under the hood, what's actually happening is the OS, the operating system, and the hardware work together uh to give every process their own independent virtual address space that's entirely isolated from the virtual address spaces of other processes. These virtual address spaces are broken up into fixed-size chunks known as pages. Typically they're going to be 4096 bytes in size, or hex 1000, uh, as we
see in this diagram. And then what the operating system does is map those pages to areas of actual physical memory. And whenever a process goes to access an address, say hex 2000, the hardware and the operating system will work together to translate that to the underlying physical memory that backs that allocation and return that to the user. And this gives every process the illusion that they have their own independent address space. Now, one thing that virtual memory lets you do uh is this concept of demand paging. So, say you make an mmap call to map a file into a process's address space. We might like to think that when you map a file into your address space,
it's going to bring all of those pages from disk into memory and then set up mappings to all of those things so that the user can access them. This isn't quite the case. What actually happens is, when you go to map the memory, it'll allocate some virtual memory in your process for that file mapping, but it won't actually set up the mappings quite yet. You'll see here I've just marked them as not accessible, with a star to say that the operating system has kept track of a bit of metadata here. And what happens is, when the program goes to actually access this file for the first time, say it goes to access the first page here, it's going to trigger a page
fault in the hardware. And then the operating system is going to see that page fault. It's going to notice, hey, they've mapped in a file here, and it's currently not accessible, but what we can do is go and bring that data into physical memory and set up a mapping for them. And then the user can go and actually access that page as they originally intended. The advantage of this is it means that only the pages which actually get accessed get brought into memory, and they only get brought into memory when they're needed for the first time, which can be a significant saving in physical memory usage. Another important concept to know is the
idea of the page cache and copy-on-write. So let's say we have two different processes and they want to share some file in memory. So say we have one process here and it wants to map libc. So, libc is the C standard library. It's used in basically every program running on your system, because almost everything is written in C or calls into C at some point in the chain. Uh, so, libc, say they map it into their address space. Due to demand paging, as I just mentioned, this doesn't actually set up a mapping to physical memory quite yet, until the user goes to actually access that page for the first time, at which
point the operating system is going to bring that physical memory um from disk into memory. Now, if another process goes to map that same page, again due to demand paging, no mapping is going to be set up yet. But when it goes to access that page for the first time, what will happen is the operating system will map it to the same underlying physical memory. Because these two processes want to access the same data, we can save physical memory by sharing that data in what's known as the page cache. Now, the problem here is, originally these processes both wanted read/write permissions to these pages. But if the two processes shared that data with them both being able to read and write
to it, they'd then be able to interfere with each other via that underlying physical memory, which would be bad. So what the operating system does is it marks those pages as read-only temporarily. So both processes can happily go and read data from those pages. And when one goes to attempt to write to it for the first time, it'll trigger a page fault, at which point the operating system is then going to go and make a duplicate copy of that underlying physical memory and update the mappings accordingly, giving them full read/write access. And so by doing this, we've maintained the guarantee that two processes which share some memory are not going to be able to modify it. And
if they want to modify it, they have to get their own copies, so they're not interfering with each other. And this is a core security guarantee of the kernel, which we're going to investigate a bit further. So, back to the Mali driver. Um, so how does the Mali driver work? Like pretty much every device driver on Linux, it's exposed as just a character device in /dev. As you can see here, it has the GPU device context, which means uh that we can access it because it's a GPU. Uh, we can open the file just like any other device file, or any other file on Linux, and we get back a file descriptor representing that file, and then the main
way we interact with this driver is using ioctls, or I/O controls. I don't quite know how they're meant to be pronounced; I'll say "ioctals" throughout this talk. Uh, and so when you're doing an ioctl, what you'll do is you'll pass it the file descriptor of the device file that we just opened up here. We'll give it an ioctl number. So here we're saying we're going to do the version check ioctl to find out what version our driver is. And then we give it some data. So here we're going to give it this version check struct, which it's going to fill out for us with the version information. And so when we call the ioctl, it's going to fill it out and
then we can print it out, and we can see our version is 11.35 in this example. You can do more complicated things using these ioctls. For example, what if you wanted to allocate some memory? We're going to do that using the KBASE_IOCTL_MEM_ALLOC ioctl. When we're calling this, we're going to pass it this kbase_ioctl_mem_alloc struct. We're going to tell it how many virtual and physical pages we want to allocate, and we can also give the pages some permissions. So, we can say the CPU is going to have read and write access to this, and then the GPU is going to have read and write access in this example. And then once we've allocated this, what
the kernel is going to do is it's going to set um a GPU VA field in this struct, we can see it here, which is sort of like a handle to that memory that we've just allocated. And then what the user can do is call the mmap syscall, which will map that GPU memory into the CPU address space. Now, this is all a little bit complicated, so I'm going to bring up some diagrams. I love diagrams. I hate code. So I'm going to have a lot of diagrams in this presentation. So, when we call the mem alloc ioctl, what's happening under the hood? So, it's going to allocate some actual physical
memory for the GPU, and it's going to set up a GPU mapping in the GPU's virtual address space. And it's also going to create a kbase_va_region struct, which is used to keep track of GPU virtual addresses. The other thing that's going to happen is it's going to create a kbase_mem_phys_alloc struct to keep track of the underlying physical memory that's backing the allocation. And then finally, our user program is going to get that GPU VA handle that I just mentioned, which is going to point to that va_region struct. So the user has a handle to this underlying memory allocation via these structs in the kernel. Then, when the user goes to actually map
that memory, it's going to resolve that handle pointer to the va_region struct, and it's going to set up a new CPU mapping struct which points to the memory it's just mapped. Now, due to demand paging, as I discussed earlier, it's not actually going to set up a mapping from the user's CPU mapping to physical memory quite yet. This won't happen until it actually goes to get accessed for the first time, at which point the kernel will follow through all these pointers, figure out what memory it points to, and then set up the mapping for the user. So that's an example of how um allocation works. Let's have a look at the actual vulnerability. I'm just going
to take a quick drink.
Okay, so let's have a look at the vulnerability. So the vulnerability is called CVE-2022-22706, and I'm not going to bother reading the second one. Uh, the description is quite short and not very descriptive. It says: Mali GPU kernel driver may elevate CPU read-only pages to writable. Oops. A non-privileged user can get write access to read-only memory pages. It doesn't tell us where the bug is, how it works, anything like that. So, we're going to have to dig a bit more. Thankfully, there are a bunch of public writeups. Um, I found these really useful when preparing for this talk. Um, I'm going to have the links at the end if you'd like a copy. And so, this
vulnerability was disclosed by ARM in 2022. It was anonymously reported to them. It affected a whole bunch of driver versions over like six years. So, this is pretty crazy: a bug that existed in the driver for six years and was exploitable. I did a bit of digging and found the patch for this. It's not immediately obvious, because ARM's source releases are not the best. Uh, but we can see here that it seems to have introduced this new "write" variable, and it's checking some flags, and it's happening in this function called kbase_jd_user_buf_pin_pages. Not immediately obvious what this does, what it means. So let's dig a little bit deeper. What is this kbase_jd_user_buf_pin_pages function, and what does it actually do? So, this is all related to a part of the Mali driver that's used for importing memory into the GPU driver. So say the user already has some memory that they've allocated, and they want to import that into the GPU driver so they can do useful stuff with it. This is done with yet another ioctl, called KBASE_IOCTL_MEM_IMPORT. Now, internally, when you go to import some memory, it requires two separate steps. The first step is we need to reserve some GPU virtual address space for that memory when we import it into the GPU. We have to reserve it so that if someone else goes to allocate some
GPU virtual address space in the meantime, it doesn't clobber it. And this is implemented in a function in the kernel called kbase_mem_from_user_buffer. The second step is, the memory that we want to import, we need to sort of pin it so that the kernel can access it and use it. Now, there's two ways this can be done. It can be done immediately, at the same time that we import the memory, in that same kbase_mem_from_user_buffer function. Or, alternatively, you can put off the pinning step until later on, when we actually go to do a job with the GPU and actually do something with the data. And
this is where the Kbase JD user buff pin pages function contains our vulnerability. This is where it's actually used. So let's walk through an example of how a user might do this. So say the user wants to map some me sorry say the user's already mapped some memory and they want to import it into the driver. They're going to specify the address of that memory that they want to import along with how big it is. They're going to pass that to this p handle value. And then it's going to tell the driver what permissions it wants to import it with. So here it's going to say we're going to give it CPU read and GPU read permissions
and then it's going to call the Kbase ioctal mem import and this will do the actual importing. So what happens inside the kernel when we do this? Once again loads the code here. I'm going to show you diagrams in a second. So the first thing it's going to do it's going to allocate a strct to keep track of the GPU virtual address region that we're about to create. It's going to create another strct to keep track of the underlying physical allocation. It's going to keep track of a bit of data about the memory that we wanted to import, including the virtual address that the user wanted to import and how big it is and what process imported it
and stuff like that. Then it's going to allocate this pages array and this is eventually going to store all of the imported pages once we've imported them. Now depending on this Kbase red share both flag it might set this pages variable at the top to be equal to that array that we just allocated. Uh in this case the Kbase red share both flag is not set. So we can assume that pages is still going to be null in the next step. Then it's going to call this function called get user pages which I'm going to talk about in a moment. It's going to pass that pages variable that is still set to null. So what does get user pages
actually do and why is this important? So get user pages is part of a family of kernel APIs. It's related to another family called pin user pages. And basically the idea of this API is it's used for the kernel to access uh memory from a user space virtual address. Now by default in this case because the pages argument is null all it's going to do is it's going to take that user virtual address. It's going to follow along all the page tables and find out what physical memory that corresponds to. It's not actually going to pin it for use or anything like that. It's just going to find out. It's just going to traverse the page tables and it's fault
fault them in in case they weren't already faulted in due to demand paging. Uh so after get use pages is called we return from our mem from user buffer function here and then actually adds that virtual address region to the GPU. Now as I said this is all a bit confusing. So let's look at a diagram. So say the user already had some memory that they've mapped in and touched. They call the import which is going to create the virtual address region strct and the physical allocation strct. The kernel is going to call get user pages which is going to traverse the user virtual address and find out what physical memory it corresponds to and
fault that in if it wasn't already, which it is in this case. and it's going to set our fizz alex strruct to still be null in this case. And then we're going to add the region to the virtual address space to reserve it. So, so far we've kept track of some metadata of what we wanted to import. We've reserved some GPU address space, but we haven't actually pinned the underlying data quite yet. So, we still need to pin that physical memory so that the GPU driver can start accessing it and update the GPU page tables so that the GPU can access it, too. And the way you do this is with a this JD soft xres map job. There are so many
acronyms and constants all through the kernel code. It's utterly disgusting to read. Which is why I have more diagrams. Uh so the way the user can do this is they say okay I want to import this GPU address region that we just imported earlier. We're going to say we're going to do the soft x-res map job to pin it for us and we're going to call job submit to submit that job to the GPU and this is what's going to actually pin those pages for us. So the way the pinning is going to work is it's going to look up the user virtual address that it gave us originally and then it's going to call
pin user pages to pin those underlying pages for use by the GPU. Now, in this case, we saw that our pages argument is not null. It's our it's an array of pages that we can store to. And if we look at my description of what get user pages does, if we supply a pages argument, it's going to actually return a list of all those underlying pages that we've just pinned and it's going to bump a reference count on them so that the kernel can safely access this data while certain locks are held. And then finally, it's going to insert the pages into the GPU's virtual address space and update the memory management unit to handle it accordingly.
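To make these semantics concrete, here's a toy Python model of the behaviour just described. This is not kernel code: PhysPage and Mapping are illustrative stand-ins and the flag value is arbitrary; only the names get_user_pages and FOLL_WRITE mirror the real kernel API.

```python
# Toy model of the get_user_pages()/pin_user_pages() behaviour described
# above. NOT kernel code: PhysPage and Mapping are illustrative stand-ins.

FOLL_WRITE = 0x1

class PhysPage:
    """A physical page with a pin/reference count."""
    def __init__(self, data):
        self.data = bytearray(data)
        self.refcount = 1

class Mapping:
    """A user virtual address range backed by a physical page."""
    def __init__(self, page, shared_cow=False):
        self.page = page
        self.shared_cow = shared_cow  # copy-on-write shared with another process?

def get_user_pages(mapping, flags, pages=None):
    # FOLL_WRITE means the caller intends to write, so any copy-on-write
    # sharing must be broken first: the caller must never end up writing
    # to memory shared with another process.
    if flags & FOLL_WRITE and mapping.shared_cow:
        mapping.page = PhysPage(mapping.page.data)  # private duplicate
        mapping.shared_cow = False
    # With pages=None we only resolve (fault in) the page. With a list,
    # we also store the page and take a reference so it stays pinned.
    if pages is not None:
        mapping.page.refcount += 1
        pages.append(mapping.page)
    return mapping.page

# A CoW page pinned WITHOUT FOLL_WRITE stays shared: the kernel now
# holds a reference to memory another process also maps.
shared = PhysPage(b"libc code")
victim = Mapping(shared, shared_cow=True)
pinned = []
get_user_pages(victim, 0, pages=pinned)
assert pinned[0] is shared

# WITH FOLL_WRITE, the unshare happens before the page is handed out.
writer = Mapping(shared, shared_cow=True)
page = get_user_pages(writer, FOLL_WRITE)
assert page is not shared  # a private copy, safe to write
```

The key point the model captures is the asymmetry: pinning without FOLL_WRITE hands the kernel the still-shared page, while FOLL_WRITE forces the unshare first.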
Diagrams again. The kernel calls get_user_pages, which follows all of these pointers to find out which physical pages the imported memory corresponds to, and saves that into our physical allocation struct. Now the kernel has a pointer to the underlying physical memory, and it updates the GPU virtual address space to point at that same mapping. So now both the user program and the GPU have access to this imported memory.
Let's refocus on the vulnerability for a second: why is all this import functionality important? Look at the patch again. The patch is around that pin_user_pages call, specifically pin_user_pages_remote, and what's happened is they've twiddled some flags: previously, the FOLL_WRITE flag was set only if the GPU write flag was set, whereas in newer versions it checks both the CPU write and GPU write flags before setting FOLL_WRITE. This still doesn't make much sense, because we don't know what FOLL_WRITE does yet. So let's have a look at that.
The final part of my description of get_user_pages and pin_user_pages is that FOLL_WRITE flag I just mentioned. Its importance is that it's used to indicate whether the caller actually intends to write to the underlying pages it has asked the kernel to resolve. It checks whether the mapping has write permissions, and if there's a copy-on-write mapping, it breaks it. Let's look at an example. Remember from the start of the talk: with a copy-on-write mapping, two processes have mappings to the same underlying physical memory, and the kernel gives them read-only access until one process writes to its page, at which point the kernel creates a separate mapping so the two processes aren't writing to the same shared memory. This is our core security guarantee: two programs should not write to the same shared memory.
Now, get_user_pages and pin_user_pages obtain a reference to the underlying physical memory. If I call get_user_pages on a copy-on-write shared page, it follows the pointers, finds that it's this shared physical memory here, and the kernel gets a pointer to that underlying physical memory, which is fine as long as you're only reading. But if the kernel wanted to write to those underlying pages, that would be bad, because it would be writing to memory shared by two different processes, which breaks our core security guarantee. This is where FOLL_WRITE comes in. If the kernel intends to write, it should pass FOLL_WRITE to get_user_pages, and the kernel will notice the shared copy-on-write mapping and trigger it to be duplicated so that each process has its own independent copy. That means when the kernel gets a pointer to the underlying physical memory, it's a separate copy that's safe to both read and write without breaking any security guarantees.
So, revisiting the patch for a third time, hopefully we're almost in a spot to understand what changed. Previously, FOLL_WRITE was set only if the GPU write flag was set, whereas now it's set if either the CPU or the GPU write flag is set. So before this patch, maybe we could set just the CPU write flag and pin a page without FOLL_WRITE, which would mean we could potentially modify some shared copy-on-write pages.
Could we exploit this? Here's the high-level idea of an example exploit. We create a copy-on-write shared mapping of some shared file, in this case libc, which is used by a lot of processes, including root processes. We import that memory into the Mali driver with CPU write permissions. Then we pin it using our vulnerable function, which will hopefully call pin_user_pages without FOLL_WRITE, meaning the kernel obtains a reference to shared memory, and then we attempt to write to it and break our security guarantees.
First we open the file and mmap it, which sets up a shared copy-on-write mapping. Say a root program already has a mapping of this file. When we mmap it, a region is set up in our user virtual address space, and when we touch it for the first time, a copy-on-write shared mapping is established. You'll see the user program has a read-only mapping for now, because it's copy-on-write shared. Then the user imports that memory using the KBASE_IOCTL_MEM_IMPORT ioctl we saw earlier, passing the address we just mapped, and we explicitly ask for CPU read and write permissions but only GPU read permissions; we haven't asked for GPU write.
Looking through the code: it allocates some structs (boring), records the address we're importing, and calls get_user_pages. Here's the problem: although we're not actually pinning the pages yet, this call checks both the CPU and GPU write flags. We haven't even reached our vulnerable code, and we're already calling get_user_pages with a check against the CPU write flag, which we've set. So get_user_pages gets called with FOLL_WRITE, which isn't good for us: after we've allocated our structs and recorded the address, get_user_pages follows the pointer to the underlying physical memory, sees that the memory is copy-on-write shared between two programs, decides the kernel can't write to this shared data, and makes a copy of it. Even though the result isn't being saved, we've already duplicated the page; the pages are no longer shared. So when we later submit a job to pin these pages for the kernel using pin_user_pages, which only checks the GPU write flag and therefore doesn't set FOLL_WRITE, the kernel traverses the pointer to the underlying physical memory and hands us a pointer to our duplicated copy, not the original we were trying to modify. Our exploit isn't successful: we attempted to write to shared memory, but on the way to our vulnerable function we accidentally duplicated that memory. Our copy-on-write mapping got broken during import, before we could even reach the
vulnerable code. In fact, if you look at the history of the code, the line that thwarted us used to have the same security vulnerability: it only checked the GPU write flag. So we can see why they patched it, because the old behaviour would have helped us exploit this. So the vulnerability isn't reachable, right? Unless we can do something a little bit special.
Note that when we imported the memory, the driver recorded the user-provided virtual address they wanted to import. That's just a virtual address, and the user controls their own virtual address space. When the driver calls get_user_pages on that address, it doesn't save the resulting pages, because the pages argument is NULL. So the kernel holds a reference to the user's virtual address, not to the underlying physical memory. What happens if the user just unmaps that memory? They can do that; they have full control of their virtual address space. If they unmap it, the kernel now has a dangling pointer to a virtual address that points at nothing. That seems weird. So we can call the munmap syscall, and then what else could we do? We could map some new memory at the exact same spot. Better yet, we could map the same thing we had before, which sets up a brand-new copy-on-write mapping all over again.
So: we call munmap, which unmaps the memory; we call mmap again, which creates a new mapping of the libc library; and then we touch that mapping for the first time, which establishes a new copy-on-write mapping. By unmapping and remapping, we've effectively forced the memory to become shared again. Once again we're breaking the kernel's security guarantees: there is shared memory, but somehow we have a handle to it via the kernel, via the recorded user virtual address. This is bad.
Following through the rest of the code: when we call the job submit function, it calls pin_user_pages_remote on that underlying memory, and because we haven't set the GPU write flag, it doesn't set FOLL_WRITE, so this doesn't trigger an unshare. The kernel gets a pointer to that underlying physical memory, and it's all still shared. So far so good, the exploit's working. But how do we actually read and write that memory? Because we want to write to it, right?
We have a handle to a GPU region that's mapped to a read-only physical page. The problem is we can't write to it from the GPU, because we never set the GPU write flag. We also can't write to it through our CPU mapping, because it's still a copy-on-write page set as read-only: the second we write to that CPU page, even though we have CPU write permissions, it triggers a copy-on-write break, the kernel makes a new copy, and our exploit stops working. So what can we actually do here?
Remember back at the beginning of the talk, when I talked through how we could allocate memory and map it into our address space: we used a GPU VA handle, and the idea was that we could call mmap on that handle to set up a mapping in our user address space to GPU memory. That diagram is pretty similar to the situation we're in now: in both cases we have a GPU VA handle onto the underlying GPU memory. So we could potentially turn this into yet another mapping. If we call mmap with this GPU VA handle, the kernel finds the GPU virtual address region corresponding to it, checks whether we have CPU write permissions, which we do because we set the CPU write flag, and maps it into our CPU address space. We've turned the handle into a new mapping, and when we touch it for the first time by writing some data to it, it triggers a page fault, and all the page fault handler does is insert that mapping into our CPU address space using the vmf_insert_pfn function. So now we have a second mapping pointing at the same underlying physical memory, except this time with both read and write permissions.
This is a success. We have physical memory that's shared between a user program and a root process, and we have both read and write access to it, which violates our core security guarantee. We can write arbitrary shellcode to it and start injecting malicious data, and because the root program also maps this page from its libc, whenever it goes to execute libc code it will start running our malicious shellcode. We can use this to pivot into root processes and do whatever we want. Reading and writing through our new mmap'd page modifies libc in the page cache, which lets us control what data root-owned processes will execute.
To summarise what we've seen: the Mali driver can import pages into the kernel, and they'll be pinned using either get_user_pages or pin_user_pages. The pinning can be performed at import time or, as we focused on in this talk, by submitting a separate job later. During the import, if write permissions are requested, get_user_pages is called with FOLL_WRITE. That part is correct and what should happen: if you intend to write to pages, you should set FOLL_WRITE, and that triggers a copy-on-write unshare. The problem is that only the user's virtual address is saved during import, not the underlying physical pages, so the user is free to do whatever they want with that address range: unmap it, remap it, and re-establish the broken copy-on-write mapping. Later, when we submit a job to actually pin the underlying pages, our vulnerable function doesn't set FOLL_WRITE even though we set the CPU write flag, and that means we get a mapping to shared memory, which we can use to write data to memory shared between root processes and unprivileged processes. Of the two issues here, the patch
fixes the second issue, but in my opinion it doesn't actually fix the first. The first issue isn't directly exploitable; it's not like the driver is vulnerable today, but I can see how it could lead to exploitable behaviour in the future if the code changed further.
So how could this be exploited in practice? I'm probably running short of time, so I won't go into a huge amount of detail; this part is even more complex than everything I've talked about today. I highly encourage you to check out Star Labs' blog post, because they go into a lot of detail on the exploitation side; I've only dived into the bug itself. But to summarise what they did: for context, Android has a lot of sandboxing and protections in place to stop apps from doing things they shouldn't. One primary protection, SELinux, controls what device drivers, files, and processes can be accessed, so a key step in exploiting a vulnerability is to disable SELinux. Their chosen way of disabling SELinux is to eventually load a kernel module, which is just a way of running code in the kernel. Because kernel modules are really powerful, the ability to load them is very, very locked down, for good reason, so they have to do a complicated dance to reach a spot where they can load a kernel module and disable SELinux.
The way they do it is to inject code into the libc++ library used by the init process, which is the first process running on your Android phone. It has root privileges and is pretty powerful. The problem is that even though init is very powerful, it's normally sleeping and doesn't do much except when it's woken up for some specific task. So to wake it up, they write code into another shared library used by another daemon process, injecting code that can communicate with init and tell it to wake up. Once init has been woken, it runs a neural-network service that has some extra permissions init itself doesn't have, for some reason. They also overwrite the liblog library so they can inject code into that neural-network service and get code execution there. With code execution in the neural-network service, they write a malicious kernel module to the folder where kernel modules are allowed to live; the reason they use this service is that it's one of the few things with permission to write to that folder. Then they use init to run a script that has permission to insert kernel modules, which loads the module into the kernel using the modprobe binary. And finally they have code execution inside the kernel, which is complete compromise of the system. They use that code execution to disable SELinux checks so processes can do whatever they want, and from the root init process they start a reverse shell. As you can see at the bottom here, they run id, they're root, and SELinux is disabled. That's just a very brief overview; it's complicated, so please feel free to ask me about it later, and do check out their blog post, because they've done a far better job of explaining it than I ever could.
So that's all I have to talk about today. Thank you very much for listening. If you found this interesting, I encourage you to apply for a job with us at InfoSect; our email is there. Alternatively, if you have any questions, feel free to email that address or ask me after the presentation. I do intend to write this up as a blog post at some point; I've been promising Kylie I'd write it up for the past six months and haven't done it yet, but I will. All the blog posts I mentioned during this talk are on this slide, so please feel free to take pictures.
[Applause] >> I know there was a lot there, so I welcome any and all questions. >> I just want to say that I think your presentation was really good. It's very challenging to take computer science topics like that and convey them in a way that's easy to follow step by step, and the code diagrams you did really well. >> Thank you. Feel free to come ask me afterwards if you have any questions. >> Awesome. Thank you, Angus.
If you take a photo of this, I'll upload it to the blog at some point in the future, so you can keep an eye out there. Alternatively, if you shoot an email to us, I can get in touch and send them through. Thank you.
as well as several technical requirements. Fortunately, with ChatGPT we can fine-tune our prompt to ask for more user-friendly methods. Here I noticed a suggestion to try a tool called Ollama, which runs through the Linux command line, and since I was familiar with Linux from my experience as a penetration tester, I gave it a try. To my surprise it actually worked; it installed and ran quite smoothly. But then I noticed some problems. On this slide I managed to successfully download and install a local LLM and got it working, but as I learned in my initial testing, it could only recall events up to 2019, and that is literally before COVID. So I had a feeling this was going to be harder than I thought.
Most basic tutorials you find on Google will teach you how to install a lightweight generative AI model with a tool such as Ollama, but that isn't very useful for real work here, because the models are usually outdated, it's difficult if not impossible to upload files from the command line, and they're not very smart compared to modern models such as ChatGPT or DeepSeek. So after that it was back to square one, and I continued researching solutions that allow uploading local files, which would make the model more relevant and helpful.
I came across this solution called Neo4j. It claimed it could do lots of awesome things, but some of you may notice it was a Google-sponsored result, and you know the rule of thumb: never click sponsored results. It wasn't really helpful and asked for payment for many services. Here's another result I found that lets you chat with any PDF, but of course you see the red flag there: a 7-day free trial. Most of the solutions I encountered at this stage were either too complicated or asked for payment, and all of them ended up as dead ends. Such are the struggles of research.
Along my journey I came across the term RAG: retrieval-augmented generation. RAG is a method that enhances AI models by combining their knowledge with external information sources. Instead of relying only on what the model was trained on, it can retrieve up-to-date or specialised documents before generating a response, which makes answers more accurate, reliable, and grounded in real data. It's called RAG because it retrieves information first, then augments its answer with it. So my next step was to use Google again, this time to build my own RAG pipeline and run it locally.
I found some promising results and tutorials, and even though they seemed a little heavy, I was willing to give them a try, especially with a programming and coding background. You know, good things never come easy. However, this phase of experimentation was actually the worst. I encountered many errors, dependency issues, and overall many frustrations trying to follow the tutorials in practice, and this was even with a programming background. This step took days and weeks as I jumped from tutorial to tutorial trying to get a single solution to work. In the end I decided I was barking up the wrong tree: a good solution shouldn't be so troublesome and frustrating to set up. I cleared my mind and revisited the problem from another angle. The problems from my previous research were that the solutions were expensive and technically challenging, so I refined my search query to: gen AI RAG free open source easy.
Lo and behold, I came across a promising solution on Reddit: AnythingLLM by Mintplex Labs. So I went to check it out. An all-in-one AI application: chat with docs, use AI agents, and more, fully local and offline. This was certainly getting somewhere. Leverage powerful AI tooling with no setup: perfect, especially after my earlier setup frustrations. And full privacy, which addressed my concern about sending sensitive data to third parties. So there we have it: AnythingLLM, free and open source. I'll quickly walk through how to install and set it up on your computer, then show a demo.
First, download the program; give it a quick Google search, it's called AnythingLLM by Mintplex Labs. Then install it and proceed to the setup and configuration phase. Set the LLM preference; I'm going with the default, which is AnythingLLM's built-in option. I read the page on data handling and privacy and moved forward. The next page requests your email, which I skipped. Then you name your first workspace, which I named for my BSides setup. Finally we get to the dashboard. Before we can use the chat, we have to download and set up an LLM model: click the settings icon on the bottom left, then the LLM tab, where you can see the LLM models available. If you're not familiar with any of them, just pick the default first option, which will be Llama 3.2. Click the desired model and save changes. After the model downloads, we can chat: go to the chat function and test that it's working.
But we didn't download this program just for a simple chatbot. We can also upload our own custom documents, allowing the model to use new information. Click either of these icons and an upload page like this will open; click the bottom box to proceed. I've uploaded some documents to the general workspace, and now I can choose to move them to a particular workspace.
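Uploading documents in tools like this generally amounts to chunking each file, embedding the chunks, and storing the vectors for later retrieval. Here's a toy sketch of that pipeline, with a bag-of-words count standing in for a real trained embedding model.

```python
import math

# Toy sketch of a document upload step: split a document into chunks,
# turn each chunk into a vector, and store it so later questions can
# retrieve the closest chunk as context.

def embed(text):
    # Bag-of-words stand-in for a real embedding model.
    vec = {}
    for word in text.lower().split():
        word = word.strip(".,?!")
        if word:
            vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(text, size=8):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# "Upload" a document into the workspace store.
store = []
doc = ("BSides Perth is a community security conference. "
       "The official schedule is published on the conference website.")
for piece in chunk(doc):
    store.append((piece, embed(piece)))

def best_chunk(question):
    # Retrieval: return the stored chunk closest to the question.
    q = embed(question)
    return max(store, key=lambda item: cosine(q, item[1]))[0]

print(best_chunk("Where is the schedule published?"))
```

The sample document and chunk size here are made up for illustration; real tools use sentence-aware chunking and a proper vector store, but the chunk-embed-store-retrieve shape is the same.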
I click my BSides setup workspace, scroll down, and click save and embed. The documents are now uploaded to my BSides setup workspace, and that's done. I'll now provide a demo to show you the power of this solution. Here we ask the LLM what BSides Perth is and what it's about, and it answers perfectly based on the uploaded information, even including helpful content such as the official schedule links and social media links. In the next test I asked the LLM something a little more specific, checking whether it could find the address and email of BSides Perth, and it got both answers correct. As you can see, the LLM is able to parse and understand the uploaded information and answer basic questions about it, making it very useful for GRC work, where the job requires processing many long and wordy documents. So there we have it: today I've introduced a generative AI solution, AnythingLLM, which is free and open source, easy and intuitive to set up, ensures data privacy, lets you upload custom documents, and is also customisable for advanced users. I hope you have found my presentation useful. Thank you very much.
>> Thanks Lee. Any questions for Lee?
I can't hear him. >> So, a bit louder. Michael, >> did you do any speed comparison between anything? >> Oh, speed comparisons. No, because uh right now the focus of my uh project is actually to uh perform to use to use local AI for uh work reasons. So speed wasn't a concern. It was more of the features and the data privacy that was involved. >> Is this running locally on your workstation or is this like a cloud service? >> Uh is is running locally on my workstation but you can also uh uh install it on your network. So on so your network can actually serve multiple workstations if you want to use it in the office server for example.
>> Okay. You upload the documents. Um, can you also monitor it with your storage area network? >> Sorry. >> Can you monitor the storage area network? >> Use it with your storage area network. >> Oh yeah, you can you can upload you can up you can use the model uh you can upload it to a server and then you can upload your documents to that uh model itself. >> Yeah. And then you can use the use the information as part of your work or part of your research. Oh, right in the back
information as a reference to other questions that might be >> Can it use uploaded PDFs as a reference to answer questions on other PDFs? Is that right? So for example, using them
for the analysis. >> Okay. So uh for this LLM itself, it actually parses all of the information in the PDFs. So it depends on your prompt, and it will actually parse the information and analyze all the information as part of a general um pool of knowledge. Yeah. >> Was there any size limit? >> Oh, I haven't checked that out, but you can probably check it out on the website itself. I didn't build this, it's more of like research for me. >> Um, my PDFs were only uh 1 to 2 MB. >> Okay. >> Yeah. >> Oh, what's the spec used just to run this
basic test? >> Oh, this is my work laptop. >> Yeah. Uh, but I do recommend, uh, one of the minimum requirements is to get a GPU for it, which is going to make it run faster, because this will actually take a while if you're using a CPU. Yeah. >> One of the issues I think with using online models and cloud services for security is there are often guardrails built in that give chatbots a reluctance to talk about technical uh content related to, like, hacking. Have you found that those guardrails still exist with the models in the software that you've been using? >> I have not tried it, because uh right now my focus is on GRC, which is actually
an area they don't really have many guardrails for. Most of the time it's guardrails for offensive security, like, yeah. So uh you could actually play around with it. Play with some of the models that they actually allow you to download. There are tons of them, and maybe you can find out yourself. Yeah. >> Oh, one more in the middle.
>> Can you point it at directories, for example? >> Good question. Um, I have not tried it. I don't think it's able to uh pull in directories automatically. You will probably have to uh click the folder. It can upload by folder, so I do know you have to upload it manually. Yeah. >> And for new versions, like, how do you actually upgrade the version? >> Upgrade the version of this uh software? >> Yeah. >> Yeah. It's open source. So you're going to have to download it and uh patch it to the new version. Yeah.
>> Cool. All right. Thanks, Lee. Thanks for coming to Perth. >> Thank you so much. >> Selfie time. Cover your face if you do not want to be in it.
and be kind. >> Thank you. >> Awesome. >> Thank you very much. I hope you enjoy the rest of your time in here tomorrow as well. >> Uh, probably not. Yeah. >> Okay. No worries. Back. >> Oh, yeah. >> Awesome. Time for lunch. >> Interesting.
Hey, have a seat. Settle down, we'll kick off the afternoon.
[Music] So, display all the CTF flags. Perfect.
>> Cool. All right, we'll just get the AV set up. >> How's the pizza? Pretty good. [Music] Yeah, it's always good. >> Timing seemed to work out. Perfect. All righty, everyone's full, stay hydrated. Um, I will hand over to Adam for Hunting in the Safari Zone. Adam, over to you.
Hello. Hello. Okay, so this is going to be a very fun talk. I've got a lot of slides and a lot of random stuff, a lot of anecdotes to start with. If you want the slide deck, there it is. Save yourself the headache. I'm not going to make everyone take photos of slides if you want to know about something. Uh, we're going to talk a little bit about the history of safari zoning, the idea of applying internal techniques externally, some fun vulnerabilities, and uh, yeah, maybe dropping tools. Who's to say? Anyway, if you don't know who I am, you've not been to BSides Perth enough. Uh, I've been here since the first one,
just teaching people how to lockpick. Some of you may know me as the guy who teaches you how to lockpick. Uh, or that guy over there, he does the thing. I like to release unlicensed tools, because I don't like licenses on software. I just like to let people do things. Um, yeah. Anyway, so let's talk about safari zoning. For people who don't know, there are multiple terms for safari zoning. The idea is basically hunting free-range vulnerabilities out in the wild. This was really popular with Shodan very early on when that came out. People loved hunting for cameras and everything else. And I've had a few friends who started reviving this in private little channels
over the years. And it's always cool to find things in the wild and go, I've probably seen that on a client job or something, or just to dig into something for fun. Um, when I do my job as a pen tester, all I do is dig through file shares half the time, because so much good stuff is in file shares. You can get so much good lootage out of file shares. Seriously, it's amazing. Um, most people love it because it's very easy to get to. It's always available with AD, and most people with fresh user credentials can somehow reach HR's information in two clicks. Um, those default Snaffler rules kind of suck. So, I kept making my own rules and
patching Snaffler. Mike, please add compressed file support. I swear. Um, but some of my favorites are these ones. These beautiful compressed files and virtual disk files. They're everywhere on the internet, everywhere. And I like to find weird ways of getting impact. So, virtual disk files are really common, because you pull credentials out of them all the time. Most of them are actually just the image of a Windows machine. Um, sometimes you literally just have people put NTDS in a zip file. I'm not going to announce where that was, but boy, it happens. Um, sometimes you just have offline media as well, ISO files for SCCM, that's always fun. But there's a lot of different ways
that you can find weird data in stuff you just don't expect. And uh, I get bored a lot. It's a chronic problem. Terminal for some places. Uh, we'll get to why in a sec, but um, I like to just find weird stuff and dig into it. And recently I've been looking around at public file shares, which is always fun. And open storage blobs are basically just file shares on the internet half the time. If you just consider them the same thing, it'll make your life 100 times easier. If you haven't dealt with this sort of stuff before, stuff like open S3 buckets, services like Grayhat Warfare
automatically index all of it. And I love to use it as a jumping-off point. You just search up a couple of interesting phrases, you dig in, you go, wait, they put what on the internet? Um, yeah, easy to use. It's basically just Google. And I'm not here to promote a particular tool, but look, what if we used it like Snaffler, right? We just search for extensions and keywords, download it, parse it locally. Um, see if we can find some fun stuff with it. Cool part that makes my life easy: they offer an API. The API sucks sometimes, though. Like, seriously, do not try and build your own version of parsing this API. It is a nightmare.
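The Snaffler-style use of an index described here can be sketched as a local triage pass. Everything below is an assumption for illustration: the extension and keyword lists are made up, and the URLs are placeholders, not real Grayhat Warfare API output.

```python
# Hypothetical Snaffler-style triage of filenames pulled from an
# open-bucket index. Rules and URLs are illustrative, not a real schema.
INTERESTING_EXTENSIONS = {".vhd", ".vhdx", ".zip", ".iso", ".bak", ".kdbx"}
INTERESTING_KEYWORDS = {"deploy", "backup", "ntds", "credential"}

def triage(urls):
    """Return the URLs worth downloading, matched by extension or keyword."""
    hits = []
    for url in urls:
        name = url.rsplit("/", 1)[-1].lower()
        ext = "." + name.rsplit(".", 1)[-1] if "." in name else ""
        if ext in INTERESTING_EXTENSIONS or any(k in name for k in INTERESTING_KEYWORDS):
            hits.append(url)
    return hits

sample = [
    "https://bucket.example/cat-pictures/fluffy.png",
    "https://bucket.example/it/branch-image.vhdx",
    "https://bucket.example/ops/deploy-prod.sh",
]
hits = triage(sample)  # the .vhdx file and the deploy script match
```

The real pipeline would feed API results in and download the hits for local parsing; the filtering step is the same idea either way.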
Um, add some post-processing, just because Azure file locks have a whole story behind them, where if you have a mounted file system running, or try to open a file that's already open in another Azure instance, it's a whole weird stupidness. Um, anyway, I drew the rest of the owl and just added tools that did all this, and it came out with some really interesting stuff. Uh, you can't see the extensions very well, but that's all VHD files. So, virtual disks of random people online. Um, now we're going to get into some actual stories of how this stuff happens. And I'm being very careful about what I say with some of this, because some of these
are still in disclosure. Uh, I'm being very general and vague, and I am actively putting disinformation in this talk purely so you can't figure it out. Um, but we're going to start with my first personal favorite. I was over at Y and had really fast internet. I have really bad internet at home, and I decided to download massive VHDX files, and one of them spoke to me: it was just a fast food restaurant name. And my favorite hacker tool, 7-Zip. You just open it up. It's great. Opened it up. Oh, look. That's a full Windows directory. Uh, this was just one file for a single branch of one fast food location uh in
the US. And you know, it had some interesting stuff inside. Build scripts, hardcoded credentials for loading users in there, HR information, payroll with social security numbers, some fun things. Um, and some weird software. What's that weird software? Well, this is where we get to the fun stuff. I like hunting weird software, because every vendor has their own weird software that's made in-house, and it's always the best. Uh, so we're going to call this one instance service, and it's basically like a middleware service. The idea is that restaurants don't want to deal with all of the back-of-house stuff. They just want to get things done. Uh, it's all written in .NET. The .config files
dumped FTP creds, Azure file storage, and web endpoints in there. The web endpoints made me really interested, though. Waiting for people to realize. So, it turns out that the credentials in there were for every instance that they've ever had on the internet. And they were hardcoded. So anyone can just take this username and password, look up an instance that's currently running, and connect to it. I'm sure there's nothing bad that could happen from this. Anyway, there's a bunch of stuff there. S3 credentials as well. Database backups. I mean, this file name says it all. Um, some CloudFormation scripts and logs. The fact that I'm just passing over all of that should really say how
bad this kept going. Um, and stuff like SSH credentials. And I'm not crazy enough to go and test if those work. Not in the slightest. Anyway, partway through this I started seeing the keys getting rotated. So immediately I started reporting it as fast as I physically could. I hadn't even completed the full research into it. I was maybe four or five days in, and I went, yeah, I probably need to go get this dealt with. Um, part of why I'm calling this out is that there was no response to these. This happens a lot if you deal with vulnerability research. You will find that a lot of people don't want to talk to you. They just want to
get the thing shoved under the rug for now. Um, but now we're coming back to Australia. Uh, this one's a little bit more interesting. Uh, I love to search up keywords for very specific things, and I was looking at Intune, because I thought, maybe if I treat it like SCCM. Um, but one that popped up just said deploy. Um, deploys are always fun. And this one's a major company. I'm intentionally not listing who they are. This just happens. Anyway, you download it, unzip it, and it's the entire Jira instance.
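An export like this mostly unpacks to XML. Rather than opening it in an editor, it can be streamed with the standard library so memory stays flat. A minimal sketch; the `<issue>` element name below is invented for illustration, not Jira's real export schema.

```python
# Streaming a large XML export with the stdlib instead of a text editor.
import io
import xml.etree.ElementTree as ET

# Stand-in for a multi-gigabyte export file opened in binary mode.
sample = io.BytesIO(
    b"<export>"
    b"<issue key='A-1'>supply chain notes</issue>"
    b"<issue key='A-2'>IR playbook</issue>"
    b"</export>"
)

keys = []
for event, elem in ET.iterparse(sample, events=("end",)):
    if elem.tag == "issue":
        keys.append(elem.get("key"))
        elem.clear()  # release the element's children so memory stays flat
```

With a real multi-gigabyte file, you would pass the path instead of a `BytesIO` and filter on whatever elements you care about.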
My personal favorite: for people who haven't tried to export Jira and go through it, it gives you multi-gigabyte XML files that you have to go through. If you try and read through that with a standard text editor, it's going to crash 90% of the time. You have to paginate it every time, and it is not fun to go through. Some of the fun stuff, though, was stuff about supply chain security and chats about the IR playbooks for this exact situation. Um, so, found the issue Wednesday afternoon. I emailed their head of risk, uh, got a response, uh, sent an email back, and the issue was all sorted out. I actually wrote the email while I was
taking off on a plane, which was uh a very fun time, because I was typing it and hoping that it sent, and then I landed in Amsterdam and went, oh good, it actually sent hours ago. Um, but you know, you get bored and you'll go check things again. And I checked back a month later, and there were still files there, a couple of other extra ones. Some interesting ones that I didn't bother to validate. Um, mainly because I find that if you start using credentials for Entra from a random location, you trigger off a couple of incidents, just a couple. Um, but yeah, Jira, ServiceNow, GitHub, all just a JSON blob, a couple of gigs. Uh, you can
actually probably still find the location for this, but it's been removed now. Um, I emailed them and made sure to include the recommendation to please remove all of these files from the internet. I got a response making sure that I didn't keep a copy of all of the GitHub repos. That was the only response I got. That and a brief thank you. But yeah, there's a whole lot more to that story. It's a lot of fun. Um, but yeah, it really does just show that this is not isolated. It's not one company doing this. A lot of people have it, and it's really hard to notice. Now, this one I'm going to preface with
this is still in disclosure, this next one. So, uh, it's going to be fun. Uh, so the other fun thing I like to look for is deployment, because people love to just put deployment buckets out there. A lot of random S3 files. Uh, one of them was a K8s deployment script. Uh, as preface, this is an AI-based company. Basically, take all of your uh data from different locations, shove it into the LLM, because this is a good idea. Um, and you know, it's the classic company that goes fast and breaks stuff. Um, anyway, they just hardcoded the ECR credentials in there. Um, it was quite literally just in a readme, but technically it had full admin scope
on AWS as well. Um, don't know why they chose to do that. That was a choice. Um, but you know what? I prefer finding better vulnerabilities than this. I'm better than this. So, I dug in more. And you know, I like to look at what's there, what's available, what's in the software, how does it work. And you know, between all the hardcoded credentials that they put in for K8s, and the fact that they use Erlang, who uses Erlang in the world, seriously. Um, yeah, there was a bunch of weird stuff in there, but luckily it's a K8s stack where they just had a middleware service in the front, and that covers most of the stuff. So, I had limited access. Even
though they hardcoded the JWT token for authentication, it wasn't really abusable, because the way they did it was something a little bit interesting. They referenced a GUID in there, and every time you log in, it will send a GUID and check, hey, is that UUID logged in or not in the database, and then say no. So, at least they did that, right? For some of you, if you can't tell, I love to lead people into this stuff. Um, so this fun little bot login request that you can send is one of the few middleware-exposed services. I don't know why this was exposed. It was never meant to be exposed, but it is. And all
you do is supply a hardcoded auth header and a GUID of the user that you're trying to sign in as. And this bypasses the full login, or you just get straight through to it. But how do you get a GUID? Because, you know, I might be pretty smart, but I can't predict GUIDs. Um, for some reason they exposed a path for the Airflow DAG manager. Um, so this was hardcoded as well, for the username and password again. And inside of there, there were IdP secrets for syncing across users, uh, login and sync accounts, and the data sources being synced, but it always had the GUID inside. That made life a lot easier. So now that we were able to log in, you can
do some interesting authenticated requests. First off, list the users and return all their GUIDs once you've authenticated once. Then you can return all of the IdP secrets and API keys from a simple web request. I've never seen software in my life where you can return the IdP secret from a web request. Um, return SMTP credentials. And something that was weird was, when you requested the data sources, they were returned in an encrypted format, for no reason. And that made me question, how does it actually do that? How is it encrypting something on the server side and then outbounding with unencrypted credentials? And what was in the JAR file on the same S3 bucket? If you just pulled it out, it had
the hardcoded AES key and null IV. And yeah, then you get to write a fun end-to-end exploit, where you just log into Airflow, view all the logs, return the GUID of that user, return the requests for all of the other GUIDs, and then log into each user, decrypt their uh secrets for data connections locally, and extract the IdP secrets and SMTP configuration. And I'm sure there are no threat actors who are abusing, you know, IdP configurations, or trying to log into locations from IdP secrets. Absolutely not. I did try to find a way to redact this enough that I could show a PoC video, but I can't. I've tried. I spent like 8 hours in After Effects
trying to censor everything, and I couldn't. Um, all I'm going to say is the next bit's going to be fun. Anyway, at this time, disclosure is still in progress. Uh, CISA is basically just like, no, unless you email this to the vendor, we're not going to deal with it anymore. It used to be that you could just throw stuff at CISA and they'd just disclose it for you, so this change is annoying. Uh, 30 days with no vendor response, and I've started going directly to the affected companies and saying, hey, your software is on the internet, here's your IdP secret. It's a fun way to do it. But a big thing out of this that I'm going to try and highlight is: disclosure
is hard. Anyone who's done vulnerability disclosure knows this. And none of us are lawyers. If you are a lawyer in here, oh boy. And if you ever find a lawyer who says they know for sure that you are doing everything legally, don't hire them. There's a fun anecdote in the US, which is uh that there are references in law to laws in other countries that themselves reference laws in further countries. So there's no way to know that you're following the law correctly in every jurisdiction. And that exists everywhere. Um, and there's a lot of different ways that you can disclose things. Uh, at the moment, this is what I've been seeing for most of it. CISA
is basically not taking vulnerabilities for vulnerability disclosure unless you've already disclosed to other parties. ASD won't return an email unless you pop a big four. DIVD seems to be trying to do good stuff. They have some interesting laws around basically being able to hack things to verify, as long as it's for the greater good. Uh, full disclosure: personally, I'm not a fan of just yeeting stuff on the internet and saying, here's a vulnerability, good luck. And I don't want to go on LinkedIn and say, hey, I hacked this company, please paint a target on my back. But creating burner emails and yeeting it over the fence is always priceless. Now, part of this was I spent a lot of time
digging into VHDX files. And as much as I love the hacker tool known as 7-Zip, I can't use it for everything, because manually parsing through every single VHDX file is a nightmare. So, as a fun note, this is still an area of research you can go and test yourself right now. This is 100% working. You can go anywhere and just download a bunch of VHDX files for fun. Um, so why not just take a system similar to Snaffler? Go through a virtual disk file, parsing through all the contents with a basic rules-based system. Because I like looking at both file system and registry files, do both of them at the same time. Uh, there's a really interesting tool
written in Python that does most of this, but it's a CLI tool, therefore I hate it. It doesn't even have a file close handler. It literally has a pass function for it. Um, I read the registry with rich output, because I like nice output. Uh, read the file system without mounting the disk, so just make your life easier. Uh, go through all of the individual partitions of the disk, parse through all of it, extract the information. I'm literally just painting the rest of the owl at this stage. Uh, the full JSON schema is still a work in progress, but you can extract, load, walk, and dump the registry keys and values out of
anything. So you get a nice little JSON rule like this, where you've got a set of registry files for the SAM, SYSTEM, SECURITY, and SOFTWARE hives, and this will just extract them directly into your local file system, to make life easy. Now, when you're doing this at scale, let's say 2,000 virtual disk files in 4 hours, this makes your life a lot easier. And you can do things like this, where you say, I want to pull the tenant information out of the local machine hive, and pull out where that machine is enrolled for Azure. Anyway, it's already been released. It's on GitHub. And as the open source adage goes: it's open source, if you want a
feature, feel free to add it. Uh, I've written enough spec in there that you can kind of figure out how this stuff is working. I've made it very open. Feel free to contribute. I don't mind. And because I'm of course a thought leader, I need to offer the classic "what did we learn?" Please go check your file stuff. Make sure you're not just leaving everything public-read. If you're a defender, I'm sorry, good luck. If you're an attacker, enjoy the next six months of this being the rest of your job. Um, and seriously, stop putting VHD files on the internet. The fact that I didn't even include in this talk that I just found
a full domain controller in here really says a lot. So yeah, that's it.
[Applause] I think I did pretty good on time. Uh, any questions?
>> No. >> Questions?
>> [Inaudible question about anonymizing yourself when doing this.] >> No. The more that you try to anonymize yourself, the more likely you are to be perceived as an attacker. It's better to be open. Most of my work here is all public interest. I'm trying to encourage people to go fix these problems. If you go out of your way to hide yourself and make it harder to find who you are and what you're doing, it makes it very hard to defend that you're doing something in the public interest. Sorry.
>> So, are these decent starting steps to take when you're wanting to have a hobby in this? >> In safari zoning? >> Generally. >> Uh, I would not follow this as a guide for anything if you want to stay out of prison.
I thought this was a Pokemon talk
in the future. >> Yep. >> Oh, I was going to make a joke. >> Now that you've cracked military grade, what's the next step? >> People stopping putting their PKI certificates and all of their PKI servers on the internet. >> How did you get a QR code linking to the slide deck into the slide deck? >> I exported it as a PDF and had a static link.
>> Yes. >> Where can we find that? >> Yes. >> What is the funniest or weirdest thing that you found in all these images or outdated software? >> I found a Nessus server. I wish that was a joke. >> Cool. >> Yep. >> Would you consider integrating your tool with Snaffler, as like an optional module, to sort of auto-run, basically? >> Uh, Snaffler doesn't support compressed file uh stuff. In fact, I'm pretty sure, if I remember the comment right, it literally says TODO from 2023. Harass Mike Loss. >> Have you ever thought about public shaming through journalism instead? >> That's a very fast way to find out how quickly lawyers can get on to you. I
prefer to just have an anonymous email, with the best opsec I can, to yeet it over the fence. Uh, and if that doesn't work, well, full disclosure is also an option, but I just don't like doing that first. >> Journalists always pay good money for good stories. >> I'm not in this for the money. I'm here to be able to laugh at things and go, why the hell is this on the internet?
>> What size were the customers of that software? >> GDP of a country. You already know the answer to that one.
>> That's a lot. Thank you very much, Adam.
[Applause]
[Music]
Just strap this one on. So, this one's for the live stream. >> Oh, okay. >> This one's for the room. >> Got it. >> Cool. Thank you very much. Uh, I'm going to hand over now to Gia, talking about primitives from security audits. Take it away. >> All right. Thank you. [Applause] Hi, testing. Okay. Good afternoon, everyone. Um, hope you're all having a wonderful day so far. Um, thank you for taking the time to attend my talk. This is only my third time speaking, but I hope that by the end you'll have some takeaways. My name is Jao. Uh, you can just call me Ja. Uh, that's me over there with the black coat and red tail.
Uh, I'm currently a consultant at elttam, where we help our clients uh secure their assets through pentesting and other services. Uh, in my spare time I have an interest in web security research. So, a question for everyone before we get started: when was the last time you dived into how various libraries are actually implemented? There are plenty of informative talks these days about finding vulnerabilities that follow certain bug patterns. Uh, they can be very useful when you're learning about the different classes of bugs. Uh, however, as you will see, hardened targets are less likely to have these sorts of greppable bugs or low-hanging fruit, and at some point it's going to be useful to audit
libraries in order to identify primitives that we can potentially misuse. So that will be the main aim of this sharing today, using Jakarta Mail as a case study. We'll first look at the background of Jakarta Mail, followed by a short primer on encoded email strings, and then uh I'll be sharing about various primitives that I found in Jakarta Mail, namely in the InternetAddress and MimeMessage classes. Uh, similarly, primitives in the Spring Framework will also be shared, and we'll end the sharing with the Email annotation from Hibernate. So, what exactly is Jakarta Mail, and why is it not called Java Mail? Well, uh, the Java Enterprise Edition name can be very confusing, especially when we
see that Java 2 was renamed to Java, and then just lost the word Java altogether. Uh, so Java 2 Enterprise Edition, or J2EE, was released back in 1999, and this saw the introduction of uh specifications relating to enterprise technologies such as Enterprise JavaBeans, JavaServer Pages, and JNDI. In 2006, J2EE was renamed to remove the number two, and Java Enterprise Edition 5 was released, and we saw big changes such as the uh introduction of annotations. Finally, in 2019 and beyond, Java EE was renamed to Jakarta Enterprise Edition after being handed over to the Eclipse Foundation, as Oracle still owns the trademark to the name Java. So if you're curious, there's a
lot of backstory regarding the change of ownership from Sun Microsystems to Oracle and finally to Eclipse, where development currently is, and yeah, this is just a view of the full timeline. As of today, the latest version is Jakarta EE 11, with 12 being in the works. So, where does Jakarta Mail reside in this Jakarta EE platform? Think of the Jakarta EE platform as a container with a bunch of different modules, and within this container there are even smaller containers known as profiles, which comprise a subset of Jakarta components. So developers looking to create a compliant framework will have to refer to this list, and some frameworks that we may be familiar with
are Apache TomEE, IBM WebSphere, or Eclipse GlassFish. The component that we're looking at today is Jakarta Mail, and it is part of the Jakarta EE platform. During my research I examined Jakarta Mail version 2.1, which is the latest version, released with Jakarta EE 10, and according to the website, 2.2 will be released with the next Jakarta EE. If you look at the Jakarta Mail page, we can see that there's only one known implementation, which is Angus Mail. And with that, we roughly know what Jakarta EE and Jakarta Mail are. Um, I'm going to sidetrack for a bit and give a very quick primer on encoded strings in email addresses. And I want
to give a shout-out to Gareth from PortSwigger, who published a very detailed write-up on potential misuses of encoded strings in email addresses. Uh, so please go take a look at his write-up and presentation if you haven't already done so, because it's super interesting. And so, uh, during my research into Jakarta Mail, I remembered this write-up and I wanted to extend his research to see if there were any issues with this library. Uh, although I did not find any, uh, I believe a quick sharing on what encoded strings are can still be very useful when testing email address parsing in applications. This is because developers may not be aware that the email parsing libraries they are using
may actually allow and parse encoded emails. So, in case you are not already aware, like I was, email addresses can contain encoded strings, mainly due to email protocols needing a way to transport non-ASCII characters. This is defined in RFC 2047 and is supported by Jakarta Mail, and the syntax is as such. Encoded strings are wrapped in the equals and question mark symbols and consist of three distinct sections, uh, delimited with a question mark symbol. The first section specifies the charset, and UTF-8 is a commonly supported one. The next section is the encoding, which can either be B for Base64 or Q for quoted-printable. Finally, there is the encoded text, which
is where the payload goes. So, this is an example email address that is RFC 2047 compliant. And let's deconstruct it into the different sections. There we go. And so, what can we do with such an email address? In Gareth's research, uh, a misuse scenario was that he was able to inject an encoded null byte to terminate the email address early. So you can think of it as being similar to a null byte injection in PHP strings. This led to a differential between what is saved as a user's email address by the application and the email address where the emails are actually being sent to. Keep this in mind, as we'll look into this differential later.
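The encoded-word syntax above can be demonstrated with nothing but the standard library. A minimal sketch; the address is made up, and B stands for Base64 as described:

```python
# Building and decoding an RFC 2047 encoded word with the Python stdlib.
import base64
from email.header import decode_header

payload = "test@example.com"
# =?charset?encoding?encoded-text?=  with B = Base64
encoded_word = "=?UTF-8?B?" + base64.b64encode(payload.encode()).decode() + "?="
# encoded_word is now  =?UTF-8?B?dGVzdEBleGFtcGxlLmNvbQ==?=

# A compliant parser recovers the original text from the encoded word.
decoded_bytes, charset = decode_header(encoded_word)[0]
roundtrip = decoded_bytes.decode(charset)
```

The null-byte trick from the research above amounts to placing a `%00`-style terminator inside `payload` before encoding, so the decoded form differs from what a naive string comparison sees.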
So, that was a very quick primer on encoded email strings, and we will now move on to the interesting primitives from Jakarta Mail. The main focus will be on the InternetAddress and MimeMessage classes, which are shipped by default. How I found the primitives was uh quite interesting, as it was just another day of reviewing our customers' applications when I saw a reference to Jakarta Mail's native InternetAddress class. I decided to just take a quick look at the constructor to see how the input is actually being parsed. So, um, in Java you can have multiple constructors for any object, as long as they have different method
signatures. And in the InternetAddress class, I guess they are for different scenarios, and the intention is to take an email address string and pass it through its parse method, which will validate the email address for RFC 822 compliance. Afterwards, the email address and the personal name will be assigned to the object itself. So, there are two other constructors over here, taking two and three string arguments respectively. Um, do you see anything interesting here? I'm not too sure if you can see from the back, but anyway, uh, if you look at what the code is doing, or even the comments, we can see that the constructors are actually just straight up using the
input without even calling the parse method. And it is quite interesting, because the one-argument constructor will actually call the parse method to check if the input is RFC compliant, but these two constructors over here don't call the parse method. Uh, it may be an intentional design for whatever reason, but this inconsistent behavior between the different constructors can catch developers off guard. Just to do a sanity check, I went ahead and tested it out. I attempted to initialize an InternetAddress object with an invalid string, and only the single-argument constructor threw an exception. So yes, this behavior definitely can catch people off guard.
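The pattern being described can be sketched language-neutrally: one construction path validates, the other assigns fields as given. This is a Python analogy, not the actual Jakarta Mail code, and the regex is a deliberately crude stand-in for RFC 822 parsing:

```python
# Python analogy of the constructor inconsistency described above.
import re

ADDRESS_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # toy validation only

class AddressLike:
    def __init__(self, address, personal=None):
        # Multi-argument path: fields assigned as-is, no validation,
        # mirroring the constructors that skip the parse step.
        self.address = address
        self.personal = personal

    @classmethod
    def parsed(cls, address):
        # Single-argument path: validate first, then assign,
        # mirroring the constructor that calls the parse method.
        if not ADDRESS_RE.match(address):
            raise ValueError("not a valid address")
        return cls(address)

accepted = AddressLike("definitely not an email", "Alice")  # no complaint
try:
    AddressLike.parsed("definitely not an email")
    rejected = False
except ValueError:
    rejected = True
```

A developer who only ever tested the validating path would reasonably assume every constructed object holds a valid address, which is exactly the trap.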
So that's the first primitive from this class, and we can move on to the next one. Let's look at the parse method next, and we can see how it can cause confusion when parsing complex email addresses. In the implementation, the method takes in a single string and attempts to parse it according to RFC 822. Let's use this email address here as an example. It has an email address wedged in angle brackets, followed by another email address. How do you think it's going to be parsed?
Well, it seems that the library will only take the portion in angle brackets as the actual email address. So you might be wondering what the implication of such a behavior is. Let's imagine a scenario here. Say there's an application that identifies users via their email addresses, and maybe for their own developers it grants special privileges to accounts from the foo.com domain. If the registration is not restrictive enough, we could use the example email address from earlier to register. Notice that the string ends with @foo.com. We know from earlier that the email will actually be sent to the address in the angle brackets, so let that be our attacker-controlled email address.
Finally, we also assume that when granting special privileges, the application does a simple match by just looking for the last index of the at symbol and using the remainder of the string. This pattern is actually quite common from what I've seen, and it basically trusts that the InternetAddress constructor ensures the input is a valid email string. However, what the developer thinks is a valid email string may not be the same as what the parser thinks is a valid email string. This means we can now register accounts with the special privileges granted to the foo.com domain while verifying them through our attacker-controlled example.com address. And so this simple example demonstrates
how an expectation mismatch between developers and the email address parser could lead to high-impact vulnerabilities in applications. From the parser's perspective, it is simply parsing the input according to RFC standards. Something else to think about is that this concept is not just limited to web applications, but also to other types of services that rely on parsing email addresses to establish identities. Well, what about encoded email addresses? This was where I tried to apply Gab's research to see if null bytes or any other special characters could break the parser. Unfortunately, the getAddress method does not perform any form of decoding on the email address component, so we can't do any encoding tricks over here.
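The mismatch can be sketched in a few lines. The input string below is hypothetical, and the angle-bracket extraction is a drastic simplification standing in for the library's 450-line parser, purely to show the two different answers:

```java
public class DomainCheckDemo {
    // The naive privilege check described above: take everything after the
    // last '@' and treat it as the account's domain.
    static String naiveDomain(String address) {
        return address.substring(address.lastIndexOf('@') + 1);
    }

    // What an RFC 822 parser effectively does with this input: the addr-spec
    // inside angle brackets is the real delivery address. (A simplified
    // sketch, not the library's actual parser.)
    static String angleBracketAddress(String address) {
        int open = address.indexOf('<');
        int close = address.indexOf('>', open);
        return (open >= 0 && close > open)
                ? address.substring(open + 1, close)
                : address;
    }

    public static void main(String[] args) {
        // Hypothetical attacker-supplied registration string.
        String input = "<attacker@example.com>@foo.com";
        System.out.println(naiveDomain(input));          // foo.com
        System.out.println(angleBracketAddress(input));  // attacker@example.com
    }
}
```

The application's check sees foo.com, while the verification email goes to the attacker's address — exactly the split described above.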
Before we move on to the next method, I'd like to draw your attention to the parse method itself. The comment says it is an ad hoc mess and is not a perfect parser. I went and looked at it myself, and basically it's just 450 lines of intensive positional checks. It really is a mess. If you're motivated to dig into the exact implementation, I would not be too surprised if you can find some flaws in there. Okay, so the next method is also from the InternetAddress class. It's the getGroup method, and it attempts to return an array of internet addresses from the current group address. So you might be
wondering what a group address is. It's basically a special string with the following syntax: a group name followed by a colon, after the colon zero or more email addresses with a comma as the delimiter, and the entire sequence terminated with a semicolon. So a group address is different from the mailing list address we might think of when thinking about a group of email addresses. If we supply this string it will get parsed successfully through the InternetAddress constructor, and if we run the getAddress method to retrieve the parsed email, it simply returns the entire string as it is, and
now we know that a group address is valid RFC 822 syntax and will pass the validation. One example of misusing this primitive could perhaps be regex denial of service, where supplying a group address to a downstream regex may cause catastrophic backtracking. That's just an example, and that's it for InternetAddress. Let's look at the other class, which is MimeMessage. This class is used to represent the message envelope, which typically includes the email headers and the body. What's interesting here is that when parsing certain email headers such as From, Reply-To and Subject, it will call InternetAddress's parseHeader method to process them. And in this parseHeader method, it will
call the parse method that we just saw earlier. And so what this means is that the primitives applicable to InternetAddress are also reachable through the MimeMessage class. These are the few MimeMessage constructors that take in an email envelope as input and parse the From, Reply-To and Subject headers via the InternetAddress parse method. If you happen to come across an application that calls any of these constructors with user-supplied email envelopes, be sure to take a closer look at how it uses this input. Let's take a look at this sample email envelope here. We'll parse it using the MimeMessage constructor. We are using encoded strings here to demonstrate that
values in these headers are actually decoded by the MimeMessage class. This is just some sample code I used to parse this envelope; it basically calls the constructor to read the envelope as an input stream. And from the output we can confirm that encoded values in these particular headers will be decoded automatically. Coming to a potential misuse scenario: say the application takes in an input email envelope file, and maybe it has some filter to strip away any deny-listed words from the input. To bypass this check, an attacker can simply supply encoded strings of the banned words, and when the application calls the MimeMessage constructor to create the email object, the encoded strings are automatically decoded, subverting the filter. So in this scenario we see that if an application is unaware that the MimeMessage constructor actually performs input processing, in particular decoding encoded strings, that processing can potentially bypass the validation logic. Another interesting method in the MimeMessage class is the getRecipients method. This method retrieves a header value from the email envelope according to the argument type, and there are four acceptable types, which are all defined as a Java enum. The interesting one that we'll be looking at is NEWSGROUPS. Observe that if the recipient type is NEWSGROUPS, the getHeader method will
be called to retrieve the values from the Newsgroups header. The getHeader method also concatenates values from duplicate headers with a delimiter, which is a comma in this case. What this means is that this method actually accepts duplicate Newsgroups headers, and this edge case may not be obvious to developers. So let's look at how the concatenated value is being parsed in the parse method. It basically splits the input string by commas and inserts the pieces into an array list of NewsAddress objects. And the issue can be found in the NewsAddress constructor: observe that the code does not perform any kind of validation, except to strip white spaces, and
if you look at the comments, it also does not throw any exception if malformed input is received. The exact implication of this will depend on how an application actually uses the Newsgroups header values, and having duplicate header values itself may also cause parsing issues if developers do not handle them well. Here is a simple example of this: multiple Newsgroups headers in an email envelope, and if you parse it with MimeMessage the result is a list of NewsAddress objects with the spaces stripped. So that concludes the section on the Jakarta Mail InternetAddress and MimeMessage classes. We will look at a
couple of classes from the Spring Framework next, and these classes all utilize the Jakarta Mail library in one way or another. The package we are looking at is org.springframework.mail and its sub-package javamail, which contains support for Spring's mail infrastructure, and we'll be looking at the four classes shown here, which are all found in the javamail package except for SimpleMailMessage. So there's this class called InternetAddressEditor, which is used to prepare an InternetAddress object from the Jakarta library using a supplied email address input. If you look at the source, we can confirm that it simply passes the input to the InternetAddress
constructor. So the following here is a simple use case, and since the input goes directly to the InternetAddress constructor, the personal name field will be decoded if it's an encoded string. As we have seen earlier, the email address component is not decoded via the InternetAddress constructor; only the personal name section will be decoded. A potential misuse of this behavior would maybe be phishing attacks: let's say an application shows the personal name section of an email address; we can make it look like the email came from a different sender. As for the remaining three classes, we'll look at them together as they are somewhat related. These are the MimeMailMessage and MimeMessageHelper implementations, as well as the SimpleMailMessage class. They are all implementations of the MailMessage interface. The SimpleMailMessage class has a similar primitive to the InternetAddressEditor that we just saw: the address value will eventually make its way to the InternetAddress.parse method. This will decode any encoded strings in the personal name field, which once again may catch developers off guard. And this is just some sample code that I used to confirm the behavior, basically using an encoded email string here and confirming that the decoding occurs in the personal name section of the input address. So MimeMailMessage is another class
that is an abstract class used to represent the email object, and the helper class is the one used for handling the logic. One of the helper constructors takes in the MimeMessage object as well as a string, which can be sketchy if it's used unsafely. The string in this constructor represents the encoding type of the encoded string. The constructor will actually create an encoded string for us, using the encoding method that was specified, if the personal name section contains at least one non-ASCII character. Once again, one implication is spoofing for phishing attacks, depending on how the application uses it. So, we just have a bit of time left.
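Before moving on, here is a rough, stdlib-only sketch of the RFC 2047 encoded-word decoding these classes perform. The real libraries do this via MimeUtility; this hand-rolled decoder handles only the Base64 ("B") form and exists purely to make the behavior concrete:

```java
import java.nio.charset.Charset;
import java.util.Base64;

public class EncodedWordDemo {
    // Decodes a single RFC 2047 encoded-word of the form =?charset?B?data?=.
    // Only the Base64 ("B") encoding is handled in this sketch.
    static String decodeWord(String word) {
        if (!word.startsWith("=?") || !word.endsWith("?=")) {
            return word; // not an encoded-word; return as-is
        }
        String[] parts = word.substring(2, word.length() - 2).split("\\?");
        if (parts.length != 3 || !parts[1].equalsIgnoreCase("B")) {
            return word; // unsupported encoding in this sketch
        }
        byte[] raw = Base64.getDecoder().decode(parts[2]);
        return new String(raw, Charset.forName(parts[0]));
    }

    public static void main(String[] args) {
        // "=?UTF-8?B?QWxpY2U=?=" is the Base64 encoded-word for "Alice".
        // A filter matching the literal string "Alice" against the raw header
        // would miss it, yet the parsed personal name comes out decoded.
        String header = "=?UTF-8?B?QWxpY2U=?=";
        System.out.println(decodeWord(header)); // Alice
    }
}
```

This is the transformation that lets encoded banned words or spoofed display names slip past checks that only look at the raw header bytes.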
We can take a look at one primitive in the Hibernate library. Hibernate has this @Email annotation that developers can use to validate email addresses. Unfortunately for us, the default regex used does not really approve of encoded strings. It also allows a custom regex to be supplied by developers who want to take matters into their own hands. The default regex is pretty intense, and it performs validation in two parts, basically split into before and after the at symbol. I played around with the regex to see what kind of strings could pass validation, and I've managed to find one simple instance. The email string right here will pass
the validation checks. We simply wrap the bad characters in quotes before the at symbol, and this will pass the default regex check. Its usefulness will depend on how the application parses this email string; for example, if it does a naive split by the at symbol and uses index one to identify the domain, then it may lead to some issues. All right, so that concludes the sharing we have today. To recap, we have seen a few primitives from the Jakarta Mail library as well as in the Spring Framework. And the main takeaway I hope everyone has is that we should always take a close look into the libraries that we come across, especially if they
have the potential for interesting behaviors like the ones we have seen. With that, we have come to the end of this sharing. Thank you for your kind attention throughout, and I hope you found it informative. If you have any questions, please feel free to approach me after, or now.
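To make the quoted local-part trick concrete, here is a small sketch. The email string and the naive split are hypothetical application logic of the kind described in the talk, not Hibernate's code:

```java
public class QuotedLocalPartDemo {
    // The naive pattern mentioned above: split on '@' and take index 1
    // as the domain. (Hypothetical application logic, not library code.)
    static String naiveDomain(String email) {
        return email.split("@")[1];
    }

    public static void main(String[] args) {
        // A quoted local part may legally contain '@', and strings like this
        // can slip past a permissive validation regex.
        String email = "\"tricky@part\"@example.com";
        System.out.println(naiveDomain(email)); // part" -- not a real domain
        // Taking everything after the LAST '@' finds the actual domain:
        System.out.println(email.substring(email.lastIndexOf('@') + 1)); // example.com
    }
}
```

Either way, the example shows why hand-rolling the split after a regex has "validated" the string is fragile.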
Thank you. Any questions for Gia?
>> No. Okay. Thanks. >> Thank you so much. Thank you. >> Oh. Oh, thank you. >> Thank you. >> Awesome. >> Hand that off to you. Got
an afternoon break now. So, half an hour special
Um, so this one plugs into about Stick it on. That's for the live stream. >> Yeah. >> Uh, and we're using that one for the broadcast in the room. >> For the room. Can I use a wireless? >> Probably. As long as the audio works. Okay. >> Go. Just cuz I'm going to be talking quite a bit. >> Can we talk to the radio mic? >> Yeah, this should just work.
>> They won't feed back, will they? On each other. Should be fine. Let me just clip this one on this side. Awesome.
Do I need to test? Oh, you can I can hear me. >> Yeah. >> Awesome. >> And uh I was already getting audio from that. >> Good luck. I heard that the >> Oh, man. This live demo is going to be very very yoloistic. >> Very. It's not going to be deterministic. I wish you were all live now. >> Magic. >> Cool. >> Really? >> Yeah. >> Do you need to >> How do I cool? >> All right. Uh, if you grab your seats, we'll finish up with the last talk of today. So, I will hand over to Christian talking about threat modeling as code. >> Thanks, mate. >> Awesome. Thank you. [Applause] Hello everyone. Um, thank you for
hanging out. I know it's the very last session of the day and I'm the last, I guess, speech before drinks are hopefully happening down at Steve's, I believe, unofficially. This afternoon I get to talk to you about a topic that I'm super, super passionate about. Plus, I'm gonna really, really try and demo some stuff. But I'm demoing it using our favorite buzzword, LLMs. So, it's going to be very, very... I've done this demo like 10 times with a 30% hit rate; it's gone way off the rails. So, I guess we'll see how this goes. I also have to apologize that after working on these slides for so many days, in my mind, it started to
sound like I was just saying threat yodelling, which I think actually works, because a lot of the time when people are performing threat modeling, it honestly feels like we're just shouting off the top of a mountain. Anyway, in today's presentation I'm going to talk a little bit about threat modeling. And I need my spectacles now because I'm getting old. I'm going to talk about and demo quite a bit of this open source project that I've been working on for the last few years, particularly how to leverage CI/CD to automate workflows and business logic checks for your threat assessments and threat models. And then we'll have a look at
how artificial intelligence may help, or your favorite non-deterministic postmodern Markov chain e-book generation machine. It's pretty random, but also kind of fun. I did actually ask an AI to figure out a backronym for what I was trying to say when I was talking about AI, and it came up with this, which I thought was quite funny. And don't get me wrong, I think there are a lot of really cool use cases. Obviously, the greenhouse gas emissions side of things is not fantastic. Just as a reminder, make sure that you validate the output from your AI, uh, slaves. My name is Christian Frichot. I've been
doing security for a long time. The last 10 years I've been focusing on, and been really fortunate to work in, kind of big, big tech companies, both in San Francisco and also remote from here in Perth. Now, I would say my time working for HashiCorp and my time at Atlassian, where I currently work, certainly started to have a pretty profound impact on, I guess, my approach and my philosophy towards threat modeling, particularly when you want to collaborate with software engineers. So, hands up if you work as a software engineer. A couple of hands. Hands up if you work with software engineers, right? A lot of hands. Okay, awesome. I definitely
am a wannabe software engineer; I don't get my hands on the keyboard as much as I possibly can. I guess my next question is: who here performs threat modeling? Okay, cool. Good. What about people who think you might be doing threat modeling in the future, or it's a capability that you want to try and expand and grow and develop in your place of work? Okay, awesome. So personally, I do come from this school of thought: practical threat assessment can be a really valuable and effective way to help address and manage technical risk, to help deliver more secure products that your customers can trust. Now, when we
think about how threat modeling and threat assessment typically get executed at enterprise organizations, it might follow something that looks a little bit like this. You've got some product manager and they go, "We need to build a widget." They'll start planning the widget. They'll get a project manager. They might spin up a whole bunch of project logistics. Maybe there are some integration points with the security team; like maybe the security team goes, "Oh, you've got a new project. You have to come and talk to us." And then the security folks kind of get hooked in. And then they start gathering information, and they start writing a bunch of documents, and people create diagrams and data flow diagrams,
and there's lots more documentation. And then they go, "Okay, cool. We now need to do a whole bunch of threat modeling workshops." And then there are workshops, and you're sitting down and you're doing whiteboarding, and it's all awesome. And then maybe the security team has some gating processes: cool, this feature has been developed, we've done the threat assessment, you've done some pen testing and all these other things, and you can finally release it. Now, what's the problem with this? Well, one of them is that it's really boring. It just doesn't sound very fun to me. And you know what software engineers would actually rather be doing?
Programming. And when we think about what we get from this exercise, you really have to question the value of the time invested. Now, if you're doing this well, you should be driving positive change. So, the project comes in as an input into your black box of threat assessments, and the output is that the project team agrees there are things they need to change. They go, "You know what, that is a problem. We're going to have to tighten authentication," or whatever. And I guess typically the other aspect that's really important is documentation, because quite often systems are never built just once. They're probably expanded. People might change. There might be
extensions to systems: you know, we built this big widget, but now we want to add AI to the widget. So now you've got security coming in to do a threat assessment of those changes. So really good documentation can also really help future threat yodellers. But a lot of the time it's kind of like, the process is done, we move on to the next one. So what's the problem here? I think if we take the lessons that we have seen over the last 20 years in the DevOps and agility space, we've seen this massive industry push for,
"we deploy code to production a million times a day and we're moving really, really fast," and yet a lot of threat modeling and threat assessment is typically still based on, I guess, manual assessments, potentially documenting things in Confluence pages or in Word documents. Obviously, some companies use threat modeling and threat assessment tools to help this along, but we're missing a lot of these optimizations. One of the things that I really enjoyed about working at HashiCorp is that HashiCorp was all about the source of truth in their organization, and their source of truth was code. Everything was as code, and they were basically, I guess, culturally moving to this point
where everything should be documented in code: policies, standards, security vulnerabilities, threat assessments, obviously the products that they sell, strategies. And if it's not committed into a repository, then effectively it's deemed sort of nebulous. So let's bring DevOps productivity into security. I'm sure someone else has said this, but I have also said this. Let's think about infrastructure as code and the impact that infrastructure as code has had on the industry, and let's bring it to threat modeling. And that's where ThreatCL came in. I did the first iteration of this while I was working at HashiCorp. Obviously, everything at HashiCorp is as code, and I was like,
our threat models should be as code. Hands up if you have used Terraform. Yeah. Okay, cool. So, Terraform is a popular infrastructure as code product from HashiCorp. I really liked the syntax of Terraform HCL files. If you've ever worked with HCL, the HashiCorp Configuration Language: ThreatCL is built around HCL. You can document your threat models in JSON as well, and the utilities will process those too, but HCL unlocks a bunch of really interesting capabilities just because of the nature of how the specification is built. It's git-friendly and it's great for CI/CD. I'm just going to jump into a terminal, but before that, I actually
saw a meme the other day, and for those that work in Terraform, I thought this was quite funny. [Music] Because I have definitely been there. I feel like I've done terraform apply and then everything broke. I feel like this was me, except I would never be able to lift whatever that skeleton is lifting. Not on a good day. Anyway, so I am going to jump into a terminal. For those that are comfortable working, I guess, in HCL, ThreatCL gives you the ability to drop into your editor. And we're just going to threat model a recruiting app. This is just a really simple example of what the specification
looks like. This is not exhaustive, but we can start working on this. Let's say we're building a recruiting app, an app for recruiting people, whatever, and maybe we've got use cases like "recruiters can list jobs." What's another use case? Someone give me one. What would be another use case in a recruiting app? >> Shortlisting jobs. >> I mean, that's okay. You guys are missing the point, but that's okay. How about "candidates apply for jobs"? That seems quite important. Now, threats is obviously where the power of threat modeling comes in. You document all the potential bad things that could go wrong, you can put impacts there, and you can figure
out controls. So maybe something like "an attacker DDoSes the app," and that would impact integrity, or, sorry, thank you, availability. See, this is why... I'm doing pair programming on absolute hard mode. And ThreatCL basically gives you a bunch of utilities. So you can now validate this, right? Okay, cool. So according to the specification, this threat model, and this is a very light example, has been validated. The CLI tooling allows you to view it, which is also not necessarily super exciting, but it gives you this markdown representation of it. Now, for those that don't even want to look at HCL or the specification, you can also generate
this interactively. At this point you get a friendly Q&A. So we'll do this again: recruiting app. I'll skip all these optionals. Christian. It's a new initiative. It does face the internet. Let's say that the size is medium. We've got information assets: job listings, this is all the jobs, it's public. We'll add another information asset: job applications, applicants apply, and that's probably confidential. Use cases: recruiters list jobs. I won't add any more. I won't add an exclusion. Attackers flood and DoS the system, and that's availability. And then I'll say no, and again it basically spits out a threat model, a fully populated HCL representation of a threat assessment.
And this is just a simple example. Obviously, you can add multiple threats, and you can add a lot of controls and other bits and pieces. Now, so far, the things that I've shown you as far as output goes are not super exciting. Markdown output is maybe useful if you're publishing these things into GitHub; GitHub is kind of cool because it can natively render Markdown, so you get a pretty version of that. But what you actually want to start doing is building dashboards and doing all sorts of other things, and the tool allows you to do that as well, including generating DFDs. So you can do things like export markdown, which will give you a raw
markdown version of the same threat assessment. Then for more advanced functionality, you can start using the dashboard command. The dashboard command obviously has a lot more options, because it allows you to set the extension types, whether or not you want a dashboard file, and you can provide custom templates. Obviously Atlassian has a particular format for threat assessments, and a different company will have a different format, but this will generate a real simple HTML example of a dashboard. And if you threw in multiple HCL files, it'll actually generate dashboards for all of the threat assessments; this is just showing one. And then this is just a real simple
page. I mean, I am obviously not a front-end developer, so this is completely vanilla HTML, but the fact that I created HTML is quite good, I would say. Actually, as a full disclaimer, most of this was written before vibe coding was a thing; if you go back through the commit history, for anything over the last 6 months that is absolutely not true anymore. So I've demonstrated some of the real simple things: you can define a threat model in this file, and you can interact with it and dynamically process it. Now, as soon as you start doing this, you can shift your paradigm for how you manage
threat models. All of a sudden, you can start managing them like code. That means maybe you have software repositories for your widget product; you can put a threat model in there, and you can put in some workflows to automatically process, validate, and publish it. The widget changes, you update your threat model, the CI/CD picks it up, and it republishes your threat assessment. Alternatively, you can centralize all your threat models into a single repo, which is also potentially useful: you've got a single repo with all your threat models, you can obviously commit to them, and you get all the benefits of git. You get the version history, and you can get
blame and find who, you know, messed up on something in the past. Plus, you can automatically generate and publish documentation. GitHub Actions: hands up if you've used GitHub Actions before. Okay, cool. So GitHub Actions is GitHub's CI/CD solution, and this is just a really simple example of creating a dashboard and exporting it. ThreatCL has native GitHub Action magic, so you can just use ThreatCL, and this one will validate all your HCL files. It will then build a dashboard file, and in this instance it will actually recommit the dashboard and all your changes back into the repository itself, which is a little bit quirky but kind of fun. You
think about it: you've got a repo with all your threat models, someone updates one of your HCL files, and this thing will publish updates and then recommit them back on top of itself. And what you get when that successfully runs, see if I can click over to it, is this. This is an example that was dynamically generated. And if you click on one of these threat models, this one actually had a data flow diagram defined inside of it, and when you publish your dashboard, it renders the PNG of the data flow diagram as well. So you can see there are diagrams and there's some threat model stuff. All right.
Okay. So: code files for threat models, workflows inside of your CI/CD. But we haven't really touched on why HCL. One of the things I really liked about Terraform is its modularization: you can include things from other places into your Terraform files, define really expansive modules elsewhere and then import them, and that influences your build and your environments and stuff. ThreatCL has exactly the same functionality. You can define central control libraries, and I've got a couple, for the AWS security checklist and a couple of other things, and then you can define a threat model for the product that you're building, and you don't have to retype anything or
recreate anything. You can import that information from a central library, which itself you can also version control. So over time you can obviously adjust those controls; maybe the effectiveness of the controls changes over time, and things like that. Baselining is also really great. Take the widget that I was talking about before: maybe you've got a threat model for the widget, and the company wants to build a new product on top. You can now build a threat model for that, refer to the widget's threat model, which sets the baseline threat model, then just add the bits that are different, and then publish that
as the final result. So this kind of modularization I think is quite powerful, and I'm going to try and figure out how to do this live. I'm going to get rid of some of these weird placeholder things, and I'm going to refer to an example on the internet. Up here on this git repo, I've got an example of an authentication control defined in a git repo up on the internet. And what I want to do first of all is import that. So we can add an import pointing at a particular file in the GitHub repo. The format's a bit funny with the pipe
separator for the library's expanded controls. I'm just going to validate that to make sure it still works. Cool, so now I've imported this. I want to pull that authentication control into this threat. So maybe there's a new threat, something like "attackers can impersonate a candidate," and I want to import a control, that authentication control. So you define a new import: it's an expanded control, and it was called "authentication control," like that. And if I validate it and then view it, you should see it's now added a threat, which is, attackers can, whoops, scrolling, this is a little fun, "attackers can impersonate a candidate," and then it has dynamically pulled in that
centralized control from a separate GitHub repo. Under the hood, this is using another HashiCorp package called go-getter. So you can refer to local files, you can refer to git repos, you can refer to HTTP assets, and it handles all the magic underneath quite well. And that really starts to unlock this modularization for your programmatic threat models. Now, doing all this inside of HCL also starts to give you some other really interesting benefits. Hands up here if you've used Semgrep. Not as many hands. There are more people using Terraform than Semgrep; that's interesting for a room full of security people. Semgrep is a static analysis tool. They have an open source
version, like a community edition, plus also like a professional thing. Um, for those people that put their hand up that they use Semgrep, did you know that Semgrep has rules for Terraform files? Have you? Yeah. Okay. So, Semgrep, uh, they built the engine to be able to grok HCL because they wanted to be able to provide uh, scanning capabilities for your infrastructure as code. Now, this is awesome because you can write business logic rules in Semgrep to validate your threat models. So, for instance, maybe you want to throw a warning if someone has documented a threat and they don't have a control. You don't necessarily want to stop it from publishing, potentially, but maybe you just want to throw a warning
to them. And using Semgrep rules, we can actually start doing these sorts of business logic checks against processed uh, threatcl um, files. So to do that, I do have to export the file, which may be a little bit weird, because it needs to build the final product. And then I can scan that processed file, and you will notice that it's going to throw an error. So in this instance there was that DoS threat that we had documented, and we had not put a control into there. So we should address that problem, and we can address that problem. So what was it again? Availability. So you can see here there's like no
controls for this threat, and we want to add an expanded control block. Maybe it's something like CDN: uh, "we use a good CDN", um, or something like that. So if we validate that still works, and then we export it and then we scan it with Semgrep, we should see that should pass... except we've got two different business rules that we uh, documented, well, configured inside of Semgrep. One was the current control is not implemented, so I forgot to set implemented to true, and the second warning was there's no risk reduction value. So you've kind of documented that there is a control, but you haven't documented that it's reducing the risk appropriately. So let's quickly go fix those things. Let's
say that we use a better CDN, uh, and it's implemented equals true, and its risk reduction is at least 40%. So we export that, and then we scan it, and this time it passed, nothing flagged. So in this instance that threat model met all the business requirements uh, that we had documented inside of Semgrep, and that's quite a powerful capability. Like, we can kind of define requirements that you want for these threat models and manage them as you do manage code, um, which is absolutely something that we should be doing. This is all still way too manual, right? Why am I opening up a terminal and fussing around with code? What about all them juicy AIs that people are talking
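For a sense of what one of those business rules could look like, here's a hedged sketch of a Semgrep rule that warns when an expanded control block has no risk reduction value. The rule id, the block and attribute names, and leaning on Semgrep's `terraform` language id to parse HCL are all my assumptions; this isn't the exact rule from the demo.

```yaml
rules:
  - id: control-missing-risk-reduction   # hypothetical rule id
    languages: [terraform]               # Semgrep parses HCL via its terraform support
    severity: WARNING
    message: Control is documented but has no risk_reduction value
    patterns:
      # Match any expanded_control block...
      - pattern: |
          expanded_control "$NAME" { ... }
      # ...except ones that set a risk_reduction attribute
      - pattern-not: |
          expanded_control "$NAME" {
            ...
            risk_reduction = $VALUE
            ...
          }
```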
about? Um, so threatcl also has an MCP server. Um, now when I started working on these slides, I was using Cursor. Do people know what Cursor is? It's like an AI IDE. Um, and then a week or two ago, Atlassian um, they uh, launched their new Rovo Dev CLI tool. I'm not too sure if anyone here has used Claude Code; it's kind of like the terminal version of Claude Code, but this is the Atlassian version. Now, under the hood, it uses exactly the same models. So, it uses Anthropic models. You can select the new OpenAI models if you want. And I thought, I'm going to run this demo on absolute chaos hard mode. I'm going to
try and use the Atlassian Rovo Dev AI product to create a threat model from scratch. Uh, and it's probably not going to work, but we'll see how we go. Um, in this folder, I have got uh, a recruiting app that I have vibe coded. Um, uh, it's not super exciting, but what I'm going to do is I am going to start. So, acli is the Atlassian CLI tool, for Atlassian customers out there. Rovodev is their interactive Rovo chat thing. Um, I'm going to try and run this to see if it can create a threat model for me. You can see it's using Claude Sonnet 4, and similar to Claude Code, we've got the ability to interact with MCP
servers. So you can see down here, in this instance, it's got a threatcl uh, MCP server which has access to a whole bunch of different tools, like interacting with threat models, validating threat models, and things like that. So I'm going to ask it to review the threatcl HCL specification, then analyze what this web application is about, draft a preliminary product security threat model using the specification, write this HCL to a file, then validate it. If there are any bugs, directly edit the file and then revalidate it. Um, and then after it's validated... this last bit never works... but after it's validated, I also want you to generate a DFD. So it said, "Cool, I'll help you do the
thing." It's already quickly looked at the HCL specs, so it understands what they should look like. It's going to start going through all the content. Oh, I actually had a backup file. So, I wonder if it's going to realize that there is an existing threat model file, but that'll be fine. Um, it's going to look at all the code. "Now I have a good understanding of the application. Let me create a comprehensive threat model for this recruitment web app based on my analysis." This is a client-side web app that manages things. It's going to try and create this file.
Come on. If I ran out of tokens, that would be very funny. Uh, because I actually don't know how many tokens I've used on this thing. And I'm using, like, I guess I'm using a trial of the Atlassian product; I'm not even using the real thing. Um, it's definitely doing... Okay, cool. Uh, it's successfully created a threat model file. "Now let me validate that this threat model file..." It successfully validated that it cre