
our next talk for you right now: How We Ran an Online Hardware CTF, by Thomas Hobson, Emily TR and Tom Newan. Let's welcome them to the stage. [Applause] Hello everybody, great to see you all. Welcome to our talk, How We Ran an Online Hardware CTF. Our speakers today will be hex_f, also known as Tom; myself, Emily; and Dot, also a Tom. A quick rundown of DownUnderCTF: we run an online capture-the-flag competition, and this year we had over 4,000 players participate across a range of challenge categories, including web, reverse engineering, binary exploitation and more. For the fifth iteration of DownUnderCTF we wanted to introduce a new class of hardware challenges. For many of our players this might have been their first opportunity to experiment with hardware security, even if they might not otherwise have the tools or equipment to do so. Our competition was open to the public and we welcomed teams from around the world. We are immensely proud of their efforts: collectively, they attempted our hardware challenges over 2,000 times. To facilitate this dream, we first needed to design a completely novel hardware and software stack. We wanted to create a platform of real circuits, allowing us to surpass the limitations of simulation and aim for complete authenticity in modelling threats and attacks. Users would be able to upload arbitrary code for execution, lending them complete control over their virtual hardware lab, while maintaining the security, isolation and safety of our underlying infrastructure. Early in the design process we had to pick the microcontroller that would be the heart of our platform: what the user would be interacting with the most, and what they would upload their code to. For this we evaluated a lot of options on the market and tested them against a strict set of requirements. First and foremost, it should have a development environment that is easy to install; we wanted to make this as small a hurdle as possible. Embedded IDEs don't always have a strong reputation here, and this quickly rules out most of the industrial chip families. Support in Arduino is a must, to help our players on their learning journey. The microcontroller should have an existing development board and large, established community resources; we wanted to aim for something already popular with the maker, hobbyist and education audiences. To maintain the safety and stability of our systems, we need a clear view of what is running on the processor at all times, with external control and debugging. Unfortunately, this excludes possibly the most widespread family of chips among hobbyists: the venerable 8-bit AVR microcontrollers in the Arduino Uno and its relatives. Next, while we appreciate our players sticking to the spirit of the competition, being a large CTF with
open registration means we do have to take certain mitigations against bad actors. Malicious code should not be able to permanently modify the behaviour of the chip or damage it, for instance by setting fuses or writing one-time-programmable memory. Finally, many of our options contained wireless functionality. Chips like the ESP32 have been made widely available and affordable thanks to the proliferation of IoT devices, and have earned a wonderfully loyal following among makers and home automation enthusiasts. If we used those chips, that wireless functionality would have to be disabled somehow to prevent us from interfering with the local radio environment. I know we're asking for a lot already, but it would be really, really great if it was cheap as well. Why can't we have that? Even with all of our requirements, we were able to find the perfect chip that ticked every box: the RP2040, also known as the Raspberry Pi Pico. On top of everything I previously mentioned, it sports a powerful dual-core Arm processor, lots of RAM, fast programmable IO pins (you need those for precise timing attacks) and a completely stateless design with no fuses or one-time-programmable memory. To quote a Raspberry Pi engineer, it was designed to be absolutely unbrickable. At less than a dollar, it is truly the only perfect processor for our application; this project would not be possible without it. We've picked our microcontroller, but it takes a lot more to transform this building block into a fully fledged
system. Passing off to HexF to tell you about our surrounding hardware architecture. Oh, is it working? Cool. Hello everyone, I'm HexF, also known as Thomas. We're going to kick off with the topology of the system. Basically, everything here is centred around the host computer, which talks to a bunch of these managers, something we built ourselves. The managers talk over USB, and they interact with these runners over SWD, the Serial Wire Debug protocol, and UART (serial). The runners are then paired with these challenge cards, which are ultimately the things you're attacking with the code that runs on the runners. So this is basically what a challenge card looks like. We start with all the challenge circuitry, which is unique per board, per challenge. Then we get to the edge connector: this is a PCIe edge connector, picked because they're really cheap to manufacture and the mating connectors are relatively cheap too. It's also double-sided and gives us enough IO lines to bring the IO from the runner up to the card, so we can allocate it how we see fit for each challenge, as well as letting us provide power and data on the left side of the card. The data on the left is for
computer definitely won't. So, we'll take a closer look at the manager now. First, we've got this management RP2040 over here, like Emily was telling us about; it's the Raspberry Pi Pico chip as well. This runs custom firmware we wrote with Rust and RTIC, the Real-Time Interrupt-driven Concurrency framework. It's connected to the computer through either the USB-C port here, which was useful when debugging with just one target or one cluster manager, but not so much with a lot of them, because we ran 20 in the final production cluster. This 5-pin JST connector lets us hook up really easily to a custom USB hub we designed, and also allows us to provide extra power delivery that USB-C wouldn't support without a bunch of extra external components. This USB port exposes about six serial endpoints: a management API, which is just request-response, so we can talk directly to the manager; a log stream, so we can see what's going on; and four passthroughs for the UART serial back to the USB host, so you can talk directly with your runners. From here are the runners, all connected through to the management RP2040, and these are directly paired with the challenge card connectors that the challenge cards go into. We've also got these status LEDs so we can check what's going on with the cluster at a glance. And this was the cluster we ran in production; you might be able to see the interface in the bottom left, which Tom's going to talk a bit more about later. This just hooked up to my laptop over USB; there's a USB hub there as well to plug all of these in, so lots of USB device-tree stuff going on. Also, this is what about 50 USB serial devices look like plugged in; this is about
half the cluster. This is where we started running into problems with the Linux USB stack, which wasn't properly supporting all these USB devices, so we had to make some quick modifications to fix that. So, once the cloud software (which, again, Tom will talk a bit more about) has some firmware ready on the host, how do we get the code onto the runner and eventually running? The process starts with the host saying, "I've got new firmware." The manager then sets up the flash agent: a little piece of shim code that it installs onto the runner over SWD. Again, SWD is the debug protocol you use on Arm chips to talk directly to the chip, bypassing the need to actually have code running on it. The shim allows us to call the ROM routines in the RP2040 to actually write to flash, as well as letting us decompress flash images rather than sending them over byte by byte, which is really efficient. Once it's installed, the flash agent reports as ready, and it then lets us tunnel directly from the host to the runner.
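The compress-on-host, decompress-in-shim, write-in-pages flow just described can be sketched in miniature. This is a toy model, not the actual agent: the zlib codec, the page size constant and the bytearray standing in for the RP2040's flash are all illustrative assumptions.

```python
import zlib

FLASH_PAGE = 256  # QSPI NOR flash is typically programmed in 256-byte pages

def host_pack(firmware: bytes) -> bytes:
    """Host side: compress the image so it isn't sent over byte by byte."""
    return zlib.compress(firmware)

def shim_flash(blob: bytes, flash: bytearray, base: int = 0) -> None:
    """Shim side: decompress, then write page by page (modelling the ROM routines)."""
    image = zlib.decompress(blob)
    for off in range(0, len(image), FLASH_PAGE):
        page = image[off:off + FLASH_PAGE]
        flash[base + off:base + off + len(page)] = page

firmware = bytes(range(256)) * 8   # a 2 KiB dummy image
flash = bytearray(64 * 1024)       # pretend flash
shim_flash(host_pack(firmware), flash)
```

The win is the same as in the real system: the compressed blob crossing the slow link is much smaller than the image being written.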
Over UART, that tunnel lets us send the firmware directly from the host, mostly avoiding the manager. From there, your code runs for a bit, and eventually it hits a breakpoint and stops; that's what we use as a stop condition, alongside timeouts and other stop conditions. So yeah: how do we not get hacked? We're running untrusted code on the hardware, and what we want to do is sandbox this microcontroller. In this case, sandboxing for us means we don't want runners to be able to steal code out of other runners, or talk to them at all; we want
them to remain in complete isolation with their runner and their paired challenge card, just to make sure there's nothing going on between them, so you can't steal a flag from someone else, for example. We also want to make sure that once we tear down someone's job on the runner and it starts back up again, the challenge remains functional. For this, we could just try not running untrusted code at all, which is a good way, but the problem is it's slow: we'd probably end up using some custom DSL, or Lua, or MicroPython, or something like that, with a whole bunch of our own drivers, and at that point we might as well just emulate it. That's no fun, and we'd also lose all the embedded stuff you get, like low-level PIO access; again, we might as well just emulate it in the end. So instead we implemented a bunch of protections; these are just a few of them. We've got time limiting, because your code shouldn't need enough time to mine a Bitcoin; we want jobs to run nice and quickly and make sure there are resources left for other people to come and run their code. We also have no connections between runners. Originally in the design process we were thinking of just using a multi-drop SWD bus, meaning everything is hooked up together; the problem is that someone could potentially send messages on SWD and start programming and probing other microcontrollers, which we really don't want. So instead, all of our connections run only directly between the manager and each runner, and the manager essentially serves as a little switch. We also have no overclocking, which we enforce by constantly polling each runner over SWD and checking that its clock registers are set to a reasonable speed.
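A minimal sketch of that polling check, with invented register names and a plain dict standing in for the SWD reads (the real manager firmware is Rust on RTIC, and the real check reads the RP2040's clock and PLL registers):

```python
# Hypothetical known-safe clock configuration; these names and values are
# stand-ins, not the RP2040's actual register map.
SAFE_CLOCKS = {"pll_sys_fbdiv": 125, "pll_sys_postdiv": 6}

def clock_watchdog(read_reg, halt_runner) -> bool:
    """Return True if the runner's clocks look safe; otherwise halt the job."""
    for reg, expected in SAFE_CLOCKS.items():
        if read_reg(reg) != expected:
            halt_runner()
            return False
    return True

regs = dict(SAFE_CLOCKS)     # simulated runner register file
halted = []
ok_before = clock_watchdog(regs.get, lambda: halted.append(True))
regs["pll_sys_fbdiv"] = 400  # a player tries to overclock
ok_after = clock_watchdog(regs.get, lambda: halted.append(True))
```

The point of polling from the manager side is that the check keeps working even if the untrusted code on the runner is actively hostile.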
We're more concerned here about overheating and the long-term effect on the chip: overheat the chip and the transistors start to break down, and the chip becomes unreliable. What could also happen is someone drawing way too much current, by, I don't know, trying to overclock it to 1 GHz or something; that would draw far too much current and end up tripping a fuse, which would immediately take the cluster down. But it's about as good as we can do. We also have periodic health checks, which make sure that if we're going to run someone's job, they get a known-good working environment. This worked by uploading the solve scripts we wrote directly to each of the challenges every so often, just to check the environment was still working. So, we have all this cool hardware; what did we actually end up doing with it? We built four challenges this year. The first one was i2c: an easy challenge where you had to read an I2C EEPROM. This challenge was more about getting people familiar with the environment, rather than getting them accustomed to what a timing attack is, or
whatever. There was one writeup that was really nice to see, so we'll quickly step through it. The first thing they did was get the Arduino environment set up with all our infrastructure, so they could upload code to our system. Then they wrote some code to dump the EEPROM; it looks like an example piece of code, but it's a piece of code nonetheless, and it dumps the contents of the EEPROM. Then they worked out that I had slightly changed the EEPROM: instead of having all the address lines wired to ground, like you usually would, I wired them all high, which changes the I2C address of the chip a bit, so you need to address it at a different address on the I2C bus. From there, they just ran the code with that modification, grabbed all the data, decoded it and got the flag. The next challenge we built was called the Door Al. This is a medium challenge based on Samy Kamkar's OpenSesame research, which looked at garage doors and how their remotes use a really insecure password system. We implemented this one with hardware logic gates, because it was a really interesting use of our infrastructure.
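As an aside, the EEPROM address-pin trick from that i2c solve can be sketched numerically. This assumes a 24Cxx-style EEPROM (the talk doesn't name the exact part), whose 7-bit I2C address is a fixed 0b1010 prefix plus the three address strap pins:

```python
def eeprom_i2c_addr(a2: int, a1: int, a0: int) -> int:
    """7-bit I2C address of a 24Cxx-style EEPROM from its A2/A1/A0 strap pins."""
    return 0b1010_000 | (a2 << 2) | (a1 << 1) | a0

grounded = eeprom_i2c_addr(0, 0, 0)   # the usual wiring
tied_high = eeprom_i2c_addr(1, 1, 1)  # the challenge wiring
```

So a dump written against the usual all-grounded address silently talks to nothing, and the fix is a one-line address change.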
It's something we could have emulated with some code, but it's very nice to have the real propagation delays between gates: things you have to consider if you're trying to run fast enough. Then we also built another challenge, Bird Loader. This is a medium challenge where essentially you've got a password-protected bootloader, and the idea is that you have to perform a timing side-channel analysis against a string compare that was vulnerable to timing side-channel attacks. It's really nice here because it runs on real hardware and it's a real-world attack, and once you perform the attack and recover the password, you can just run the flag command and get the flag from the bootloader.
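That byte-by-byte timing attack can be simulated. Here the "time" is just a comparison counter, and the secret, charset and padding byte are invented for the demo; against the real bootloader you would measure response latency instead:

```python
def check(guess: str, secret: str):
    """Early-exit compare: the returned 'time' leaks the length of the
    matching prefix, like the vulnerable bootloader compare."""
    ops = 0
    for g, s in zip(guess, secret):
        ops += 1
        if g != s:
            return False, ops
    return len(guess) == len(secret), ops

def recover(length: int, charset: str, secret: str) -> str:
    found = ""
    for _ in range(length):
        timings = {}
        for c in charset:
            candidate = (found + c).ljust(length, "*")  # '*' assumed absent from the secret
            ok, t = check(candidate, secret)
            if ok:
                return found + c
            timings[c] = t
        found += max(timings, key=timings.get)  # slowest rejection = correct byte
    return found

recovered = recover(7, "abcdefghijklmnopqrstuvwxyz0123456789", "hunter2")
```

One correct byte per position turns a brute force of charset**length into length * charset guesses, which is why even a few nanoseconds of leak is fatal.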
And the final challenge we built was called DCTF. This is a hard challenge built on the RP2040; its full name is Down Under Cabs: Trains of Fairies, because it uses DESFire, sort of. The idea is that you have to exploit a flaw in the RP2040's RNG: if you don't have an external crystal, the RNG always just reads zero. From there, you exploit a flaw in the DESFire three-way handshake: if you're pretending to be a card and you know the RNG output of the other party, the reader, you can bypass the handshake with no encryption key at all.
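The effect of a zeroed RNG can be shown with a toy challenge-response model. This is not the real DESFire protocol: the PRF, message shapes and sizes are invented purely to show that a predictable nonce makes a captured exchange replayable without the key.

```python
import hashlib, os

def prf(key: bytes, data: bytes) -> bytes:
    """Toy keyed function standing in for the real cipher."""
    return hashlib.sha256(key + data).digest()[:8]

class Card:
    def __init__(self, key: bytes, rng):
        self.key, self.rng = key, rng
    def challenge(self) -> bytes:
        self.rnd_b = self.rng(8)  # nonce; a zeroed RNG makes this constant
        return prf(self.key, self.rnd_b)
    def verify(self, response: bytes) -> bool:
        return response == prf(self.key, b"resp" + self.rnd_b)

KEY = b"sixteen byte key"
zero_rng = lambda n: b"\x00" * n

# Session 1: a legitimate party (who knows KEY) authenticates; the attacker
# eavesdrops and records the response that went over the wire.
flawed = Card(KEY, zero_rng)
flawed.challenge()
captured = prf(KEY, b"resp" + b"\x00" * 8)  # what the attacker observed

# Session 2: the attacker replays the capture, never knowing KEY.
flawed.challenge()
replay_ok = flawed.verify(captured)

# With a healthy RNG the nonce changes, so the replay fails.
healthy = Card(KEY, os.urandom)
healthy.challenge()
replay_fails = not healthy.verify(captured)
```

The takeaway matches the challenge: challenge-response authentication is only as strong as the unpredictability of its nonces.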
And so, with all of that, there are a few ideas we have for the future which you might see next year in DUCTF. We're looking at more side-channel analysis challenges: maybe next year we'll implement some power ones, where you're, say, reading a key out of a secure element by measuring the power it draws, as well as more timing analysis, again looking at string compares or other vulnerable functions like that. We might also do some fault injection, either voltage glitching or EMFI, and some RF challenges where you're sending or receiving data on various frequencies. Maybe even some custom silicon, but that's looking a little expensive, so some sponsors might be nice for that. So yeah, without further ado, I'll pass on to Tom, who's going to talk more about the infrastructure we built. Thanks HexF. So yeah, I'll mostly be going through our networking, services and infrastructure today. I'll first go through what we wanted to build. We wanted complete automation for our challenge card provisioning and allocation of hardware resources; having our organisers assign challenge cards through Discord DMs is not something that scales. The second main feature we wanted was logging and auditability: basically, which challenge card was allocated to which team, as well as all the inputs and outputs. The benefit of this is that we could provide logs to players at the end of each run. We also wanted some rate limiting to protect ourselves from bad actors. Originally we planned to have geographical redundancy by hosting from two separate locations; however, we scrapped this idea in the end due to shipping costs. So we built a platform called Eevee that runs on top of our community's challenge platform. All the state is stored in a persistent MySQL instance, and the messaging architecture was built on top
of NATS, which is a lightweight, open-source messaging service. One of its most important features is that it supports in-order messaging, as well as both broadcast and request-reply patterns. Each API, as well as each challenge instance, has its own message topic, and we can create and subscribe to them on the fly. All of the network communication tunnels through our reverse proxy, and in order not to expose NATS over the public internet, we just left mTLS on and called it a day. The Eevee platform is made up of five separate Rust microservices which are independently scalable; this also makes our infrastructure more resilient, since downing one of the services doesn't mean a complete outage of the platform. So if you noticed the infrastructure running slowly at some point, it might have been me pushing some bad code changes during the CTF. I'll give a brief rundown of all the services. The admin service runs the admin panel, which is used to activate and deactivate individual challenge cards; we also used it to take a sneak peek at your submissions throughout the CTF, to see how close you all were to solving the challenges. The front end is the main entry point for players: it's where all the code is submitted, and where you can view and download your logs. The terminal service is like a remote serial console, allowing you to directly interact with challenge cards by sending and receiving data. We have two backend services: the agent and the auditor. The agent serves as a bridge between the cloud services and the underlying hardware clusters; it runs on HexF's laptop, and it organises the flashing of code and the communication with the hardware challenge managers. The auditor's main function is to collect all the serial logs and output them into one log file, which you can then download to inspect the output, as well as handling the state of an individual session: for example, if a challenge card runs over its allocated time limit, we immediately send a kill signal from the auditor. So I'll go through a brief sequence diagram of what happens when you submit your challenge. First we upload it to a cloud storage bucket and return an instance ID to the player. On the agent, there's a service that continually polls in a loop for challenges in the run queue; if such a challenge exists, it'll be dequeued, and the agent proceeds to download the firmware directly from the bucket. Once we flash the firmware and check that everything is good, we send a message from the agent to the front end to register the session, then create the room and set up the auditing and time limits. While the instance is active, we continuously stream serial input and output through NATS, all recorded by the auditor; if the player wants to connect to the terminal, the terminal service just subscribes to the instance topic and relays the messages back to the player. And when the instance times out, or the code finishes running, we broadcast a termination message from the auditor, and this initiates cleanup on every service that's interested in the instance.
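The submit, poll, flash, stream, terminate lifecycle just described can be sketched with an in-memory queue. The real system uses a storage bucket, NATS topics and real hardware; the queue, the hardware callback and the log format here are all stand-ins:

```python
import queue, uuid

run_queue = queue.Queue()  # stands in for the persistent run queue
run_logs = {}              # auditor's view: instance ID -> serial log lines

def submit(firmware: bytes) -> str:
    """Front end: store the upload and hand the player an instance ID."""
    instance = str(uuid.uuid4())
    run_queue.put((instance, firmware))
    return instance

def agent_step(run_on_hardware):
    """Agent: dequeue one job, 'flash' and run it, let the auditor record output."""
    try:
        instance, firmware = run_queue.get_nowait()
    except queue.Empty:
        return None  # nothing in the run queue
    run_logs[instance] = list(run_on_hardware(firmware))  # streamed serial output
    run_logs[instance].append("[terminated]")             # termination broadcast
    return instance

iid = submit(b"blink")
done = agent_step(lambda fw: [f"ran {len(fw)} bytes"])
```

Decoupling submission from execution through a queue is what lets the front end stay responsive while a limited pool of physical runners works through the backlog.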
That includes closing the TCP connection and uploading the run log to our bucket. We are planning to release a blog post with a lot more detail, but we just haven't gotten around to doing that yet. Okay, so this is a picture of our UI; I'm not going to play the video for now, but that's what you would see when you log on to the website. And sorry, I'm not sure why it's not going to the next
slide. Yeah, wait, sorry, just one second, I'm not sure what's going on. The video is too powerful. All right, just...
Okay, I'll just skip that. So yeah, we'd like to thank some additional people who helped us throughout the building of the hardware challenges. We'd like to thank Sam, one of our infra gods, for building the front-end UI as well as integrating with the scoreboard, and also Jamie for doing challenge design and validation, and for load testing our infrastructure throughout the CTF to make sure we could handle all your traffic. And yeah, last but not least, we want to thank all our sponsors for funding our adventure. Thanks everyone. [Applause] Thanks, what a great talk. We might have time for one, maybe two questions, if anyone has them. Oh, we've got a question over at the front; who might take that one? Thanks guys. What was the most unexpected part about building the hardware challenges? Was there anything you ran into that you thought might stump you and prevent you from running it during the CTF? I think that would definitely be the shipping times for all of the hardware we ordered. I think it arrived the day before the competition, and I spent a good long night assembling it all. Other than that, probably just a lot of little bugs in the Rust code; that was certainly a big challenge for us and had to be dealt with. Let's thank our speakers one last time. Great talk. [Applause]