
All right. So, um, thanks everybody for, uh, joining our ridiculously named talk. Um, for those of you who are wondering why our first slide looks like that, that is what happens when somebody tells me that my first slide is too boring and I needed to make it more interesting. >> I think it looks perfect like it is. >> Yeah. So, if any of you know Soldier of Fortran, uh, it was him who told me it was too boring. Uh, and to be honest, Oh, wait. Is this working? Does this work? >> Do I get a clicker? >> That is a clicker. >> I'm happy. This was the original slide, so I get it. I get it. But, uh, epilepsy
aside, um, thank you for coming to see our talk. Um, Larry, if you want to >> Okay, so normally we go past the introduction thing, but yeah, thank you guys for coming and watching us demonstrate how we can actually make laser fault injection and some other laser fault tools available for the masses. There was the thesis joke that you normally do here. >> It's been a long day. >> It's been a long day. So, this talk has been categorized by a few people as five theses in a trench coat trying to sneak up on someone. >> So, there's a lot to cover here and we don't have long. So, I normally make a Smokey and the Bandit joke about having a long way to go and a short time to get there. So, unfortunately, I fumbled that. So, >> it's okay. Uh, when we created this research, we had to consolidate about nine PhDs' worth of work altogether, um, within two months of time, because we thought we would have more time. Uh, but we'll cover that in a second. Uh, for those of you who are more familiar with our talk, we did deliver this last year, um, initially at Black Hat. The only reason why I mention this is we have made some modifications to the slides. So if you have any questions, or you feel like we haven't answered any questions during this talk, uh, feel free to have a look
at the video slides there, um, for no other reason than I promise you that's not malware and that wasn't just, like, a totally great way of phishing you. >> [laughter] >> Um, all right. So, >> and it's got little cute dinosaurs. >> Uh, we get asked a lot of questions about how on earth we got around to actually naming the title the way we did. Um, so what you're seeing right now is, uh, every single title we came up with when we were given an hour's worth of notice, um, to name the talk, because >> we submitted, we filled out all the forms. We had everything working and then we forgot the title. >> Yeah. So, you know, it was not exactly
last minute, but it was like within the last five. And so, we were just frantically typing things at 3:00 in the morning after going to the bar, which is why a number of them are just named Pew. >> We really, really wanted to get the lasers-go-pew-pew thing. So, uh, we're really, really happy that we got it in. Uh, but that's kind of how it happened. It was, uh, late nights, totally sober decisions, and absolutely not, um, last minute, uh, working out. So, >> and not high on gummy bears. No, >> before we get started, let us introduce to you who this team is. So, we call ourselves, uh, Project LOREM. Um, as you
can see by some of the photos, Larry and I have actually worked together for a very long time across multiple companies. And we always used to make the joke that if we ever did a major research project together, we would call it LOREM. And that's cuz he hates me. >> So whenever Sam's working on a report or a document or a spreadsheet or anything else not finished, obviously I put LOREM in there. So now LOREM is on every single document associated with this, in every single place it possibly can be, and she can't fix it. >> Yeah. And the worst thing about having LOREM as the project name and the placeholder is I use that to define something not finished, and a year later
this still isn't finished. So I really am having, uh, the time of my life with this. But of course, once we named it LOREM, we had to figure out what to call it. So we totally didn't absolutely use ChatGPT for this, but it came up with Laser Oscillation for Retrieving Electronic Memory, which we actually think was pretty accurate to what the actual tool was meant to be. >> It works. >> It works. Yeah. Okay. So a bit about me. My name is Sam Beaumont. Um, I go by Panther, and I'm kind of known as the woman who never sleeps. I'm always tired. In fact, most of you saw me tanking a Red Bull over there. Uh, I definitely wasn't out
in Vegas at 5:00 a.m. and waking up at 7:00 a.m. He does have spares. And the way that Larry kept telling me to come to BSides was, "Hey, they have an entire fridge full of Red Bull for you in the speaker area." >> Um, I am terrible with nouns. So, uh, for any of you who ever want to say hello to me, or maybe have known me or worked with me before, uh, I will tell you I'm terrible with anything with a noun. Uh, even if you are my best friend, a common household object, or like something else, I will forget it. So, please continually speak about yourself in third person. Otherwise, I'm
going to consistently try and go through your entire life story trying to figure out what your name is when we are having a conversation. So, please >> [laughter] >> talk about yourself in third person. >> So, I'm Larry Trowell. I have a horrific memory for acronyms. I can't remember what any of them stand for. Part of it is dyslexia. I also am a specialist in all embedded systems, whether it's medical, financial, video game. I'll hack anything with a chip. And the thing I put about here about me is that I have the most unique sense of direction. Doesn't matter how I try to go somewhere, it'll be the randomest way. I'll follow someone random and I'll end
up exactly where I'm supposed to be, even if I'm completely lost. >> We call it the hand-of-God navigation. Um, because honestly, it doesn't matter where you're going, Larry will make it there, um, eventually and in time. Just, you know, one story is I asked him where he was, and he told me he was following a little family of people, uh, through the streets of London, and somehow made it to the bus station that was three blocks away, cuz reasons. Okay. Uh, Project LOREM is made up of three additional people who are not here today. Um, the first person who, uh, we really wanted to shout out is a man called, uh, Chaz Beck. The name is underlined because Chaz is not a real word or a real name as far as all autocorrect systems go. Um, and so every single time I talk about Chaz, it always corrects it to Chase, chassis, every other type of Chaz except for an actual Chaz. Chassis. >> That's what I'm calling it from now on. >> Chassis. We call him ChazGPT. So if you hear us say that joke, um, it's literally, if we don't know something, we ask him, cuz he really helped us with the electrical engineering part of this, uh, especially when we had some problems with the laser at the very beginning. >> Yeah. >> Um, the second person is Casey Rep. So Casey really allowed us to, uh, bring this
open source and online to the community, because as I'm sure all of you know, documentation's real, real hard, uh, and it's really easy to do the thing, uh, compared to actually writing about the thing. Uh, so, uh, he's our android and also known as the, um, second Sam, because every single time people meet me and Casey together, they think I'm Casey and he's Sam. And we end up just going with it. >> And he occasionally gets sidetracked and he starts telling stories with puppets. It's the weirdest thing. Don't know where he keeps them. >> And the last person we want to introduce is a man called Curtis Shelton. Um, I emphasized the T there because for two years, um, until about 24 hours before the Black Hat talk last year, I thought his name was Shel Dunn. So, I had been calling him Curtis Shel Dunn the entire time. And, uh, he never corrected me for two years because he thought it was just all part of the joke. Um, >> it's on you. [laughter] >> Um, so Curtis was instrumental to the research because he allowed us to come up on stage and talk about artificial intelligence and/or machine learning, uh, and not look like complete morons. Uh, so a little bit about that later on in the talk. Uh, [snorts] together we are a team of five people who specialize in hardware and integrated systems, also known as
hardware and embedded systems, uh, and artificial intelligence and machine learning. And, uh, at the time of the project, uh, we all worked for the same company. Um, so NetSPI, we're a security services company. I'm not going to spend much time on this, but, uh, we do a lot of, uh, specialist pentesting services, from mainframes to AI to hardware. So if you're interested in talking to us, let me know after the talk. All right. And a very special thanks just before we end. Uh, we wouldn't be here without these two individuals, uh, John McMaster and Dr. Matt Linley. Um, uh, John was our emergency acid supplier, cough, not the fun kind, cough, um, uh, when we ran out of, uh, decapsulating material. And, uh, Dr. Matt Linley was available for us at 3:00 a.m. when we broke something that wasn't expensive. >> Yeah. No, totally wasn't. >> Wasn't very pricey. Nope. >> Okay. So, you guys all came here because you wanted us to talk about lasers. So before we get started with the lasers, we're going to have a little bit of a legal disclaimer. >> You've got about 30 seconds to read this OSHA document that tells you everything you need to know so you don't come sue me, cuz I don't have any money, and that's why I wanted to do this. >> Don't worry, everything here is legally
binding. [laughter] >> Um, so for those of you who don't want to read, >> very sorry, but you did ask for a cup of sanity. Unfortunately, in the most ironic thing ever, I ordered a mug that said, you know, cup of sanity. >> Yeah. >> And it was unavailable and it didn't come. So, you got this. [laughter] [snorts] >> Sanity is backordered. Okay. >> Thank you. >> Cheers. I really hope this is straight-up vodka. [laughter] >> Yeah, you could get that if you want, but that's just water, cuz I wasn't sure. >> Thank you very much. So, [snorts] usually by this time we've gone past this slide, so you couldn't have read all of it. But if you happened to notice
LOREM everywhere in there, good job. [laughter] >> I really needed to fill that slide out. All right, so really there are four things that you need to take away from this so you don't sue us. Um, the important thing about working with lasers is number one, you can be blinded or worse. >> So even though the light is invisible to the naked eye, it will still burn you and blind you. Um, you must wear protection. >> And you probably, if you're lucky, only have two chances to protect your eyes. >> And [snorts] because we cannot just demonstrate to you why lasers are so dangerous on your actual eye, we chose to instead use a digital sensor. >> So, if you look at the images here on the left, you see people filming various concerts where they have laser displays. And all those little black lines you see are pixels that will never be populated, because the sensor has been burned out of the actual phone. And our eyes are more sensitive than digital sensors. So, uh, please, lasers are dangerous. Uh, because if you do not follow the practices, um, uh, that we >> laid out, and you look at it like the guy on the left, you'll end up like Velma on the right. >> Yeah. Okay. So, >> not talking to me. Hands up. Who knew that laser was an acronym? Nobody likes a showoff, guys. [laughter]
So talking about all the different types of lasers and what they do is out of the scope of this talk, but we wanted to briefly comment on what a laser is. So laser, as all of you showoffs knew, um, is an acronym for Light Amplification by Stimulated Emission of Radiation, which I totally had to read off my own slide. So basically what that means is, if you pump enough energy into a lasing medium, out will come a laser beam, which is a coherent beam of light that you can use to focus and target things, in our case. >> I just remembered you're not Carl. >> Sorry, I figured it out. I'm bad with names. I figured it out. [laughter] >> So, so sorry. All right. So why lasers, and why would you use that? Um, uh, why would you use lasers against a hardware target, or in hacking, or pretty much in anything that you use lasers for? >> Yes,
a clever answer. >> Our answer is it's easier to source. [laughter] >> It's also easier to target. If you're dealing with heat, it's a lot easier to get a targeted light beam than an actual targeted heat beam. And basically, we're going to go into the fact that transistors are all just very poor photodiodes. As he said, you put enough energy into it, it charges up and it changes state. >> Uh, transistors being what holds memory within chips. We'll get into that later. Uh, lasers are typically completely contactless, and they are typically non-damaging physically to the target if used correctly, >> and you typically use these in a manner that's called a semi-invasive technique.
>> So what do we mean by the word semi-invasive, uh, when using lasers, uh, in the context of, uh, hacking? Well, uh, when you're prepping a target to receive a laser, there are traditionally two different ways that people like to employ. Uh, the first way, which is my favorite way, is using chemical decapsulation, where effectively you take acid, not the fun kind, and you use it to eat away the top layers of the epoxy that holds the silicon die, uh, exposing it to the outside elements, i.e. the laser, so it can receive it from the top down. Or method number two, being Larry's favorite: >> brute force. We're talking sandblasting. We're talking extreme heat, or
in the case of our favorite method, a ZIF socket on a power sander. And if you watch, we love this one, because you can see this is a failed attempt that was completely safe. Those are safety hands, and the safety goggles are on his head. But you see the little wires popping out; that's when he knows he's gone a little bit too far. Just a little bit. Yeah. >> If he goes a little bit further, he's going to lose that last finger, and then he knows he's going way too far. [laughter] >> Uh, yeah, we were definitely wearing personal protective equipment during this entire thing. Uh, lead by example, guys. Come on. Come on. Um, but yeah, so
you can do sandblasting, you can do heat, or you can do friction. Uh, and this exposes the back of the die, uh, in order to receive a laser. So these are what's called semi-invasive, because the operational target will still work despite going through hell and back. It may not always work after the heat, but if it survives the prep, it will still run. >> And so we finally get to laser fault injection, the first reason why you would use a laser against a target. What is laser fault injection? Well, laser fault injection is a technique where you use light to affect processor functions with nothing more than light itself. Um, it typically gets used
to cause processors to skip instructions. And the reason why it's so effective is that, um, laser fault injection is normally used in this context to help you bypass security mechanisms on a chip. >> It can also be used to modify data on the chip, depending on which areas of the memory bus you hit. And there are typically three sides from which you actually target the laser. Front side uses the methods like Sam was talking about, in which you actually expose the top of the die via sanding or via acid. Back side is typically used when there's no actual shot from the top, so you remove the back and then shoot it with the laser, which requires alteration of the circuit so you can actually hit the back of it. And more commonly now, because of protections, there's the lateral side, because the top and the bottom have started to be shielded. So you're trying to hit the laser beam from the side and have it bounce to where you need it to be inside of the actual chip. >> So, my next slide is about asking a lot of questions. You guys might know that I really like puns. Um, so this is going to be a theme throughout the entire thing. Um, right. So, uh, hands up: do any of you actually own a laser fault injection rig? >> Interesting. We have some clever ones in here. Yeah,
I know. But this doesn't count. This is what the slide's about, right? So, if any of you had actually raised your hands, the next question was going to be, would you be willing to actually tell us how much it costs? Because my next slide was going to say drinks were on you. Um, the average cost of a new LFI rig on the commercial market, at the time that we were doing this research, is approximately 150,000 USD. Um, used or decrepit systems, which don't have, um, a guarantee of functioning, uh, can run anywhere between 5,000 and 20,000 USD, >> and that's a big amount of money to risk for a system that may not work, and it may not even work on the target, because the target may be immune to it. >> And remember, guys, these prices are pre-tariff, so we don't know if those went up or down. We're going to guess going up. [snorts] All right, so, um, this causes a little bit of a problem for people who do laser fault injection. Um, for those of you who hate doing expensive reports like I do, uh, the bar for justifying an expensive purchase of anything between 5,000 and 150,000 USD, to either your own company or a client, is pretty high. Uh, >> and [clears throat] you need to have a receipt for it. And I'm really bad at that. My finance folks will tell you
that. Um, so this is a huge problem in the community, because, uh, whenever you find problems where, let's say, uh, an IC is vulnerable to laser fault injection, or maybe you find something that is vulnerable to fault injection in general, the reasoning we normally get from a client perspective is that, ah, it's too expensive. No one's going to be able to have that anyway. >> This is a nation-state-level attack. No one's going to be able to do this in the garage. You're going to have to have advanced hackers, advanced study, a four or five year degree, specialized skill, all that. >> And, uh, we didn't believe that. Um, also, I didn't want to submit 150,000 dollars' worth
of expense report in order to prove it. So that's where Project LOREM started, where it really was an operation to, um, reduce the expense report, uh, so I didn't have to get a lot of receipts, and we could prove that laser fault injection is not a nation-state-level attack, and is something that is very easily done from your own home. Um, in reality, Larry's special focus was to democratize tooling. Um, I think this is pretty special to BSides: too many times have we been told that we can't do something because it's too expensive. Um, here we were trying to enable people, so that you can do it from your own home. You can learn about it and you can get better at it. >> Yeah. It's like most of the citizen science stuff: if you study enough on how to do it, you can find a cheaper way of doing it with stuff you have at your house if you need to. >> Okay. So to start off, um, we needed to figure out how to build a laser fault injection rig in our own home. Uh, which splits up into the essentials, which are only three key items. The first key item is you need an imaging system, >> so you can see what you're trying to hit. You need a positioning system, >> so you can aim at what you're trying to
hit. >> And you need a laser, >> because lasers are cool. >> All right, so let's start with the imaging and positioning system, because those are the easiest things to tackle. Traditional imaging systems in laser fault injection are microscopes, because they're used for targeting and identification. >> So in a number of papers I looked at, they were using this particular trinocular microscope as the basis for the system. The reasonable going price that you can get on eBay is about $4,000, pre-tariff. Let's remember that part. All right. So, that's a traditional imaging system used in an LFI, and that's if you were to build it in your own home. Now, um, traditional positioning systems are typically tables or, uh, specialized purpose-built motors. Um, and they're used for nanopositioning of the target or the laser, depending on the type of LFI rig you have. >> These are also pretty expensive, just because of how much it takes to build them and how precise they have to be. So they're about $3,000. [snorts] >> So we're already running up a cost of about $7,000 if you wanted to do this at your own home, >> and we haven't even got to laser parts yet, >> just hardware. >> Or you could do it our way. >> So while searching the internet, we found this project here, the OpenFlexure project, which is a microscope that's made for citizen science. And the beauty
about this thing is it combines two of the things we need in one package. Notably, the plastic structure allows for it to have a 50 nanometer positioning system that's repeatable and pretty accurate. [snorts] >> So, effectively with a 3D printer and some Amazon parts, total cost running you about $280, you can build an LFI positioning and imaging system in your own house that is not missing anything aside from the laser. That is a cost reduction of approximately 6,720 USD. That's a lot of money. Um, and, uh, we haven't even got to the best part. The last part that we needed to tackle was the laser. So, traditionally, laser fault injection rigs work by firing a highly powerful laser in a very short amount of time. So to give you an example, a YAG laser can provide tens of millijoules in less than 10 nanoseconds. And this type of setup can cost just over 30,000 USD on a good day, and that's just for the laser. That's a bit of a problem. So in order to really tackle this, Larry and I had to ask three major questions. How much energy is actually needed to cause a fault, also known as a glitch? Did we really need that much energy in that short amount of time? And how much time can we get away with when attempting to cause a glitch? Cuz at the
end of the day, laser fault injection is about inducing faults within an IC. So, do we really need to do these things, or where can we get away with less? That brings us up to the coolest part. >> So traditional research has shown that you need millijoules of energy to be able to induce a glitch. But recent research, and actual theses, have gone and done some more mathematical calculations, and it turns out you only need between 40 and 80 nanojoules, which is something like a 10-to-the-power-of-six difference between the two. >> And some of you may be wondering how on earth is that possible? And that is because of the wonderful world of
physics, or also known as the photoelectric effect. So the trick is we can use low power over time and actually build up the charge. As we said earlier, it's all about how much energy gets to that transistor. Well, they discovered that as long as you put the energy in there before the actual clock pulse happens and it changes state, it holds. So if you put it in there, for example, for 25 nanoseconds at 2.5 watts, that's 62.5 nanojoules, you get past the threshold of the glitch and it can change the state. So the trick here is to use a low power laser over a longer amount of time, instead of instantly with high power. And that is how you can induce a glitch, as long as you do it within the same clock cycle. And, uh, a 2.5 watt laser is way easier to source. And, winner winner chicken dinner, we found something on Amazon that hit our criteria. >> These were on Amazon, two for $9.95. [laughter] >> Um, so nice budget. >> It's a great budget, guys. My expense report is looking real sexy right now. All right. So, uh, before we go on about the lasers, we do want to kind of cover why cheap laser pointers enable us to do this, and why we keep bringing you back to that OSHA disclaimer you totally read, that we're definitely going to nod along to here. Um, >> so does anyone remember all the stories about green laser pointers in the last 20 years? Does anyone know why those are bad? Well, it turns out that to make a cheap green laser pointer, you basically take a high power IR laser, the kind that could burn your eyes, and you double the frequency, which goes from IR to green. The problem is these are only about 5% efficient. So, if you put out 5% as green, let's say 5 milliwatts of green, which is the legal limit in the US, you're actually putting out 95 milliwatts of IR in the 1,064 nanometer range, which will be important for a reason we'll get to later. So what you're seeing me highlight is, when you buy these laser pointers and they're cheaply made, uh, you may be, quote, seeing the, uh, green frequency, but what is actually outputting in the, uh, invisible light spectrum, uh, is 1,064 nanometers, which is important because that happens to be the exact IR range and output that we need to induce glitches, >> because silicon is actually transparent to that wavelength. So it can actually, or translucent at least, go through the actual silicon back of the die and hit the back of the surface. >> Yeah. Um, so just to kind of cover, 1,064 nanometers is considered one of the most common wavelengths to use when glitching. Um, and, uh, it's also considered, uh, one of the most
effective wavelengths for glitching, but it's not strictly necessary. We're just kind of highlighting that cheaply made products can end up helping us out later on. All right. So, um, we've talked about an imaging, um, a positioning system, and a laser, uh, pointer from Amazon. Um, if you put that all together, you get the RayV: the first generation affordable laser fault injection rig. Every time I say that line, I feel like a saleswoman and I hate it. >> For absolutely no cost. >> For absolutely no cost, you get a laser fault injection rig. >> And even better, no expense report. >> No expense. Yes. No expense report. All right. So, we're going to put this all together. So, what do we do? So, on certain targets where it goes too fast, we use an FPGA to slow the clock down, because that's what actually gives us the time to have the charge build up. Then, you need an LED to be able to see what you're aiming at, and a camera that can see that spectrum. So, we're using a 1,050 nanometer LED to be able to see through our microscope, so we know what we're aiming at. We need a green laser pointer so we can actually cause the glitch. And then we needed the objective from a normal standard microscope to be able to see what we're looking at, and to target the laser through the actual objective, so that the focal length of the objective is that of the laser, give or take some optical tolerance. So we do have pretty photos to show you what the assembly actually looks like. Um, for those of you who want to stay after, we also have the actual rig right here to give you kind of an idea of what it looks like. Um, what it looks like fresh out of the 3D printer is on the left, uh, then when you assemble it with the components, it's on the right. Click. >> Yeah. >> Click faster. Uh, when you load the target, it goes onto the top of the positioning stage. And then
uh, this is what the target would look like if you look from the underside, with the laser firing up into the silicon. And, uh, because Larry actually likes you all and none of you brought laser goggles, he swapped out the laser here for a blue one, just to show you, uh, that the laser does fire up, and this is what it would look like if it were working today. >> We did make sure it was low enough it would not damage my camera. >> Yeah. And we purposely didn't bring the laser. I'm definitely going to say we purposely didn't bring the laser here, just in case somebody wants to plug this thing in. Um, uh, but you do have a way for us to have a look at it. And the reason how this is all possible, because we haven't addressed something specific, is the beauty of plastic is that it bends. PLA, when combined with stepper motors, has a, um, uh, an actual flexibility, which allows you to make a lever system that'll give you nanometer resolution as it goes through the stepper, >> of 50 nanometers. And for context, the diameter of a human hair is >> around 100,000 nanometers. So this system is accurate enough for us to use for laser fault injection, um, positioning, and a whole bunch of other things, just through a 3D print, just through bending PLA. That's it.
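For those who want to sanity-check that flexure math, here is a back-of-the-envelope sketch. All the numbers are illustrative assumptions (a geared 28BYJ-48-style stepper, a 0.5 mm lead-screw pitch, and a modest lever reduction), not the OpenFlexure design's actual geometry, but they show how gearing plus a bending-PLA lever gets you from millimeters down to tens of nanometers:

```python
# Rough sketch of flexure-stage resolution: stepper gearing divides the screw
# pitch into small steps, and the printed PLA lever divides them again.
# All numbers are illustrative assumptions, not the real OpenFlexure geometry.

STEPS_PER_REV = 4096        # geared 28BYJ-48-style stepper (assumed)
SCREW_PITCH_NM = 500_000    # 0.5 mm lead-screw pitch, in nanometers (assumed)
LEVER_REDUCTION = 2.5       # mechanical reduction of the flexure lever (assumed)

def stage_resolution_nm() -> float:
    """Smallest stage move per motor step, in nanometers."""
    screw_travel_per_step = SCREW_PITCH_NM / STEPS_PER_REV
    return screw_travel_per_step / LEVER_REDUCTION

print(f"{stage_resolution_nm():.1f} nm per step")  # about 48.8 nm
```

With those assumed numbers the arithmetic lands right around the 50 nanometer figure quoted above; in practice the limit is backlash and repeatability, not the math.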
Uh, actually, normally when I do this talk, I bring the naked printed part without it being attached to the stepper motors, and you can really see it moving. Um, the GIF that you see on the left-hand side is the best way that you can see everything moving when it's not assembled. All right. So we're going to blast through this part. >> Go fast on this one. So basically, what you have here is the positioning system, which is this system on top that moves over the axes. You've got the objective, which is for the microscope part that has the lens.
You've got the stepper motors, which allow it to move back and forth through a controller. You've got the beam splitter, so you can actually shoot the laser and see what you're aiming at. You have the green laser pointer. You have a Raspberry Pi with a daughterboard on there to control the stepper motors. We've got a manual trigger if we're trying to test something, so we can actually do glitching manually. We've got the IR LED that we can actually see the target through, which you can see on this little board here. And we've got the FPGA to slow down the clock and actually do some other analysis while we're doing it. But does it work, the lasers? [laughter]
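To make the rundown above concrete, here is a hedged sketch of what a glitch campaign's control loop looks like. The hardware calls (moving the steppers over the die, firing the laser, reading the target's response) are hypothetical stand-ins, simulated so the loop logic is self-contained; the energy arithmetic matches the 25 nanoseconds at 2.5 watts figure from earlier:

```python
# Sketch of a scan-and-glitch loop: step across the exposed die, fire one
# low-power pulse per location within a (slowed) clock cycle, and record
# where the target's math check fails. Hardware calls are simulated stand-ins.

import itertools

GRID = 5          # scan a 5x5 grid of positions over the die (illustrative)
PULSE_NS = 25.0   # pulse width in nanoseconds, within one slowed clock cycle
POWER_W = 2.5     # laser-pointer output power in watts

def pulse_energy_nj(width_ns: float, power_w: float) -> float:
    """Energy per pulse in nanojoules: watts times nanoseconds = nanojoules."""
    return power_w * width_ns

def target_glitched(x: int, y: int) -> bool:
    """Stand-in for reading the target: True means the firmware's math check
    failed, i.e. we induced a fault. Here we pretend one location is sensitive."""
    return (x, y) == (2, 3)

def scan() -> list[tuple[int, int]]:
    hits = []
    for x, y in itertools.product(range(GRID), repeat=2):
        # On the real rig: move the steppers to (x, y), then trigger the pulse.
        if target_glitched(x, y):
            hits.append((x, y))
    return hits

print(f"pulse energy: {pulse_energy_nj(PULSE_NS, POWER_W)} nJ")  # 62.5 nJ
print("glitch locations:", scan())
```

Note that 62.5 nanojoules sits inside the 40 to 80 nanojoule window from the theses mentioned earlier, which is the whole reason a $5 pointer works at all.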
>> So, this [snorts] is a GIF video of our setup in the lab. And what you're going to notice here is a green light underneath the thing. There's going to be an LED and a red light over there. When the yellow light blinks twice, we have successfully glitched the chip and gotten to an area of the memory that we're not supposed to be in. >> So, for pre-context, we built a, um, little piece of firmware that was designed to do some mathematics, uh, that was associated with different colored LEDs. Um, so if you disrupted the maths on the chip, uh, it would be one, uh, and if you didn't disrupt it, it would be zero. >> If it skipped the instruction, or if it changed the addition so it wasn't the right value, the light would come on, >> i.e. if, uh, LFI was detected, uh, we could actually visually demonstrate it here. Um, so yeah, this is it. And, um, yeah, if you see when it loops over, uh, for those of you not color blind to the color green, uh, you will see a green flash right underneath the, uh, the target, and that is the actual laser lasing straight into the target. Yeah, there you go. Perfect. Okay, so, um, when we first submitted the talk, our initial idea was to make laser fault injection affordable. Uh, we did this under $500 to
be more precise, it was $320 at the time that we did this. Uh, and we proved that you could do LFI at home with 3D printed materials and everything like that. But why stop there? Much like overachieving people, uh, who often get told that we can't do things, um, we asked ourselves, why should we stop there? And what else can we use the RayV for, if not just for LFI? Because, you know, not everybody does laser fault injection. Could we use a rig like this for something different? So, um, the first thing we thought about was something completely different, and that's imaging memory. >> So, we Okay, go ahead. >> Go for it. So a few years ago, bunnie Huang released a paper called IRIS, which is Infra-Red In-Situ inspection of silicon, and that showed that you can actually see through the back of silicon using a 1,050 nanometer LED. So, as we talked about a little bit ago, we use that on the bottom of the chip, and this lets us be able to see the image. Now, we are fortunate in the fact that the camera that we used for the laser was actually rated to be 800 nanometer tolerant, but it actually went all the way to 1,200. So, we were able to use that to see what we were looking at, so we can actually aim through it. And so, we owe this part to
Bunny Hong for actually providing us the technique to be able to see what we're trying to shoot >> because you can do laser fault injection completely blind, but that's kind of useless. Um the reason why this is kind of interesting is uh we forgot to mention this is specific about imageely uh sorry statically imaging chips. Uh so just because you have to see the chip in order to do LFI you can also statically image the chip using iris. Uh so to give you a kind of an example of what this means um using a visible light camera that is what you would see if you were looking at the silicon and using Iris or our invisible light camera uh this is
what you could see uh on in within the silicon itself. Uh, and to kind of further emphasize the fact, um, using Rave's kind of system of panning and scanning, uh, I do promise you that this GIF was way higher resolution, but turns out you can top out the HD requirements of, uh, PowerPoints. Um, so this is a better resolution image. It's very, very sharp, uh, very very accurate, and it enabled us to really look at the components and exactly what we were looking at, especially if it was a blind analysis. Yeah. >> Um, all right. So, there's more to this. As a result of us combining Rave and Iris during our research, we had to actually check if it
worked before we got on stage, right? You know, all part of prototyping is to check it works. Um, and what happened uh during our research is I happened to in front of me have an Arduino Nano that I was decapping uh and on the back of it was an FT FTDI chip. So I was like, "Okay, cool. I wonder if the Rave Plus Iris could image this thing." So we put it on the Rafe. We walked away and we came back and it was fully imaged. And um what was really interesting is on the left hand side is the image that we got using Iris and the rave. Um now hands up, who actually knows what the image on
the right is? >> No. Okay. Um >> yes. Yes. The but all right. So the image on the right is an actual silicon die of what we should have had if we imaged the FT FTDI chip that we thought we had the FT232R. What we got was the image on the left, which was a very interesting and undocumented alternative that may or may not have been produced by uh FTDI themselves. So, what we ended up discovering is that um I think Larry makes a joke here about hidden object puzzles. >> Yeah. >> So, you might see difference between the two images that you could spot similar little cartoons that we had on the Saturday mornings like newspaper strips.
There are slight differences between the two. >> Slight. >> The reason this was very interesting is that, for us, this was a real-life version of checking our supply chain and finding out that the IC we had was not the IC we thought we had. So we ended up being able to use the RayV for this — especially when you start building things at home, and the whole point of this is to make it affordable. How many of you have ordered components online, received them, built firmware for them, and then it doesn't work and you don't know why? A lot of the time that's because you received something
you shouldn't have, and there's no way to tell, because you can't see through epoxy unless you have an X-ray. So this thing — >> Or a belt sander. [laughter] Yeah, I guess that's semi-invasive. It should still work. >> So that's why this made us pretty happy: we can use the same imaging system that we use for the LFI to statically image a chip and verify our own components at home. And that was really cool for us. But why stop there? We've already covered statically imaging chips — when the chip is off, or
it's sitting there and you want to check what you got. What if it was possible to dynamically image a chip, i.e. while it's on? Can you pull memory with just light beams? >> Using just light beams, yes — but really the question was: can you use affordable tooling to dynamically image chips live and pull memory as they run? And that is where the field of laser logic state imaging, also known as LLSI, comes into play. For those of you who don't know what LLSI is, here's our very brief introduction. >> So laser logic state imaging is a technique that lets you image the chip dynamically while the system runs. So
basically, all the memory in the chip is made up of transistors, those transistors are arranged into gates, the gates are arranged into clusters, and some organization of those clusters makes up the ones and zeros of your RAM. Now, each chip is different, so it's not as straightforward as looking at one chip and reading the data off it. You have to pull out a similar chip, map that chip, and see where those regions and those bits are. And once you do, you can do side-channel analysis and extract the data from your target chip. And this is useful if
you're trying to get a key out of something, or if you're trying to figure out what the firmware is and reverse it. But you have to map it out first, and that's a lengthy process. >> That was a lot of words, so I'm going to use pictures. What does this mean visually? [laughter] All right. So what do we mean by this? Transistors are what hold memory, transistors are arranged into gates, and those clusters represent either a binary one or a binary zero. That is how memory becomes binary — that's literally how the silicon works.
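The cluster-to-bit idea just described can be sketched in a few lines. This is a toy illustration, not the speakers' tooling: the amplitude map stands in for a real LLSI capture, and the cell coordinates and threshold are made-up values.

```python
import numpy as np

def read_bits(amplitude_map, cell_coords, threshold):
    """Read one bit per known cell location from an LLSI amplitude map.

    amplitude_map : 2D array of per-pixel modulation amplitudes
    cell_coords   : list of (row, col, height, width) regions, one per bit
    threshold     : amplitude above which a cell is called a '1'
    """
    bits = []
    for r, c, h, w in cell_coords:
        cell = amplitude_map[r:r + h, c:c + w]
        # A cluster holding one state reflects the modulated supply more
        # strongly than the opposite state; compare the mean amplitude.
        bits.append(1 if cell.mean() > threshold else 0)
    return bits

# Toy example: two 2x2 cells, one strongly modulated ("1"), one weakly ("0")
amp = np.zeros((4, 8))
amp[0:2, 0:2] = 0.9
amp[0:2, 4:6] = 0.1
print(read_bits(amp, [(0, 0, 2, 2), (0, 4, 2, 2)], threshold=0.5))  # → [1, 0]
```

The hard part in practice is the mapping step the speakers describe: knowing which pixel regions correspond to which bits for this particular die.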
So, if you use LLSI to image a chip, you get a cluster of images that, when abstracted and represented correctly, let you pull actual memory out of the chip as it runs. >> And what these images represent is the interference from the electrical system being turned on across the transistors we're mapping. >> Skip ahead, sir. We haven't even got there yet. >> We'll get there. >> So how on earth do you do LLSI, and how is that even possible? >> So, as I was just saying, when voltage is applied to silicon, it changes the absorption and reflection rate of the silicon, so light passes through it differently. And there's a
technique in silicon design called electro-optical frequency mapping, or EOFM, which can be used to check that fluctuation. You use it as the chip goes through a state: you look at the transistors and see if they're flipping when they're supposed to be flipping. Well, someone a few years ago figured out that you don't have to actually change the state if you can modulate the power going into the chip. So rather than actually going from a one to a zero — say the chip's high voltage is 1.8 — you send 1.85 to 1.95 volts as a sinusoidal wave at around 300 kHz. That
modulates the absorption over the transistor so that the laser interference can be mapped by a receiver. Because remember, when we're imaging dynamic data, we actually don't want the data to change — because then what the heck are we imaging? We want the thing that we're imaging. So instead we pause the processor's instructions briefly, fire a voltage ripple through the chip, and then use a laser to measure that ripple, which tells us whether that specific area holds a one or a zero. And that is LLSI in a nutshell — totally hyper-simplifying here. All right. So, again,
operation overachiever: I just realized this is a 45-minute talk and we are not on time, so we're going to have to blast through these slides. But overachieving, because telling me and Larry that it's not possible to use affordable components to do something this crazy is the fastest way to make us do the thing. So, just like we did with LFI, can we use the same principles to make the RayV do LLSI? There are six major components for LLSI to work on the RayV. First, you need to be able to modulate the target. >> Then you need a laser that can actually
reflect off the silicon in a way you can read. >> You need a sensor to receive the laser you fire at it. >> You need to be able to pan over the system smoothly, with very fine precision. >> You need to be able to parse signal from noise, because there are going to be very slight variations, around one part per million. >> And then you need to be able to read the thing, because it doesn't matter if you fire a laser at it — if you can't read it, there's no point. So, [snorts] it turns out the RayV does a
lot of those things already. So, let's start with the laser, because that has to be different. The RayV previously used a 1064 nanometer laser for glitching. This graph represents silicon reflectivity at certain wavelengths, and as you can see, 1064 nanometers has a hot-garbage reflectivity rate. So, using this graph, we said: okay, we can swap out the laser module — what can we use? We chose 1300 nanometers because it has a far better reflectivity rate in silicon. Some of you may be wondering why specifically 1300, and that's because it's cheap as hell. It turns out the same wavelength is used for most communication systems —
fiber-optic cables — so these modules are very cheap. And what's beautiful about them is that because they're used for something like communications, they need a stable signal, which means there's a photodiode built in to measure how much light is actually being emitted. And we can use that, which we'll get to in a second. And the best part: just $6 for all of this. >> So, why is the diode important? Well, because of the magical world of self-mixing interferometry — I am the only person who can say this slide. [laughter] We can use the same photodiode that measures the laser output as the receiving sensor to
pick up the light we need in order to read the signal fired from the laser. >> Yeah. So as the beam bounces off a perpendicular surface, it travels all the way back to the diode, and we can read the resulting signal. So, for $6, we just fixed two problems. And because the RayV already has everything else, like the positioning system, if we put it all together, all we need to do is scan over the system with a modulated signal while it's in a paused state, and we can extract the data. And in fact, even if we can't pause the state, if we're looking at something like a
hard-coded key, that key is not going to change. So if you scan over the system and find something that doesn't change, that's probably important. So, does it work? Again, remember: we are trying to prove that you can use affordable components to achieve this type of research, and the answer is yes. In this demonstration we used a highly reflective surface to prove that you can use these components to do the thing. >> So we're using a function generator to modulate the light going into that laser assembly right there, with noise and signal filtration done with some op-amps and a high-
speed ADC. >> As you can see, the function generator here is sitting at a certain frequency, and we can also read it here on the oscilloscope, which is reading the data coming out of the photodiode on top of the laser. >> So, in other words, if we send the modulated signal that's meant to represent the change of state of the data we're trying to read, we are proving that we can actually read the thing. Do not look at me right now, because I'm firing a laser at your head. [laughter] This is the signal we're actually receiving. So — your battery is running low. Great.
Going to slide — >> I couldn't plug anything in over there. >> Okay, let me — >> Next slide. >> I'm trying. It's not working now.
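The function-generator-plus-photodiode demo just shown is essentially a lock-in measurement: mix the photodiode signal with a reference at the modulation frequency and average to dig a tiny ripple out of noise. A self-contained sketch of that idea follows — the 300 kHz figure comes from the talk, but the sample rate, amplitudes, and noise level are invented for illustration.

```python
import numpy as np

FS = 10_000_000       # sample rate, Hz (hypothetical)
F_MOD = 300_000       # supply modulation frequency, per the talk
t = np.arange(0, 0.001, 1 / FS)   # 1 ms capture, 300 full cycles

def lock_in_amplitude(signal, f_ref, fs):
    """Estimate the amplitude of the f_ref component of `signal` by mixing
    with quadrature references and averaging (a software lock-in — the
    same trick used to recover a very small ripple from broadband noise)."""
    tt = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * tt))
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * tt))
    return 2 * np.hypot(i, q)

rng = np.random.default_rng(0)
noise = rng.normal(0, 0.5, len(t))
# A "1" cell reflects more of the modulated light than a "0" cell.
one_cell = 0.20 * np.sin(2 * np.pi * F_MOD * t) + noise
zero_cell = 0.02 * np.sin(2 * np.pi * F_MOD * t) + noise

a1 = lock_in_amplitude(one_cell, F_MOD, FS)
a0 = lock_in_amplitude(zero_cell, F_MOD, FS)
print(a1 > a0)   # the stronger modulation is recoverable despite the noise
```

Averaging over many cycles is what buys the sensitivity; the one-part-per-million variations mentioned in the talk just mean real hardware needs far longer integration (and better filtering) than this toy.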
We're having some technical difficulties. >> Okay. >> All right. So, that's reflectivity — we proved we can do it. The last problem we have to fix is how to make this human-readable. We've achieved signal retrieval; how do we make it readable? And that's through artificial intelligence — where the irony is that >> you need an AI to make something easily readable by a human, which is kind of ironic. >> We're using a robot to make something human-readable. Now, full disclosure: I hate the term AI. I think it's a highly overloaded term
and it's often misused. So let's just call this what it is, and that is >> machine learning — >> analysis using convolutional neural networks. >> Yes. And don't believe the penguin — ChatGPT did nothing. >> All right. So, what does this mean? It means that Larry and I had to build a CNN that could parse LLSI images in order to read data. Some of you may be asking why we used a model instead of something else. >> Well, as we pointed out, every memory system is unique, and there can be differences between runs. So we need to be able to map out a chip and
have something that's automatically able to read it so that we don't have to. We can generate a map, but it's much easier if we have something to read the map for us. >> So, by training a model, we've effectively made our future problems simpler. >> And we'd probably jump off a cliff if we had to do this by hand more than once. >> Yeah. And a really quick thing here: deadlines are real, and we're really bad at them. This is really important, because when we created the model, we used a published data set instead of images from the RayV, and I'd like to be completely transparent about that.
And the reason why is that the process of designing a low-noise band-pass filter capable of doing this is, right now, incredibly slow. At the time we were publishing this research, combining that with an active LLSI setup — and real systems that do LLSI are also incredibly slow, because of the precision you need while scanning across the chip — >> which is slow squared. >> Slow squared. [snorts] So we used training data sets that represent the same type of data we expect to get at the end of this research, because we were trying to
prove that a model can be used to read the LLSI images. All right. So, a little bit about the data sets we used. This is a data set here, and this is chip memory from a processor — an MSP430. And if you look very hard and have very good eyes, you might be able to see a signal here where the pixels are slightly more white. But can anyone actually see the pattern? Can you read it? >> So, Larry skipped over a little bit. The training data set uses an MSP430 SRAM block, and that SRAM block
was programmed with 512 bits of known memory. So that's what was used for the training data set: 512 bits. You might be able to see little variations in the LLSI image that show where the data actually lives in this specific IC. Hands up — does everybody see what that is? >> Okay. >> Okay. >> Here, let me help. Does that help? Let me help more. Does that help more? Not really. But you can kind of see the point: we can see the data is there, but we don't actually know what to do with it.
>> So, our first step was to make it a little more readable to all of us. What we did was take two images that had different bit patterns, subtract one from the other, take the absolute difference, and put that into a different color channel — for example blue, because I'm color blind, but I can see blue a little bit better. So if we do that, can you see them better now? >> Now you can see where this SRAM block specifically stored its 512 bits of memory. >> But you still don't know which ones are ones and which are zeros. There's a pattern at each of those locations. >> Yeah. So you can see, but you can't
understand. So the next thing we needed to help the model do was identify where the 512 bits were stored, because obviously we're people, so we can see the thing, but we need the model to be able to see the thing. So how do we do that? Basically, we created a search grid. And the reason we created a search grid is — because our laptop is out. >> Oh, okay. Well, I guess we're done. >> Huh? Yeah, there's no power here. >> There's power over here, but I couldn't plug it in. >> So, I can do this from memory, but it's going to help people to see the
thing.
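The subtract-and-colorize trick just described can be sketched in a few lines of NumPy. The images here are synthetic stand-ins for two LLSI captures, not real data.

```python
import numpy as np

def diff_to_blue(img_a, img_b):
    """Highlight bit locations by absolute-differencing two LLSI captures
    (same region, different bit patterns) and placing the result in the
    blue channel of an RGB image. Inputs are 2D float arrays in [0, 1]."""
    diff = np.abs(img_a - img_b)                # changed pixels light up
    rgb = np.zeros(img_a.shape + (3,))
    rgb[..., 2] = diff / max(diff.max(), 1e-9)  # normalize into blue channel
    return rgb

# Toy 4x4 captures that differ at exactly one "bit cell"
a = np.full((4, 4), 0.5)
b = a.copy()
b[1, 2] = 0.9                                   # one bit flipped
out = diff_to_blue(a, b)
loc = tuple(int(i) for i in np.unravel_index(out[..., 2].argmax(), (4, 4)))
print(loc)  # → (1, 2): the flipped cell is the brightest blue pixel
```

Pixels that are identical in both captures cancel to zero; only the cells whose stored bit differs between the two patterns survive the subtraction, which is exactly what makes the 512 storage locations pop out.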
Oh — >> Did you not turn it on? >> I hit the button. [laughter] >> I promise you, we're hackers. >> Yeah, it's charging up. How long does this take? >> Not long. Go ahead and talk and I'll get it back on. >> Oh, now you're going to make me do this all from memory. Fine, I'll do it from memory. [laughter] Okay. So — >> but you need to be able to turn this thing on. I'll type the password when it comes up. >> Okay. So, if you remember the grid with the blue thing and the red stripes: basically, what we're doing is trying to isolate where the memory bits
are. Now, we can't just hand it to the model by itself, because of this thing called bit collisions. What that basically means is that if you have similar patterns inside the data, the model isn't going to be able to find it, because there will be too many false positives. So what we do is divide the image into quadrants, and for each of those quadrants >> [laughter] >> we run the search algorithm. We do this 512 times, once for each bit. We figure out which quadrants have the highest correlations, and where there's a high correlation, that's the area where the bit should be. And so we go through that process for
all of them, and we're able to isolate which quadrants map to which areas. And once we have that, we can generate a convolutional network that can read the data for us from the image. >> Okay, Larry, can you work on that? I'm going to have to do this manually. All right. So, can everybody hear me still? I'm going to have to yell. >> Okay. >> Larry, you covered — oh, this one — you covered search grids, right? >> Yeah. >> Did you do the — >> Start? Yes. >> I break things, so I don't turn them on.
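A rough sketch of the quadrant search described above. The grid size, the scoring (summed diff energy as a stand-in for the correlation the speakers mention), and the toy image are all assumptions for illustration.

```python
import numpy as np

def locate_bits(diff_img, grid=(8, 8), top_k=512):
    """Split an absolute-difference LLSI image into a search grid and rank
    cells by total response; the strongest cells are the likely bit sites.
    Returns (row, col) grid indices of the top_k cells."""
    gh, gw = grid
    h, w = diff_img.shape
    ch, cw = h // gh, w // gw
    scores = np.zeros(grid)
    for r in range(gh):
        for c in range(gw):
            cell = diff_img[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            scores[r, c] = cell.sum()   # simple proxy for correlation
    order = np.argsort(scores, axis=None)[::-1][:top_k]
    return [tuple(map(int, np.unravel_index(i, grid))) for i in order]

# Toy 32x32 diff image with responses planted in two grid cells
img = np.zeros((32, 32))
img[2, 2] = 1.0     # falls in grid cell (0, 0)
img[20, 30] = 1.0   # falls in grid cell (5, 7)
print(locate_bits(img, grid=(8, 8), top_k=2))
```

Searching per quadrant rather than over the whole image is what keeps repeated bit patterns (the "bit collisions" above) from flooding the search with false positives.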
>> How about now? >> Does it work now?
>> Did I fail an interview? [laughter] Is it all working? >> Can everybody hear me? >> Oh. >> Oh, perfect. >> Oh, there you go. >> Oh. Oh, >> working on it. Oh. Oh. >> Why has it done? >> Are you gonna You're You're expanding your screen. You want to make it um >> No. >> Well, extend.
>> This is too funny. This is >> We can hack lasers, but we can't get PowerPoint to work. >> That's right.
Okay, screw it. We are gonna do this.
All right, we are going to do this this way or not. F5. Shift F5. Shift F5. Come on. Shift F5. Okay — >> there you go. You didn't see anything. All right. You covered the search grid, right? >> Yeah. Sorry. >> Does everybody want us to go over that again, or are you good? >> You good? >> Okay. >> Perfect. All right. So, we're dividing by search grid. Now that the model knows how to look, we still need to find the right segment. Each segment has a number of bits, and the model needs to figure out which segment has the bits we actually
care about — i.e., part of the 512-bit memory we're looking for. And this is how we search for it. So I'm going to skip over this slide, because we totally didn't just waste ten minutes trying to work this out. But this slide is mostly covered by the next one, which is: if time isn't an issue, a really easy way of training a model is to take one data set and program everything with all ones, and then take another data set and program everything with all zeros, with the exception of one bit. And the idea here is that if the model were to find the segment where the
exception was, an absolute diff would show an observable change that represents where that data was stored in that segment. And so, if you did that 512 times, you would find 512 observable differences across the data set. Or, if you're really smart with maths and statistics, you can do it way more efficiently. But this is the really slow way that works. So what does this actually mean? This is what I really wanted the slides for. If you take this LLSI image and take this segment, then take a second LLSI image and take the same segment, and you do an
absolute diff between the two: if there is an observable change — i.e., the data differs between the two, a one versus a zero — it shows up at the identified location on the LLSI image. And the reason that's important is that the observable change is a representation of the location of a one or a zero — i.e., it trains the model on where the ones and zeros are. >> We're seeing acts. >> Yeah. No, this is actual real data — all of these images are real, from us working on it. >> It's the observable change. >> Yeah, it's an observable change. >> So, because remember, again:
transistors, gates, clusters represent a one and a zero. Right now, we're trying to train the model to find where those clusters are, because otherwise it's just random noise. Because the next step is, once we have an understanding of where all of the observable changes live — why is this turning off? >> I don't know — because you're using more power. >> Next time we use my laptop, mate. >> I wanted to use a laptop. >> Okay. >> All right.
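The all-zeros-except-one labeling trick can be sketched as follows. The `render` function is a hypothetical stand-in for capturing an LLSI image of a programmed SRAM block (each bit gets a fixed 2x2 cell of a 64x64 image); the real captures come from the scanner, but the localization logic is the same slow-but-works loop described above.

```python
import numpy as np

def render(bits, shape=(64, 64), cell=2):
    """Fake LLSI capture: a '1' bit raises local brightness in its cell."""
    img = np.full(shape, 0.2)
    per_row = shape[1] // cell
    for k, b in enumerate(bits):
        r, c = (k // per_row) * cell, (k % per_row) * cell
        img[r:r + cell, c:c + cell] += 0.5 * b
    return img

n_bits = 512
baseline = render(np.zeros(n_bits))         # everything programmed to zero
bit_location = {}
for k in range(n_bits):
    pattern = np.zeros(n_bits)
    pattern[k] = 1                          # all zeros, except bit k
    diff = np.abs(render(pattern) - baseline)
    # The single observable change marks where bit k lives on the die.
    bit_location[k] = tuple(int(v) for v in
                            np.unravel_index(diff.argmax(), diff.shape))

print(bit_location[0], bit_location[511])   # → (0, 0) (30, 62)
```

Running the loop 512 times yields a full bit-index-to-pixel map, which is exactly the label set needed to supervise the classifier later.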
>> Yeah, your laptop's off. All right, turn that back on. So, once we've figured out where the data is, the next step is to train on what a one and a zero look like, and that's through supervised learning. The slide we have for supervised learning literally just says we used supervised learning to help the model identify what a one and a zero are. Larry is completely distracted. And at the end of that, we have a successful model for finding where all the data bits are, and then supervised learning lets the model know which of those sectors were a one or a zero in relation to the
actual data set. The next slide after this proves that it works. >> Is it not going to give any power? >> We don't have any power to the laptop. >> Yeah, it's not been charging since the plane. >> Yeah. Proving where the data is — where we show you a >> fancy green-and-black terminal of us actually running the model against the data set, and it identifying eight bits — one byte — of the data that we had, and it was 96% >> 96% accurate, using a test data set of about one quarter of the actual samples
that we had, and it was able to reproduce the first eight bits of the entire test set. >> What I love is that this whole thing is recorded right now, so you're getting extra footage of us debugging poorly on the fly with a laptop that won't work. But hopefully the laptop does turn back on so we can at least show you the demo. That was technically the end of the talk, in the sense of proving that you can use affordable materials to do LFI, that you can use affordable materials to do LLSI, and that it is completely possible to create these systems at home. >> I am doing this completely from
memory — I hope you guys are appreciating this. Another slide I'd like to show you is that we do stand on the shoulders of many giants. We mentioned right at the beginning of this talk that we had to combine nine PhDs to do this. We have three special shout-outs to specific research topics that made this possible. The first is bunnie Huang's research on IRIS — this wouldn't exist if he had not done that. I definitely fanboyed over him a little when I met him in person; I was like, "Dude, that IRIS research was so cool." But bunnie Huang, thank you again: his IRIS research was instrumental to
our work. >> Another is Martin S. Kelly, who had a couple of different PhD papers on the amount of power actually required to conduct the glitch. That was very beneficial, because we were able to reduce our laser power. >> Oh yeah. And then the third is a thank-you to the OpenFlexure project. That is a citizen science project that was designed — click quickly. >> I'm going to show them the demo. [laughter] >> Here's the demo. >> Yeah. So this is the model actually working against the data set. We're pulling that data out of the LLSI image, and it's determining what those values are.
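For the curious, the bit classifier might look something like this in PyTorch. The talk doesn't give the architecture, so every layer choice here is an assumption; this sketch only checks tensor shapes on random data, and it is not the speakers' trained model.

```python
import torch
import torch.nn as nn

class BitClassifier(nn.Module):
    """Tiny CNN that classifies a cropped LLSI cell patch as bit 0 or 1."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1-channel patch
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 2),                           # logits: bit 0 / bit 1
        )

    def forward(self, x):
        return self.net(x)

model = BitClassifier()
# A batch of 512 cropped cell patches, e.g. 16x16 pixels each
patches = torch.randn(512, 1, 16, 16)
logits = model(patches)
bits = logits.argmax(dim=1)      # predicted bit per patch
print(logits.shape, bits.shape)  # → torch.Size([512, 2]) torch.Size([512])
```

Training would pair each cropped patch with the known bit from the programmed 512-bit pattern (standard supervised learning with cross-entropy); the 96% figure above is the speakers' result on their held-out quarter of the samples, not something this sketch reproduces.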
>> Yeah, we're showing one read here, but we did a lot of reads, and overall it was 96%. >> Yeah. And if we go back first — one thing to show you is why we used the model. So previously you saw the black-and-white images; those are special. I think we got feedback, so we'll walk backwards a little bit. Data set number one, on the left-hand side, is a different transistor setup. The one we created is on the right-hand side. So that is its representation of a zero, and that is this SRAM's representation of a one. So that is using the model to tailor it
for this specific IC. >> Okay. >> And then it was working. >> Okay. I'm clicking through this because we don't have a lot of time. >> We have time. >> Okay. So: building an LFI rig was possible for less than 500 USD. We can learn chip layouts through imaging, and introduce faults in embedded controllers. And because I won an argument — Larry wanted to make this a bench setup, and I said no, I wanted to throw it into a suitcase, perfect example — we could do this. We have a portable, movable, and affordable entry-level solution for LFI work and imaging work. And for those of you who want to do LLSI work, you can
get started. And so far, no one has complained about it to the TSA. >> Yes, so far we haven't been told off. So here it is — here are the references we used, for anybody who wants to look at the slides later. We have the OpenFlexure project, Martin S. Kelly, and bunnie Huang as our special shout-outs. So, some things that surprised us: I didn't know that "laser" was an acronym. You can judge me; I don't care. >> I didn't believe that hoarding parts forever would be useful, but it was. >> And not all data sheets are real. But the thing we really wanted to focus on is how powerful open-
sourcing and democratizing tooling is. We had a lot of questions about why we didn't sell this. And while we were doing this research, we couldn't believe that someone else hadn't done this before. And it turns out, exactly 24 hours before we got on stage at Black Hat, a LinkedIn message popped up from a random person I'd never spoken to before — and I'm going to butcher his name — Yan Tapen, I think, from Fractal. It turns out Fractal had developed their own under-€500 version of an LFI rig, and they were using it for their own purposes. They just hadn't open-sourced it, because it was commercially created by them. So, after some discussion right before Black
Hat, we agreed, and we coordinated the tools to be open-sourced at the same time. So now, a year later, there's not one, not two, but three different researchers who have come out with open-source versions of super ultra affordable tooling. >> The funny thing with ours is that another researcher ended up doing — what did he do? >> He did roughly the same thing, but didn't figure out the timing variance. So, he found a person in China to send him $50 three-watt IR laser modules — he got six of them for $300 — and built an entire rig out of it. And
he used that to do the Raspberry Pi RP2350 challenge and was able to actually crack the system. >> Yeah. And it was just really cool to see. And he didn't actually know we existed until after he published his research. And then someone reached out to him saying, "Hey, did you know about these guys?" And he's like, "Why didn't somebody tell me? This would have saved me so much time trying to figure this problem out." But — things that went horribly, horribly wrong. Supply chains and distributors were really difficult for us. Getting something reliably lasing, getting something reliably modulating — I cannot swear on stage because I'm being recorded, but it was a
lot. >> I would like to mention that I am color blind, and I feel like that graph over there is just calling me out. >> Yeah. So, this was a graph we received when we bought a laser module, and it was meant to tell us >> something. >> Something. If anybody knows what the heck it means, please let us know, because it wasted two and a half weeks of our time. >> Oh, it said follow the red line. >> Yeah — which red line? So: graphs that make no sense. And then, oh yeah, the cost of R&D. Just because it cost us less than $500 to make the final
product does not mean it cost us less than $500 during research. So this is a shout-out just to explain: it does cost money to do these things, but when you get there, it's worth it. So thank you. Next time you see the RayV — I do love saying that word — we want to increase the control and precision of the laser, and we also want to lower the noise floor. We want to make LLSI actually achievable. We want to combine the LFI and LLSI housings. And Chaz, because he's nuts, wants to somehow make this cheaper. >> I think he said he wanted to go under
100. >> Yeah. So, Chaz is currently working on version 2.0, where he's trying to get the whole thing under $100. We also want to contribute back to the OpenFlexure project and improve the 3D print, because OpenFlexure was designed for imaging microorganisms, and we want to create an LFI-specific version of it. But key takeaways: >> lasers are fun, but you need to wear protection; learning is no obstacle, 'cause we don't have it; >> and open source is king. So, thank you for painfully walking through this entire thing with us, with everything breaking >> and helping us debug our laptop. >> The QR code is 100% not malware, we
promise, but it is the GitHub project where we have published all of this online. If you want to build this yourself, if you want to find all of the components, if you want instructions on how to do it, it should all be on the GitHub project. >> If it's not, yell at Casey Rep. >> Yep. Casey Rep is our android who does that for us. And then, thank you. If you want to chat with us, do hallwaycon — we're around hacker summer camp. If any of you are Black Hat attendees, we also have a booth; feel free to try and find me. What you may not have seen is that there was a
Where's Waldo hidden on some of these slides, so extra bonus points for those of you who actually found Waldo. But thank you very much. That's all, folks. Thanks for listening, and remember: >> lasers are cool. >> Yeah. Jesus. [applause] [cheering] Jesus Christ. Any questions? >> Somebody has a question. >> Oh, the center mic. >> Hey, so you're primarily concerned with altering running memory for fault injection, and reading SRAM cells to, presumably, do things like pull encryption keys out of memory. What about flash? Things like that? >> Did you take this off? >> Okay. No, no, go for it. >> Sorry, I took this off by reflex. So,
yes, the laser can still affect flash, because it's still stored memory. It's not magnetic, so it's still transistor-based storage and such, I think. So, >> So for data recovery, this is potentially a... >> LLSI should be able to be used for data recovery. >> I don't think the ripple on the voltage would cause an issue as long as it's kept at safe levels, but I think it would work. Yes. >> Okay. Thanks. >> I've not played around with it, but I do believe it's possible. >> Fascinating work. >> Thank you.
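The "safe levels" caveat in that exchange comes down to simple arithmetic: a modulation ripple is fine as long as its worst-case excursion stays inside the rail's tolerance band. A minimal sketch (the 1.2 V ± 0.06 V figure is the JEDEC DDR4 VDD operating range; the ripple values are made up for illustration, not measurements from the talk):

```python
def ripple_within_tolerance(nominal_v: float, tolerance_v: float,
                            ripple_pp_v: float) -> bool:
    """Check whether a peak-to-peak ripple keeps a supply rail within
    nominal +/- tolerance, assuming the ripple is centered on nominal,
    so the worst-case excursion is half the peak-to-peak value."""
    worst_case_excursion = ripple_pp_v / 2.0
    return worst_case_excursion <= tolerance_v

# DDR4 VDD: 1.2 V nominal, +/- 0.06 V (5%) per JEDEC.
print(ripple_within_tolerance(1.2, 0.06, 0.050))  # 25 mV excursion: fits
print(ripple_within_tolerance(1.2, 0.06, 0.150))  # 75 mV excursion: exceeds 60 mV
```

The hypothetical helper just formalizes the speakers' intuition: a small LLSI-induced ripple stays inside the band, while a larger one would push the rail out of spec.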
>> Yeah, this is amazing work. Just a comment, I guess: there was a paper I saw a while back about, in the spirit of everything being crazy, using liquid nitrogen to remove RAM from a computer while it was running. The liquid nitrogen allowed it to, I guess, remain charged in some way, and then they were able to extract data from it later. That would be a really cool thing to see demoed with this. >> The nitrogen thing, is that what you [clears throat] were talking about with me at 2 o'clock in the morning? Was it that PhD paper? >> I can't remember.
>> They used nitrogen to freeze the memory instead. >> I mean, that's just a cold boot attack, but I don't remember one with lasers in it. >> What he's referring to is that there was an attempt to use liquid nitrogen to reduce the self-discharge rate of the cells long enough that the memory could be moved into a forensic rig. >> Okay. Oh, here we go. Are you coming up? No? Okay. Yes, that's another style of attack, definitely, >> and it probably would work for the actual extraction. >> The only problem with liquid nitrogen in that context is that the purpose of this project was to prove that you could do stuff at home,
and the most dangerous thing about this was the lasers. The other problem is that with the liquid nitrogen, you still have to power up the chip again. And depending on what type of memory it is, like if it's DIMM memory or something like that, it has to be refreshed. So, while you can pull the clock signal down, which causes it to self-refresh, I'm worried about what would happen to the voltage of the RAM as you modulate it, because the RAM on a DIMM runs at about 1.2 volts max, and it doesn't have much tolerance above that, because of static-discharge kind of stuff. So, I'm a little worried
what the voltage ripple from LLSI would do to a live DDR3 or DDR4 DRAM. It's possible, but I would think that, between that and the voltage going in, it would probably... >> We're out of time for questions. So, we're basically done. Yeah. >> Sounds good. >> Okay. Sorry. >> Well, thank you very much for attending our... Oh, wait, wait, we've got... okay, one more. Okay, good. Quick, fast. [laughter] >> Um, I just wanted to ask if you guys have considered, like, flight schools for the lasers. A lot of Redbirds have the same setup on their full-motion rigs, like a safety measure. I don't know if it's exact, but I used to build simulators for Redbird and Cessna
and Frasca, and it looked similar. I'm not certain, but a lot of flight schools might have old parts. I got a bunch of that stuff for like 200 bucks. It was like, >> Hey, that's cool. >> Yeah, you might want to consider that. >> Yeah, definitely. Thank you. Thank you very much. This is why open source is cool, guys. All right. Thank you very much. Let's help these guys pack up. [applause] Sorry for going over time.