
We're getting the giant hook to get us out of the room. Are there any burning questions before we get out of here? One small one regarding the password-sharing part of the CFAA. I know there are very often password dumps online where a database is hacked and the usernames and password hashes are dumped. I know it's a bit of a contentious area regarding trafficking passwords online. My perspective as a security researcher and, you know, a commercial pen tester is: if there is a public dump of passwords, I have to assume every bad actor out there has those passwords. As a job-related issue, I need to acquire my own copy of that password database dump so I can then test against a target
website that I'm hired to go after: you know, are they still using the same credentials, the same username and the same password as in this hacked database? I mean, is there an issue with commercial pen testers doing that? So 1030(a)(6), I would just point to the words "with intent to defraud." That's required for there to be a criminal violation of the password trafficking statute. So what you described there does not sound like "with intent to defraud." Sure.
Authorization? Yeah, I mean, just factually, yeah.
I think that's damage to a computer. I think it's a CFAA violation, yeah. If that mic is Wi-Fi, that's CFAA.
What do you mean by reform? Not a lawyer. It depends. It depends.
And quick show of hands, who believes that CFAA reform is a realistic possibility in the next three years? Three years? Three years. I'm an optimist. Yeah. It depends on the elections. It depends. Yeah.
I need to... I need to contact them.
It's titled "Why Can't We Be Friends?" I actually had a chance to go several weeks ago to the US Cyber Camp. I was there for several hours, and it's a good perspective. But yeah. I'm here tomorrow, doing the "Why Can't We Be Friends" talk. Good. Yeah, I mean, we're trying to. Some of this is about, I mean, so. I know. The way I see it is a lot of one-eyed kings. Well, yeah. Well, George R.R. Martin: no one's a villain in his own story. So I think people, out of frustration, sometimes don't understand what we're doing or whether we're responsible or something like that. And the problem is it doesn't matter, because the message is still chilling people.
Oh, hi! Right, right.
Not everyone heard about that. What time is it at this point? It's 2:30. I've got to get moving, yeah. Yeah. And trying to find the best place.
in hard form.
So I'm speaking at two. So I need to find my co-speaker. I'm going to sign the same thing. So here's your phone. No, I have to say, I think he is a... If you're coming with me and you don't, you're not going to do it. So if you're coming to Mandalay Bay with me, then we'll go. Because the entrance to the venue is separate from where we're actually going.
Wednesday? 6:15 on Wednesday. 6:15 or 5:05. Where? It's called... I mean, I get that. I think we already have a problem. Can I bring a high baseline? Last night, we went to Parkinson's.
I was like, holding a sleeve. It was just...
Alright, I will see you at the end. Yeah, given your own three, how we travel to the world, the first time. I want to ask, what do you do tonight? I'm not going to be the first time. Yeah, I hate to be like, we'll be talking to the same. Where? I don't know. I hate to be those good places. Pleasure. Yeah, it was good. It was really good. Thank you very much. I think it went pretty well.
We weren't too bad with our timing. We had a few things people wanted to ask us, so I think we were
good. Thank you again. Thank you so much for having me. I appreciate it. Petri, Petri. If you're going to come to the party, do you want to bring us some? They're hard to come by, I suppose. Yeah, I think it's hilarious. Just go
along. I think it's working. Yeah, you can just switch places around, and then, and then I'll just
I know.
I'm a new moose. I'm not the half day old, so I'm not here. So I'm doing well. I work with Grant. This is a great panel. I really enjoyed this talk. I like how you had the presentation put together. Yeah, the format was a little... Very interesting and engaging. People sing lots of very fine girls, which is a good song. So... Well, thanks for putting this together. You know Tom. Oh, I do know Tom. Do you... Chris is a much better entrepreneur.
So you should . What are you doing ? The framework. Yeah, yeah, so . Yeah, you should definitely . Yeah, that's . I'll take you over . Let me just
Thanks, Mr. Bailey. Really appreciate it; it was really interesting.
Are you sticking around or are you flying right back? Yeah, you're flying right back. Yeah, I have a two-year-old. This is actually the best backpack I've ever had. I've got everything in here. And it's all exactly where I need to have it. So, eBags, man, eBags. eBags? Yeah. Okay. All right, cool. Thank you. Sure. What about the terms of time? To say, how do you do it? There's a very different place. It's a lot of fun. I don't know. I don't know. So, that I would... I don't want to offend you guys. I'm friendly fire.
Okay. Okay.
Yeah.
So you can count since you still feel like it's hard to work.
Yeah.
Thank you.
I'll see you next time.
United States.
[Sound check.]
Sounds a lot better, but it was still going in and out. I think it was that pod. I'm not sure what it is. Check one, two. You got five gain on that now? Yeah. Tried to set it down here to zero dB. It was at plus 10.
Check, one, two. Hold on, let me check. One, two, three. Yeah, it's either... see right here? It's like a pad or something, but it goes down to minus 10.
So it's zero, but then it goes up to plus 15.
It goes from plus 15 to zero. All right, let's go to zero. There we go. Check, check. One, two, three.
Check, check. One, two, three, four, five, six, seven. That can't be right.
Channel's shaky. Check, one, two, three, four, five. That's good, right? Check, check, check.
Test. Wonderful. Then how are we hooking up to video? I have an adapter right here. Yeah, that's his. I'll make sure it goes back to you, I promise.
So I'll give you... I'll introduce you, tell people what talk they're in, and then when you're 10 minutes from the end, I'll show you the flashy slides. When I say "stop now," stop; that gives time for Q&A. Okay? Perfect. And then where are we? I don't see the VGA. Oh, am I missing it? Ah, there it is. Cool, thank you.
Perfect. You can run the slides too. I'll give you the clicker.
Hold on, let me try again. Do you need markers for the board or anything? I don't think so. No, thank you. We just got to hope and pray this works.
Try the other port.
first of all, start off by thanking our sponsors: VerSprite, Protiviti, Tenable, Amazon, and Source of Knowledge. It's my pleasure to introduce Joel Cardella. He has over 24 years of experience in IT, ranging from network operations and sales engagement to information security. Before working at Rapid7, he worked in multiple verticals, including telecom, healthcare, and manufacturing. Joel will be speaking to us about "Welcome to the World of Yesterday, Tomorrow." Thanks, Joel. And none of that matters. Thank you very much. I do not purport to be an authority on anything, so let's frame the discussion there. My name is Joel Cardella. I work for Rapid7 Global Services. I want to talk to you today about some things that I'm calling part of a storytelling series. And our
story begins in 1986. On January 28th, 1986, at 11:38 a.m., the Space Shuttle Challenger lifted off the launch pad at Cape Canaveral. This was the 25th mission of the shuttle series, and it was a very special mission for some very special reasons that we're going to talk about and find out about. 73 seconds after liftoff, the right solid rocket booster dislodged. There was an explosion, and three minutes later, all hands were lost.
What went wrong? NASA had a very specific plan to create a program of reusability. That program included managing the risk of human lives in space. This is the 30th anniversary of STS-51L, which is the Space Shuttle Challenger. This is all about managing risk, and that's what the topic is today. I'm going to illustrate for you the shuttle program, what happened in the shuttle program, and how NASA, who has a very rigid risk management program, actually failed themselves by ignoring some of the rules and things that they had set in place specifically to deal with human lives. And we'll have to talk about the shuttle program. We'll have to kind of go back in time and look at the
things they did, and we're going to learn some names and things. And I'm going to throw a lot of names at you, but there's really only a couple of names I want you to focus on, and we'll talk about that when we get there. But really, again, this is a talk about managing risk. We want to learn from the lessons of our past so we don't repeat them. So again, we don't want the world of yesterday tomorrow. We really like that the other way around. So, the shuttle program. When the shuttle program was officially launched, it was known as the Space Transportation System, STS. It was a U.S. government manned launch program. It ran from 1981
to 2011. It was designed to be reusable, and that's really, really important. Because prior to that, we did not have any kind of reusability. We had rockets that we would build. We would send them into space, the rockets would explode, and the money would be lost. That would be it. So we started talking about how can we get something that's recoverable so we can actually make space travel economical and affordable. We can send things up. We can bring them down. That's what SpaceX is doing today. When they're launching their rockets out into space and then bringing them back down in that amazing platform on rough seas, the Dragon X, the most amazing video I think I've ever seen in my lifetime, and I've seen all this stuff. I was
alive when they were launching the space shuttles, which was the most amazing thing. When we can get to the point where we can recycle this stuff, we get to this point where, wow, economics doesn't become such a huge factor, and we can actually do something with this technology. We can make it into something. That's really exciting, and that's what the driver was for the space shuttle program. There were 135 missions flown between 1981 and 2011, and those missions were things like carrying payloads, doing crew rotations for space stations, and recovery of satellites. We would not have the Hubble today were it not for the shuttle missions. We would not have the docking ports on
the space station Mir, or the International Space Station if it wasn't for the shuttle program. The shuttle program enabled all of these other programs to happen. So it was a very important part of space travel history. To date, it is the only manned spacecraft that has orbited and landed, which is pretty exciting. Now, the shuttle program itself, NASA asked for up to $14 billion for STS. Congress approved $5.5 billion. What do you think they did? Well, they took the money, right? But by taking the money, they had to cut corners, absolutely. They had to look at what they had laid out as a forecast to say this is a $14 billion program, we're only getting five and a half.
How can we manage this to make it work? Because they needed to make it work. So let's go back in time a little bit to look at why. We have to understand history. The Apollo program was the manned space flight program, 1967. It was conceived during the Eisenhower administration. It was a follow-up to Project Mercury. Little tidbit for you there. The Mercury capsule could only hold one astronaut, and that was a problem, because with only one astronaut, that astronaut has to be everything. He has to be the pilot, he's got to be the scientist, he's got to be the whatever, right? All these things in one. So the Apollo program then was created to allow three people to fly into space. Now we've got some options. We've got
one person who's the pilot, and then we can have mission specialists in the other parts of the program. Now we can start doing some things. We can do science, right? We can do experimentation, what have you. The goal that John F. Kennedy set out was to eventually put a man on the moon. Okay? So landing men on the moon by the end of 1969 required really the most sudden burst of technological creativity we've ever seen. Twenty-four billion dollars was committed to the Apollo program.
400,000 people were employed directly or indirectly as a result of the Apollo program, supported by 20,000 industrial firms and universities. A massive undertaking, all under the auspice of putting a person on the moon. So the Apollo program, especially for the time: $24 billion in 1966, 1967, this is a tremendous amount of money. And so people are not sure if we should be doing this. What's the goal? What's the point? Why would we do this? And all these questions are being asked, but we're pursuing it. In January 1967, we had the Apollo 1 disaster, where Grissom, White, and Chaffee died on the launch pad during a test when the capsule caught fire, and they did not have a proper escape route for the
capsule. Right? In March of 1969,
Apollo 9, which was the first manned space flight test of the lunar module, happens. And then on July 20th, a few months later, Apollo 11 lands men on the moon. So between 1967 and 1969, we learned enough about what we needed to do to never lose human life again. Three astronauts lost their lives in 1967. Remember that. That's an important point. But then two years later, we made the moon landing. We did what we needed to do. So, tremendous success. This was an amazing accomplishment for the space program. And they completed it in a compressed time frame, which means, as far as risk management goes, they had to make sure that they incorporated all of the
elements of risk to be able to do this, especially in a compressed time frame. So the Apollo program, massively successful. However, massive success sometimes sets you up for failure because now you have to live up to your past. This is what happens to us today. When we get caught in these same kinds of cycles in InfoSec and we have things that are massively successful, products are deployed, companies grow, and they do these things and they go, it's never happened before, why would it happen again? We sort of become victims of the success of our past. It's really important to understand Apollo is really successful. shuttle program comes in, when STS starts its stuff, they have to build on the successes of the past, which
immediately means what? Political pressure. We've got massive amounts of political pressure, but NASA's strong, and NASA will persevere. One of the biggest pieces of political pressure they had on shuttle mission 25 was this woman. Anybody know who this is? That's Christa McAuliffe. Christa McAuliffe is significant because she was to be the first civilian in space. Not just the first civilian in space, the first teacher in space. She was chosen from 16,000 applicants to go through astronaut training, train with astronauts, fly in the space shuttle program, and deliver a lesson from space, which was an amazing thing. The thing about Christa McAuliffe, though, especially if you're younger and you weren't around, is she was a media darling. She was genuinely
charismatic. She was real. She was somebody people could relate to. Now, with Christa McAuliffe going to space, we can all go into space. This is like the coolest thing ever. So there was a tremendous amount of media coverage and attention around Christa McAuliffe because she was such a media darling. She was on all the major media programs: Good Morning America, CBS Morning News, the Today Show. That's when we only had three channels, guys. The Tonight Show, I mean, all this stuff. She was out there and everybody loved her. Everybody loved Christa McAuliffe. So let's talk about some of the technical things that actually happened that led to the failure. Remembering we've got these other things
in play and they all kind of lead together. So what really happened was there's these O-rings that sit inside the solid rocket booster. These O-rings are about 12 feet in diameter and what they do is they feed through the solid rocket booster motor joints to allow flexing when the shuttle is being launched, okay? Because things like air pressure and gravity are causing these stresses on this vehicle as it's going into orbit, right? There's this flexing that needs to happen. As heat causes the gaps to widen, the O-rings have to pop back into the gaps and make the seal in milliseconds. So as a flex happened, a gap occurs, the rubber in the O-ring has to expand to fill the gap. If it doesn't, hot gases will escape. If
those hot gases escape, it's very, very likely those hot gases catch fire. If they catch fire, that means an explosion, especially because the solid rocket booster is right up against that main fuel tank, the solid fuel tank. Right? Does that make sense? Everybody got that? Okay. So here's the issue. When they launched from Cape Canaveral, Florida that day, it was 29 degrees in the morning. Here are some photographs from the actual launch pad that morning. These are icicles, and this is southern Florida. This is very unusual. This is a year that the orange crops were completely decimated because the cold was so severe and so bad. All right? So we have this issue with cold. Do you know what happens to rubber when it's very cold?
It stiffens, it doesn't flex, and problems occur. Okay? Now, we are not a group of NASA engineers. Any NASA engineers in here? No offense. Okay. Well, at least one. We are not a group of NASA engineers, but we understand cold affects the properties of rubber. The question is, if we know this and it's sort of common knowledge, why do you think this wouldn't occur to the people running this program?
So what happened is the O-ring failed. There was a small escape of hot gas. The hot gases caught fire. There was an explosion. The explosion is not, by the way, what killed the astronauts. A lot of people think that it was. What the explosion did is it dislodged the main cabin that housed the astronauts, and they fell 30,000, 40,000 feet, hitting the ocean at a speed of about 300 miles an hour. That took about three minutes of free fall, and that's what they think actually killed the crew. It wasn't the explosion. Why didn't anybody anticipate this scenario? Why do you think? We know that rubber doesn't work in the cold. Why do you think nobody anticipated this? Somebody
just said it. Who said they did? Because you know what? They did. They absolutely did. They knew it would happen. Now, I'm telling you this in retrospect. They knew it would happen because we now know they knew it would happen. But the steps that it took for us to figure out why they knew it would happen are all the failures that I'm talking about in managing risk. That's what's going to come out here. That's what's part of this story. Very specifically, written documentation,
showed that they had proof that these problems happened with their O-rings, which we'll talk about in a little bit. So this man up here is George Hardy. You don't have to remember these names, I'm just pointing them out. George Hardy, and this is Larry Mulloy. These were people in NASA who were part of what was going on. When the engineers at a contractor called Thiokol went to their superiors and said, we are very, very concerned about the shuttle launch. It's too cold, and the O-rings will not perform if it's under 53 degrees. We think you should scrub the launch. This is the contractor of the solid rocket booster. They go to their management and say, we have a problem. The management goes, well, what's the
problem? It's these tolerances, these temperatures. We have an issue. We have to make sure that they don't launch. So they convene at about 6 p.m. the night before the launch. The management calls up NASA. They go, NASA, we don't think you should launch. NASA's like, what? These are some quotes: "I am appalled. I am appalled by your recommendation not to launch." "When do you want me to launch? Next April?" Here's what you have to understand. This particular shuttle mission, which all eyes are on because Christa McAuliffe is the first civilian in space, has already gone through three scrubs. They've had three times where they were going to go to launch, and they didn't. It also is
the exact same day as the State of the Union speech by the President. Now, there was some research done to see if any of these things are related, and there wasn't any concrete evidence found that the political pressure was so great that they had to launch on that day so the President could talk to Christa McAuliffe from space. But anecdotally, it's probably there. And there are issues with that that we'll look at here in a minute. Right? So all this comes out later, by the way, through this investigative body that we call the Rogers Commission, chaired by Chairman Rogers. Okay? So this is the Rogers Commission, made up of some pretty significant people. We've got Sally Ride, the first American woman
in space. We've got Neil Armstrong, the first man on the moon. We've got Chuck Yeager, the first man to break the sound barrier. We've got Richard Feynman, a Nobel Prize winning physicist. Richard Feynman is going to be the focus of the rest of what I talk about because he's a fascinating individual that really went outside the realm of what he was tasked to do to figure out what these problems were. And I credit him for finding these issues with risk management and really bringing them to light. But suffice to say, these are important people. The two people that I want you to remember, though, are Richard Feynman, who is a Nobel Prize winning physicist on
the panel, and this is Donald Kutyna. He's a general, well entrenched in NASA, really knows all about the space program. He was a fighter pilot. He was involved in many, many, many aspects. It also turns out that he was super politically savvy and a puppet master. He is the reason that we know today what went on and why it went on. We also have a special guest that I'll talk about later. Now, one thing you have to understand is I'm a thespian. We have this thing in theater where we draw the curtain back slowly. I'm not going to tell you who this is until the end, because that's all suspense and drama that I get
to build up. That's good for me. Feynman, or "Fine-man," however you want to say it, gets this job because he has this inquisitive nature. They call him up and they say, we want you to do this job. He goes, I'm a physicist. I'm not a politician. I don't want to deal with this. This is ridiculous. His wife, Gweneth, says, all right, look. If you don't do this, here's what's going to happen. They're going to get 12 people. These 12 people are all going to walk around in a group, doing things and seeing things, seeing the same things and coming to the same conclusions. If they pick you, they're going to have 11 people going around doing the same things, coming to
the same conclusions, and one guy running around figuring everything out. He's like, yeah, you know what, you're right. That was Richard Feynman. He was the guy running around figuring everything out. Chairman Rogers even told him more than once, you're a pain in the ass, stop it. You're causing problems. They actually wanted him to stop the investigative efforts he was doing. Feynman was never really sure where his line was, so he just started crossing them. He's like, all right, well, if you're not going to officially tell me I can't do this, I'm just going to go to it. He had somewhat of a shaky start. People were upset with Richard Feynman because he was asking questions.
And people don't like to be asked questions, especially when they're put on the spot. Now, think about it. We've had a shuttle disaster. It's a national disaster. We've had all this media attention around Christa McAuliffe. It's a big deal. All eyes are on what's happening in the Rogers Commission. People are very, very, very nervous, and some crazy guy is asking them a bunch of questions. What's their first inclination?
Feed as little information as possible. Kind of like working with an auditor. When you work with an auditor, you answer the questions the auditor asks and you don't volunteer more information. Now, I love auditors. I volunteer more information. I am a completely different individual, because I understand what that gets us in terms of helping us go forward. So did Richard Feynman. I like to think I'm a little bit like him, without the brains and stuff. Really, Feynman's big disappointment here is he doesn't like the sterile conditions of the testimony, because what's happening in these congressional hearings is they're pulling people up. They're sitting in front of a congressional inquiry, with the Rogers Commission there and senators, and they're testifying in front of a microphone, you know,
similar to some things we've seen in this political season, right? That's exactly the way it's going. And Feynman's going nuts because he's not a politician. He's like, no, I want to talk to the engineers. And he gets to talk to the engineers once or twice and they start telling him stuff. And he's like, yeah, I'm really excited by this because he's a technical guy. Even though he doesn't understand, you know, space travel and the space program, he gets to talk to them about their technical problems. And he's like, yeah, tell me about your problems. And he really starts to figure out what these root indicators are that wind up being the total problem at the
end, but only when they let him talk to the subject matter experts. That's one of our lessons here, is we have to listen to our subject matter experts, especially when we're dealing with risk. You want to go to the people who understand most about what that risk is to get the information to make a decision. That's exactly what Feynman's doing. He starts meeting with engineers, he's making these small discoveries, and he's figuring out that these are smart enough people that they should have known that these rubber seals were a problem. Turns out they did, and they have written documentation to show it. But this is what he's suspecting at this time. He's running around, he's like, yes, I can see, you know what you're talking about, you have
all this great knowledge. So really, what's the deal? He makes the first of several critical discoveries: the technical people knew what was going on, but there was no communication. No one had discussed the problems between flights. Every single shuttle mission, 24 prior shuttle missions, showed problems with the O-rings. They have a thing in space travel where they do a flight safety prep check. They do a check before the launch, they do a check after the launch. And they followed their procedures to the letter. Every single one of these checks showed problems with the O-rings for 24 missions. But they lucked out 24 times and it never failed. It was never talked about between flights. And he went, why? Why did you not talk about this between flights?
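To make that pattern concrete, here's a minimal sketch, my illustration rather than anything NASA or Feynman actually ran, and with made-up mission data, of the kind of cross-flight check that was missing: flag anomalies that keep recurring across missions even though no individual mission ever failed.

```python
from collections import Counter

def recurring_anomalies(mission_reports, min_missions=2):
    """Return anomalies observed in at least `min_missions` different missions."""
    counts = Counter()
    for report in mission_reports:
        # Count each anomaly once per mission, even if it was logged repeatedly.
        for anomaly in set(report["anomalies"]):
            counts[anomaly] += 1
    return {name: n for name, n in counts.items() if n >= min_missions}

# Hypothetical post-flight reports: no flight failed, but one issue keeps recurring.
reports = [
    {"mission": "STS-1", "anomalies": ["o-ring erosion"]},
    {"mission": "STS-2", "anomalies": ["o-ring erosion", "tile damage"]},
    {"mission": "STS-3", "anomalies": ["o-ring erosion"]},
]

print(recurring_anomalies(reports))  # {'o-ring erosion': 3}
```

The point is the process, not the code: the data to spot the trend existed after every flight; nothing aggregated it across flights.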
And the answer he got back was, Because there wasn't a failure. What was there to talk about? Right? Has this happened to you? Have you had issues where things go wrong but they're not talked about until something goes wrong and causes you to talk about it? Why would we do that? This is 30 years ago. Why are we still doing that? That makes no sense. Right? We want to be able to have these conversations. If you're managing risk, risk is all about communication. You have to establish those lines of communication and talk about those things, especially when problems appear, so you can discuss why the problems are problems, which is what they should have been
doing here. The reports that they dug up actually mentioned the joint seal as being most critical to operations. So what that O-ring was sealing up, it's the most critical piece of the flight. The report also says that safe flying can continue if they pass their checklist. And Feynman goes, wait, if we have something that we rate as absolutely critical, why would it not be absolutely critical? Why would we say it's okay to fly? It shouldn't be. Every one of these other 24 reports said we had failures. If it's critical and it's a failure, you don't fly. That's his conclusion. Why are we flying? It's a good question. He also makes another critical discovery when he looks into the computer simulations.
They had poor risk tolerance. So here's what happens.
There are a few people who are making decisions, and they're making decisions based on their available information. And what they did is they said: okay, if we have a set of conditions that, when they occur, cause us not to fly, how can we reduce the expectation of the conditions to get us to fly? So effectively, what they did is they took the risk tolerance, which they already had here, that said we will not lose a human life, which came directly out of the Apollo program. Remember, we've lost three astronauts. We will not do anything to lose a human life. The actual quote is something like: if any party disagrees that this is a problem and human life is at risk, we don't fly.
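That rule can almost be written down verbatim. A minimal sketch of it, my illustration rather than NASA's actual launch-commit logic, using the 53-degree Thiokol figure from earlier:

```python
def go_for_launch(temp_f, dissents, min_temp_f=53):
    """Strict rule: launch only if conditions meet the qualified criteria
    AND no party has raised a life-safety objection."""
    return temp_f >= min_temp_f and not dissents

# Challenger morning: 29 degrees F, with the Thiokol engineers objecting.
print(go_for_launch(29, dissents=["Thiokol engineers"]))  # False -> scrub

# "Shrinking the criteria": lower the temperature bar and wave off the dissent,
# and the same morning becomes a go. The rule didn't change; the inputs did.
print(go_for_launch(29, dissents=[], min_temp_f=20))      # True -> launch
```

Written this way, the failure mode is visible: nobody rewrote the rule, they just kept relaxing the parameters until it returned the answer they wanted.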
But at NASA, they decided to shrink their criteria. So their risk tolerance got looser and looser until they allowed something to happen. Have you encountered this before? Perhaps in your jobs, where we have a set of criteria, and we say these criteria exist for this reason, and if we work within these criteria we know what happens; and when the criteria change, all of a sudden we get unpredictable results. That's risk management. The same thing was happening at NASA, but it's a little bit more critical because they're dealing with human lives. Maybe in InfoSec we're not dealing with human lives all the time, but we've got medical IoT now. We should be paying attention
to these same lessons. We should be learning from this. So Feynman, he's aghast at these discoveries. Like, he's just, his mind is blown. The other people are politicians. They're like, yeah, it happens all the time, right? That's the way it works, right? So he's going, no, no, no, no. So he's looking like, how can we assess this? How can we do this? So he goes to the National Air and Space Museum. And at this point, he's really down on NASA. He's like, I can't believe these guys let this happen. And that was kind of a quote. They let this happen. He goes to the National Air and Space Museum, and the director of NASA brings him through there and shows him a film on what it took to
actually get the space program going. And something clicked with him, and he went, wow, I can't believe this many people were involved in this massive effort, put all this time, money, and energy into it, and it failed. He goes, I can't believe that. And he changed his mindset from being anti-NASA to being pro-NASA: you guys are awesome, and you do awesome things, so let's find out why you had this failure. That's what I'm suggesting to you. If you're in a situation where you're facing some real negativity, and you're really up against these forces that are causing you that negativity, understand the drivers behind those forces. Try to change your mindset. It will help you when you're assessing that risk because it will help you with
things like that risk tolerance, because maybe you can reduce the tolerance a little bit, just not to the level that you're being asked to. That's working. That's wiggle room. That's you negotiating, and that's useful, and that helps. It helped Feynman a lot. This is the cool part. There's a key moment. General Kutyna, remember I told you about General Kutyna. He has Richard Feynman over for dinner. He's talking about stuff, and he brings him to the garage, and he's showing him his 1973 Opel. Feynman knows nothing about cars and couldn't care less. Kutyna's going, yeah, this is my car, I've been working on it, and oh, that's the carburetor over there. You know what's funny? I've been working on the carburetor and I noticed something: when it's really,
really cold, the seals in the carburetor, they don't work right. What do you think happens to seals when it gets really cold? And Feynman goes, well, they don't work. Aha. And he has an aha moment. This was absolutely orchestrated by Kutyna. Feynman later says in his memoirs, I'm pretty sure somebody told him that this was the problem and directed me toward it in his way. In 2012, we find out he's absolutely correct. That is exactly what happened, right? And I will talk about how that happened, because it's brilliant. So the engineers' concerns are starting to come to light. So here's the thing. This is Allan McDonald. He was one of the chief engineers on the NASA side, I believe. He comes to a public meeting uninvited.
So they're having these hearings, these Rogers Commission hearings, and this guy walks up with his engineers and sits down. No, I'm sorry, Allan McDonald worked for Thiokol. He wasn't NASA, he was Thiokol, the solid rocket booster makers. And he sits down and they're like, well, who are you? And he's like, I'm with Thiokol. And they're like, well, why are you here? He goes, because I'd like to offer testimony. And here's what he said: we recommended to NASA that they not fly under 53 degrees. This is shocking to the commission. They had not heard this before. They'd heard all this testimony and nobody had said that there was a recommendation not to launch. And they
said, well, is that true? And they said, yes. We reversed ourselves under pressure from NASA, which is exactly what happened. The Thiokol management went to NASA. They said, guys, don't launch. NASA said, when are we supposed to launch? Thiokol went, okay, you're right. Internally, they had some problems where Thiokol said to the engineers, take off your engineering hat for a minute and put on your management hat so we can figure this out, which means we disregarded our SMEs. We disregarded the knowledge that they were giving us. That's effectively what happened, but the commission had never heard this. This is the first time they're hearing this, and they are genuinely shocked. Feynman is pretty convinced that he knew that they had a temperature problem with the O-rings by
this time. He's like, all evidence points to the fact that they knew. But he has to figure out how to show that they knew without having documentation that officially proves that. So, so far, we know that there's problems with the seals that were not properly communicated out. That's the suspicion. We're trying to prove our theory. We know we have problems with Morton Thiokol management bowing to pressure, and we know NASA is accepting risk beyond their tolerance. It's a recipe for disaster at this point. We know this, but again, we only know this in hindsight. That's the problem with risk management. When we manage risk, we manage risk in the moment. it's difficult for us to manage risk for the future because the conditions of the future change
so much. Which means when you're managing risk and you have constraints, you need to stick to those constraints as closely as you possibly can so the outcome becomes what you want the outcome to be. Does that make sense? Okay. NASA's not doing that. All right. So, Feynman's looking for better answers. So, Feynman does something that is a pivotal moment in the discovery. He stages an experiment. And what he does is he gets a glass of ice water, and he's got an O-ring that he's pried off one of the shuttle models they're using in the commission hearing as people talk about what's where. He drops this O-ring in the glass of water. Kutyna is sitting next to him and sees what he's doing.
After about a minute, Feynman reaches for his mic and Kutyna goes, no, not yet. Feynman's like, okay. He lets another minute go by, he reaches for his mic, Kutyna goes, no, no, no, hold on, not yet. Another minute passes, he reaches, and he goes, hold on. He goes, when he gets to the testimony where he's saying this, that's when you do it. Feynman goes, okay. So, Larry Mulloy is testifying. He gets to the point where he says, and we had no indication. Feynman goes, excuse me, and he pulls the O-ring seal out of the water; he had a C-clamp on it. He takes the C-clamp off and the thing very, very, very slowly starts going back into shape. And he has a famous quote where he
says, I believe this has some significance to our problem. Because remember, they were supposed to pop back in milliseconds. The press went crazy. They saw the experiment. I mean, this was drama, right? I told you I was a thespian. This is what we live for, right? This is the critical moment. Wow, fantastic. This is amazing. And he makes this sort of critical discovery that should have been obvious to everybody, and it turns out it was obvious to everybody, but they ignored the obviousness of it because of what was happening, and that's the politics in play. So the press is reporting that NASA is under great political pressure to launch after this. This actually turns the
heat up on NASA. Kutyna is a really keen political observer. He points out that the commission has many weaknesses in its membership. Basically, everybody's tied to NASA. Feynman's the only outsider. He's telling Feynman, you're the only one that can really get to the truth of what happened here, because you are the person who comes from the outside and doesn't have any emotional attachment to what's going on. He was just a fact finder, somebody from the outside. This is why I say I love auditors. Because really that's what auditors are. They're fact finders, especially when they come from the outside. Maybe internal auditors are a little too close to it. But external auditors, they're fact finders. That's how they should be viewed. Somebody who can help us determine what
these root causes are. And when we're managing risk, sometimes we need an extra pair of eyes to give us that fact finding opportunity. We're just too blind by our own pressures, by our own politics, by everything else that's happening to actually see what we need to see. So Sally Ride, for instance, she still had a job with NASA. Feynman was the invincible man. Neil Armstrong was a consultant for NASA. So even though these are amazing people, they've done amazing things, they still have these very strong ties. They're blinded by what they're involved in. Right? And some of the other political forces. Reagan had announced in 1986 the shuttle program will, within a year, put a teacher in space called the Teacher in Space Program. So we are
focused on getting a teacher in space. There's a lot of politics behind this because the president makes this announcement. There are people who want to please the president and they're going to do these things. So maybe there wasn't direct thumb to nose pressure on getting this done, but certainly if you work in government and the president issues a dictum, you're going to do the best you can to make that happen, right? Shuttle launches the same day as the State of the Union. Now, Feynman, check this out. And he said, I don't believe that this is true. But here's what I'll tell you. I believe that there are probably some people who are ambitious who wanted
to make this happen so the president looked good. So it was an awesomely staged event. Again, there is no direct proof of that, but we can speculate. We can say there were probably people with some ambition and maybe some ego that caused that to occur. We've got this frenzied media coverage around Christa McAuliffe. And up to now, we've had these significant launch delays. These are all problems that NASA's facing when they decide that they're not going to scrub a fourth time. I'm not saying it's right or wrong. I'm saying that based on what was happening, in the focus of what they had, in the sphere of what they could control, there are things, forces, that
are causing them to make decisions, and that's what's happening to you. When you are having problems managing risk or when you're having problems getting management to understand, probably what you don't see is some of these other forces that are happening. You might have a CEO that issues a dictum, and ego and ambition are getting in the way. You might have some frenzy around a new product or service that's being released, and you're going crazy, but we haven't even looked at security architecture for this yet, if ever. All these things will play to the same things that are happening to you. Remember, this is 30 years ago. So, the same things are happening today. Well, the
engineers spoke out, and because of the Allan McDonald testimony, Thiokol is called in for a more probative inquiry. They actually ask the engineers, raise your hand if you were in favor of the launch, and not a single hand went up. They're like, okay. So, Feynman goes, let me ask you this. Who are your most important engineers who understand O-rings? They name names. They name Boisjoly, Thompson, Capenburts. He's like, okay, three of those people are here. What did you think, Mr. Roger Boisjoly? He said, I recommended we didn't launch. What did you think? I recommended we didn't launch. What did you think? I recommended we didn't launch. Yet, the testimony is that it was roughly evenly divided in the decision to launch or not to launch. Feynman is going, that doesn't
make sense. That can't be true. You're playing politics and we're trying to get to the root of this problem. Why are we having these issues? This doesn't make sense. He talks to the managers who say the workers aren't as disciplined as they used to be. So at Thiokol they're like, yeah, they used to follow all the instructions and now they don't. When he talks to the workers, he's like, they're doing everything they were supposed to do. They're doing it by the numbers. When he talks to the workers alone with one manager present, the manager is surprised to find out that the workers wanted to talk about what they did, but the managers wanted to shield
them from having to go through this inquiry, because they saw this congressional inquiry process as being something super scary that they might not want to deal with, right? So he talks to the workers, and he finds that they're frustrated in dealing with change. These are all communication issues. These are all breakdowns in communication. Management thinks that there's a problem that maybe isn't a problem. The workers think that there's a problem that isn't a problem. Stuff that's being communicated up stops being communicated, and it's not being communicated down. These are the same problems we're facing today. It's communication based. It's the most important thing we can do. The safety officers estimate failure at one in 100. The NASA management estimate is one in 100,000 for failure.
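The gap between those two estimates is easier to feel when you compound it over a flight program. A quick sketch (using n = 25, since Challenger was the 25th shuttle mission):

```python
# Probability of at least one catastrophic failure across n launches,
# given a per-launch failure probability p: 1 - (1 - p)**n.
# Compares the safety engineers' 1-in-100 estimate with
# management's 1-in-100,000 estimate.

def prob_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

n = 25  # Challenger was the 25th shuttle mission
print(f"engineers (1/100):     {prob_failure(1/100, n):.1%}")
print(f"management (1/100000): {prob_failure(1/100_000, n):.3%}")
# roughly 22% versus roughly 0.025%
```

Under the engineers' number, a loss somewhere in the first 25 flights is close to a one-in-four bet; under management's number, it is a rounding error. That is the disparity Feynman couldn't get past.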
The difference between a 1% and a 0.001% chance of failure. Why would the safety people think it's 1% and the managers be so far off? This is driving Feynman crazy. He's like, why would we have this disparity? Because of these communication issues. And they chose, at the management level, to accept a higher tolerance for risk than what the safety engineers, who were arguably the subject matter experts, were giving. Right? Does that sound familiar? Do you hear these things happening? And we need to identify what these things are, and we need to point them out when they happen. So the big findings. The biggest contributor to the accident was poor communication. We had critical safety concerns, and they weren't reaching those who needed to hear them. They just were not.
They were not going up to management. In one case, the flight commander at NASA was never made aware that anybody at Thiokol had objected to the launch. That is a critical communication failure. Nobody at NASA passed that on to the flight commander, who would have immediately scrubbed the launch because of that rule, the Apollo rule, which says: if anyone disagrees, we do not launch, because human lives are at stake. NASA didn't accept the judgments of its own engineers, who actually agreed with Thiokol that there were design flaws in what they had, and the testing showed that. Then NASA wanted proof of the stated problem. When Thiokol was on that call and said, we want you to scrub the launch, NASA said, well, prove to us that the seals don't
work under 53 degrees, with less than 24 hours to launch. What are they supposed to do? Here's data. The data pretty much proves it, and NASA's not believing the data that's put in front of them. Have you ever heard that before? That's pretty obvious. Management had faith in the machine. Feynman has this great quote: what is the cause of management's fantastic faith in the machinery? I will turn that on you a little bit and ask you, what is management's great faith in technology today? Why do our managers believe that there's a big red button for security that says, we're secure? Why? Why do you think? Why do you think? What is it?
They're not talking to each other. What else? What other kinds of things can you think of? They want to believe it. Money. Money, right? What did you say? Easy answer. Easy answer, right? Messenger always gets shot. Messenger gets shot. Trust? If you haven't seen a failure, everything is working. If you haven't seen a failure, everything is working. That's exactly right. If we have never seen a failure before, why would we see a failure in the future? Here's what I'll counter with, though. It's a valid question to ask. You have to prove it. You have to prove why you think it will fail in the future. What were you going to say? I was going
to say, bonuses. Yeah, bonuses, incentive, all of these things. But really, what don't they understand about the fact that we need to have the secure architecture and the things? These are the things that you need to think about. Complexity is definitely an issue. How do you explain it? How do you explain it? A big red button is easy. What you're talking about has all these moving parts, and I don't like it.
Absolutely. So there's the knee jerk, right? Wow, we don't have anything and we have to have all this stuff and now all of a sudden dollar signs start going off and people have birds and stars flying around their heads because they're knocked out with how crazy this is, right? Just a lack of understanding. So again, it comes back to communication. You need to communicate. This is the most important thing. So the decision to launch contributors. We've got the 1958 Project Mercury operational ground rule, which I've already told you. No manned flight undertaken until all parties responsible felt perfectly assured everything was ready. They ignored it. This had been in place since 1958. The Apollo 1
disaster happened because of a failure of equipment. It did not happen because parties were not in agreement. There were other launches that were scrubbed because parties were not in agreement, but this time they ignored the 1958 operational ground rule. We had engineers expressing concerns about the safety of the O-rings. They had presented a convincing argument to their management at Thiokol. Larry Mulloy at NASA said the data is inconclusive. Gerald Mason, one of the managers: take off your engineering hat, put on your management hat. All of these things are happening that are contributing to this failure. It's not one thing. That's the lesson here. It's never about one thing. It's about all these things. And when we start to sum them up and they aggregate, it becomes awful. The final launch decision belonged
to Jesse Moore. He was informed of concerns but was told they had approved the launch. He was never informed that they had objected. That's a problem. That's a huge problem. That was the person who could have saved those lives, but he did not have the information required to save them. Uninformed NASA management: they had high-level managers who insisted that they were unaware of things like the recent problems with the O-rings, that they didn't have a clear understanding of the concern, and the Marshall Space Flight Center project managers failed to pass information along fully. All communication failures. Guess what happens February 1st of 2003? Fast-forwarding in time, the same thing happens. This is the Shuttle Columbia. The Shuttle Columbia disintegrated on re-entry.
And what they found when they investigated the Shuttle Columbia was the same failures in risk management that had happened with Challenger happened with Columbia. Why? Why did those same failures happen? That's a whole other talk, right? There are political pressures that were happening at the time. But how can we make it better? Which is really the core of why we're here. The recommendations of the Rogers Commission that we're going to apply to what we do today. One of their first recommendations was promote astronauts to management positions. Basically what they're saying is make your subject matter experts manage the operational activities. This is what should be happening. We should not have managers managing operational activities who don't understand the operation because management and operations are two different disciplines. They're
two different philosophies. When you're involved in an organization that is so operationally entrenched like space flight, you have to have subject matter experts at the operational level, being in management. They recommended that they redefine their responsibilities, and maybe that's something you have to do too. Maybe you have to redefine responsibilities so that people who are right for those roles, as we say, aces in their places, that they have that voice that they need to have, that they get there and they get to where they need to be. Commission recommended they establish an advisory panel with representation for many different areas and organizations. Absolutely. That's what a review board is, like a security review board. That's a really great thing. Establish an office responsible for reporting documentation of problems, problem
resolution and trends. They ignored their problem trends. They just ignored them. But you need to have an office that reports them. Changes of personnel, organization, indoctrination, or all three to eliminate the management isolation, which happened. You might not have control over that, but sometimes that shakeup is good. Develop policies which govern the imposition and removal of constraints. Establish a flight rate consistent with resources. If you're in DevOps, maybe you've got way too much in your queue right now to deal with the output. Your flight rate is now being impacted, right? What you need to deliver is being impacted by the queue itself. It's got to be managed. It's got to be focused. It's got to
be able to be within constraints that you can manage with risk. And then: are you part of the problem? When Mulloy explains how the seals are supposed to work, Feynman notes, in his usual way, that he's using acronyms, and it's hard for anybody else to understand. We're all guilty of this. Our industry can have entire conversations using 12 letters. That's insane to an outsider. They have no idea what that means. We need to work on this. This one's on us. I'm going to jump forward a little bit here. The forces working against you when you're doing this: you've got political forces, economic forces, ego, and ambition. These are all things that conspire to work against you, and it's difficult to overcome. But you need to be aware of
them so you can manage them when they happen. You may not be aware of the politics, so become aware. Ask questions. You may not be aware of the economics. Ask questions. Ego and ambition, these are personal problems. These are things we're going to run into, and they're going to be issues. Establishing a risk frame. The first thing in any risk assessment is to establish the frame, include assumptions and constraints. It's so important because that's what you're going to work under when you're managing risk. This is my set of tolerance. This is what I have to manage to. And be like Feynman. Feynman is just a pain in the ass walking around asking questions. Ask questions.
Don't just assume something is. Ask it. It's really, really important. All right. So let me jump to here. The human component. I've talked to you about Richard Feynman. I've talked to you about Kutyna. And I've said Kutyna was a mastermind. Feynman in his memoir said, I think Kutyna heard it from somebody at NASA, probably an astronaut, that these O-rings had a problem, but I can't prove that. Kutyna never said a word, until 2012. In 2012, he said, I was walking down the hall next to an astronaut at NASA. They pulled a piece of paper out of a notebook and handed it to me without looking at it, and on that piece of paper were two columns. On one side was temperature, and on the other side was resilience of
the O-rings. This was a NASA internal memo. They knew at NASA that they had problems with O-rings and temperature. Now, who do you think gave it to Kutyna?
Who do you think? I've actually talked about that person. It was Sally Ride, who was on the Rogers Commission, who had ties at NASA, and who used those ties to pass the information to Kutyna in a covert way. He then took that information, puppet-mastered with Feynman, and got the information out. She still worked at NASA. She risked her career. He still worked at NASA. He risked his career. How could they get the information to where it needed to go, to where it needed to happen? Pretty brilliant. Maybe that's what you can do too. Pretty diabolical. You've got to be thinking pretty diabolically. It's pretty tough. For a successful technology, reality must take precedence over public relations, for nature cannot be
fooled. I'm being asked to stop, which is fine. If you have any questions
for any of this: do you understand a little bit more about managing risk, about how it has to be framed and how we have to contextualize it? It's incredibly important to understand as we go through. Before I close, I want to say one more thing. I want to talk about this man, Bob Ebeling. Bob Ebeling was one of the engineers at Thiokol. Bob Ebeling personally felt responsible for the deaths of seven people, so personally responsible that on the day of the launch he told his wife, I'm going to take a gun into mission control and I'm going to stop them from launching if I have to kill everybody there. That's how personally he took it. After the launch, he spiraled into a deep depression for 30 years before he
died this year in January. He took it personally until some people at NASA and other people who had heard his story said, it wasn't your fault. My message here is this. You can be emotionally attached to things, and they can really, really, really bother you. What you have to understand is that sometimes they are out of your control. If they're out of your control, deal with them as best you can. Don't internalize them. Bob Ebeling died with a clear conscience, but I don't want you guys to walk away from something with a bunch of negativity just because there were things you couldn't control. If things are out of your control, they're out of your control. Manage risk the best you can, understand your constraints. That's the big
lesson. There's a bunch of references if you're interested. I recommend reading any of Richard Feynman's books. They're fantastic. Thank you very much.
I thought that actually. Yeah, thank you very much. So for those of you wondering, isn't this an Angler talk? Actually, no, no. Angler died, Angler's gone, just went away, kind of fizzled. Some people are happy. I'm kind of sad. I had a love-hate relationship with Angler, so I'm like, whatever. So we're gonna be talking about Neutrino. I had about five weeks to prepare this. I researched Angler on and off for about two years, and then, oh, passwords are hard. There you go. And then I found out, like, oh, I need to completely revamp my entire presentation. So, here we go, let's do it. Okay, our agenda for today, we're gonna be talking about Neutrino, not Angler. We're gonna talk about what exploit kits are, how they function, how
they work, and then we're gonna pull apart a sample. My focus today is on the exploit kit portion itself, so we're not gonna focus too much on the actual exploits that are being leveraged in there. I'll mention one of them and we will break down what occurs, the shell code that actually runs, things of that nature. I wanna focus mostly on that. So for example, like the malware that it drops also, like we're just gonna kind of reference that and move on. You'll see what I mean. Alright, I like this to be kind of a two-way communication, so the B-Sides crew really likes it when people are involved and people participate and things of that
nature. And I like to run my damn mouth and talk to people too, so it kind of works out, alright. So when I'm like, hey, who does this? You know, participate, or pretend like you're participating, one of the two, whatever. And then here's my, yeah, we're not talking about Angler, sorry. The programs went to press before I actually officially changed it, so. No biggie. Alright, this is me. I work in incident response for Bechtel Corporation. We're the largest construction, engineering, and project management company in the United States. One of the biggest in the world. Think like mega projects, like tens of billions of dollars over years and years and years. We build stuff. I don't build a damn thing. I don't swing a hammer, but you
get the idea. I have some education stuff, some certs, but I know you don't give a damn, so moving on. I like to run my mouth though, and that's gonna help, because I like to present, and so I get to do that, let's do it. I like these things, I do open mic comedy at the, I was practicing earlier by the way, at a local place where I live in Phoenix. I love retro gaming, I like to read sci-fi, fantasy stuff, and anyone here do CrossFit? Raise your hand, don't lie, don't you lie. You know, you've heard the joke, right, how you know if someone does CrossFit or if they're a doctor, yeah, they'll tell
you. So I started doing Brazilian Jiu Jitsu and I'm turning into one of those guys. I'm like, oh, you have to do it. Trying to convert all my friends. It's kind of sad. So this is our security operations center. It looks pretty. This is a professional photo we had taken. And I just like to show it off. And Rupert hangs out and does poses for us. All right, exploit kits. What are they? So exploit kits, it's a business. It's a business of exploiting hosts and getting a particular strain of malware onto the greatest number of hosts possible. Oh my goodness, this is a huge competition. It's a vendor-based competition. So you may have
heard of Angler previously, and we're gonna talk about what happened to Angler and we'll talk about Neutrino, but in reality there's three primary actors. Let's just pretend, all right, let's play pretendies that some of you in here create your own malware. I'm sure you don't do that, right? But let's just say you did. And let's just say that you wanted to spread your malware to the most number of hosts possible. Like how would you go about doing that? Well, you would basically hire someone who runs the exploit kit. The thing is, you give them the malware and their goal is to spread it for you. So there's three primary actors. There's the campaigns, the people
who redirect the actual traffic to the exploit kit. There's the exploit kit people who run the infrastructure to try to exploit as many boxes as possible. And then there's the malware authors who provide their malware and then pay for it to be distributed. So, it does so by, we're gonna break down exactly how Neutrino does its thing here, but the idea is very simple. The exploit kit landing page enumerates the host, it checks its capabilities, it looks for non-patch software essentially, and if it sees that something is vulnerable, it just goes, all right, go. And if it fails, it says, ah, fuck it. But if it works, then it goes, yay, and that's pretty much
how it works. So it just casts a very wide net, if you will.
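The "enumerate the host, look for unpatched software" step can be sketched as a version check against a list of vulnerable ranges. This is an illustrative, defender's-eye sketch only; the plugin names and the version cutoffs below are made up for the example, not real exploit-kit logic:

```python
# Hypothetical sketch of the enumeration step: the landing page
# fingerprints plugin versions and only "goes" when something looks
# unpatched. Plugins and version floors here are invented examples.

VULNERABLE_BELOW = {
    "flash": (18, 0, 0),
    "silverlight": (5, 1, 41212),
}

def parse_version(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

def is_targetable(plugin: str, version: str) -> bool:
    """True when the reported version is below the vulnerable floor."""
    floor = VULNERABLE_BELOW.get(plugin)
    return floor is not None and parse_version(version) < floor

print(is_targetable("flash", "17.0.0"))  # True  -> "all right, go"
print(is_targetable("flash", "23.0.0"))  # False -> move on
```

The wide net comes from running this same check across every plugin the browser will admit to having, and simply skipping hosts where nothing matches.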
The redirect process is pretty simple. The host over here, the user, better yet, goes to what he or she believes is a completely legitimate site. There's some funny ones that get exploited, by the way, but I'll save that research for another talk. And when they go to this site, what happens is, on the back end, their browser gets redirected over to a quote-unquote bulletproof site, and that site is what's hosting the landing page for the exploit kit. This is typically done in the background with hidden iframes and all kinds of other fun stuff. The user has no idea it's even occurring, and exploitation occurs, and then, oops. So, anyone here familiar with a
bulletproof site? You know what that term refers to? You don't count, Jack, you can't say. Alright, someone tell me. What? Coffee, it's damn good coffee though, right? The MCT oil stuff? Yeah, buddy, I like that stuff. How about a real answer? Dick.
Damn right, damn right. You know when there's a problem like EDUs always get popped, right? And they have open mail relays all the time and they're being abused. So you send an email to their abuse department and they have terms of service and they immediately respond to that, right? Well a bulletproof hoster doesn't give a damn. They just click delete. It's like, I don't give a damn. So they use those to host their illegitimate software and such. Okay, before we get too far into this, I'm gonna talk about mitigation, and this is part of the conversation piece here. We have a couple different tools at Bechtel that help us immensely with exploit kits. So this
is, I guess, a little plug for OpenDNS, and for, funny enough, I feel like such a weirdo being a security guy, be like, oh yeah, AV is great. Like, really, dude? But Symantec Endpoint Protection has an IDS built in, and that little bastard catches a lot of freaking drive-bys. Specifically, OpenDNS has this feature called drive-by download exploits, and I don't know who runs that for them. I should probably have found out and given them a plug, but they do a damn good job. It was stopping Angler left and right. So we also have FireEye that just kind of alerts us, but it's not inline, so it's just more of like, hey, oops, and then we go, damn it, and then we have to go deal with it.
So that's not really mitigation if you ask me. All right, but what about you? Who here deals with exploit kits on a daily basis or every once in a while? Who works on a blue team here? Put your hand higher. Go ahead. It's OK. She's like, I don't know what to say. What tools do you use that help you mitigate them?
OpenDNS. OpenDNS. Cool.
So you basically correlate activity and you hunt, essentially, right? So what she's saying is basically they're looking for known TTPs that they find in particular samples, and then they hunt for those in their network and they say, oh look, oops. Right, and I guess you can implement those in your custom Snort rules or whatever you guys are doing there. Anyone else? What tools are you using? What's helping you with exploit kits? Same as you. Same as me? Same as you. Or us? Yeah, and then we have our inline. Oh, you have the Web MPSs inline? All right. So anyone else here have a Web MPS inline? It stops everything. Little bastard. That's why we don't have them inline. What about the Email MPSs? Do
you have them inline, the email appliances? They stop a lot of the same type of malware that would be dropped, but of course not exploit kit methodologies.
All right.
For legitimate research, right? Yeah. Yeah, yeah, yeah.
So basically, he says they have pretty much an implicit deny situation. They whitelist very few sites. Of course, some of those, like you said, can be compromised.
Gotcha. OK. DomainTools API. What else are you guys using for your domain research and reputation scoring? DomainTools? What else? PassiveTotal. PassiveTotal? Heck yeah. Anyone else? No? Fine. Whatever. Alright, OpenDNS Investigate, that's good. VirusTotal, API correlations on those, those are good. I'm talking about moving on, boop. Alright, keeping up with exploit kits. Now, I like to just tear apart the samples. I really like reversing and I like to pretend like I'm a reverse engineer, like, yeah, I know what I'm doing, you know, I pretend a lot. So, what I do is I like to pick apart samples, but I don't like to just sit there and look at all the different campaigns that
are occurring and keep up with their changes and variants and all that. I leave that to the professionals. There are four sites that I highly recommend. One of them is Brad's site, malware-traffic-analysis.net. Our sample today comes from that site. There's also malwarefor.me, which is Jack's site. He's sitting back there. Hey, Jack, how's it going, bud? I'm sorry, I just wanted to ask you a question. Yeah. Did you ask for a really outrageous speaker request? Yes, I did.
Yeah! Oh, that's awesome! Yay! Me! Thank you. That is so great. I'm getting so drunk on this thing tonight. That is phenomenal. So in the call for papers, they had like a, you know, do you have a ridiculous request? And I was like, shit, yeah, I do. I want a golden chalice that says chaps on one side and B-Sides Las Vegas on the other. And I fucking got it.
I'm gonna walk around the hotel and be like, what the hell is that? I'll tell you. There's a story, okay. That's so great. Okay, Broad Analysis is a fantastic site. And then there's Malware Don't Need Coffee, which is run by a guy named Kafeine. He's extremely involved in the EK scene. I swear he writes a couple of them, I swear to God. I don't know him personally, but I've messaged with him, like, you know a lot about these. Okay, the evolution of the exploit kit. Anyone here ever deal with MPack back in 2006? I don't even know what the hell I was doing back in 2006. I don't know. We're gonna skip ahead a little bit, okay? That's 10 years ago. What's up?
Oh, you think so? Well, I wanna go look at that now. Okay, the source for the actual backend infrastructure may have been leaked for MPack? Oh, now I want to look at that. Okay. Oh, I'm just thinking of what to do with that. All right, anyway. From 2010 to 2013, it was really a game of Blackhole. How dare you? It's kind of a catchy tune, I like it. It's not related to Pokemon, is it? Was it? Was that a Pokemon tune? Anyone know? Alright, better not be that Pokemon shit. Alright, so in 2012 and '13, Blackhole ruled the roost. Like, it literally just owned. It was constant. Our FireEye was like, oops, oops. It wouldn't shut up, it sucked. So
the alleged creator, who went by the name Paunch, was arrested in 2013, and immediately Blackhole just fell off the face of the planet. So was he the one who actually ran it? I still haven't seen confirmation of that, but come on. So after that: RedKit, Nuclear, Sweet Orange, there's a bunch of other ones in here. I don't wanna go over every single freaking one. Those are the major players, but Angler became the big dog. Speaking of which, hey, whatcha doing?
OK. Just pretend that didn't happen. Moving on. So you'll see down here in May of 2015, Angler had the lion's share of the market, 82%. And it just increased from there. Seriously, it had a stranglehold, really. Yeah, so about that. Nuclear disappeared in April of this year. It just went away. I have not seen attribution for it yet. Have you? Any idea where it went? Did you stop writing it? Is that what happened? All right. So, and then in June, 50 people were arrested in relation to the Lurk malware campaign. Lurk was using a derivative, technically a variant, of Angler. Oh, it's Triple X, right? Wasn't that it, I think? Anyways, these people all get arrested and then Angler just disappears. So the question was, well, is
it gone gone or is it just temporarily gone? Well, the thing is that all the popular campaigns, the people that get paid a lot of money to redirect traffic, the ones who were very, very devoted to Angler, they all switched to Neutrino. When I say they all, you know what I mean, the big names switched to Neutrino. So everyone was like, oh, that means it's gone. In fact, Neutrino, capitalizing on this, because why wouldn't you, doubled their price. So their monthly price went from $3,500 to $7,000. So that's another sign. And then they made a statement saying, we're not going anywhere. So it's like, all right, I see what you're doing there. And then here's F-Secure
Labs showing that in June, we have, well, technically July, actually, through what month is this? Was that projections? What the hell's going on here? I just realized that. It's in the future, man. Look at this. What's going on here? What kind of bullshit is that? This is August, right? Holy shit. What did I do last night? I don't know where I am. Is that what's going on? Oh, it is. I just drank a lot this morning, that's all? All right, fuck it. All right. So this is actually just going up to mid-June. Oh, I thought that was funnier and cooler. Damn it. This is recorded, right? That's going to be funny to watch later. All right. So Neutrino took over.
Neutrino uses a single SWF file, one Flash file, to do all the damage. And we'll get into that coming up. So I want to talk about campaigns first off. Actually, we kind of already covered this. The people who run the campaigns are the ones who redirect the traffic to the landing pages. So technically, they can be the same as the people who run the backend infrastructure for the landing page and the whole exploit kit, or they can be completely different. What campaigns are you all seeing in your networks? Are you seeing them? EITest and pseudo-Darkleech are like the two common ones. Anyone know which campaigns you're seeing? No? What? Virtual Madonna? What are you saying?
I'm gonna go with virtual Madonna, because I like that. Alright.
So EITest is an example, and apparently it started back in 2014. We didn't start seeing it in our network until, I don't know, a year ago-ish. It used this SWF file. You know what, this is boring. Let's just do this. So what happens is they compromise a webpage, and in that webpage they inject some code. That code links to a Flash file. That Flash file just generates some JavaScript. The JavaScript switches over to an HTML page. The HTML page then links to the actual landing page itself. It was originally called EITest simply because of the variable names; the id and the name down here, you'll notice they're EITest. It's evolved since these original screenshots were taken. This comes from a Malwarebytes article by Segura back in 2014. So it's
ridiculously similar. They just kind of changed the variable names. It's still called EITest as a moniker to identify it. All right, here are the tools that we use to decode some of the things we're looking at. The two big players are right here in the middle. The JPEXS Free Flash Decompiler and FlashDevelop; without them, I don't know what the hell, this wouldn't be happening. I wouldn't know how the hell these things work. So, we'll get into how they help me out over here. Oh, right here, yeah. Nope. Yeah, FFDec, the Free Flash Decompiler, and then FlashDevelop. By the way, FlashDevelop is, as far as I know, well, I guess NetBeans doesn't count. It's my favorite free IDE, I'll just say. You just throw in
whatever SDK, and then you compile to your heart's content. So, and that leads right into the next slide, what I do is I rip the code out with FFDec, and I turn on deobfuscation, because they obfuscate the hell out of these things. Like I even put there, you're gonna yell a bit, you're gonna be like, piece of shit. But then you just use their deobfuscation, and you're like, oh, I can read that, yay. Seriously, without that tool, it would've been hell, oh man. But then I just put the code into FlashDevelop and then I can debug it and do whatever the hell I want. Yay me. So here's an example. For instance, look at these class names that
they're importing. You see those? What the hell is that? I'm supposed to keep track of all those silly, no, that's stupid. But if you look up here, these features are off. But as soon as you turn them on, gotcha, bitch.
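That rename pass is easy to picture with a toy example. This is a hypothetical sketch, not how FFDec actually works internally; I'm assuming obfuscated identifiers are marked with a recognizable pattern (here, § delimiters) and just mapping each one to a sequential readable name in order of first appearance:

```python
import re

def rename_identifiers(code: str) -> str:
    """Toy deobfuscation rename pass: map each unreadable identifier
    (marked with paragraph-sign delimiters, an assumption for this sketch)
    to class_1, class_2, ... in order of first appearance."""
    mapping = {}

    def sub(match):
        name = match.group(0)
        if name not in mapping:
            mapping[name] = f"class_{len(mapping) + 1}"
        return mapping[name]

    return re.sub(r"§[^§]+§", sub, code)

obfuscated = "§a!?§.run(); §a!?§.stop(); §x%§.go();"
print(rename_identifiers(obfuscated))  # class_1.run(); class_1.stop(); class_2.go();
```

The real tool goes much further (it tracks execution order across classes), but the effect on readability is the same idea.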
So now everything's all, it just enumerates the class names and the variable names in order of how they're executed. So it's like, even if you're on, say, class three and you're like, shit, what gets loaded next? It's class four, stupid. Yeah, it's really, really easy to use. I love it. All right, and now we do the actual analysis. What time is it? Oh, perfect. Yeah. Our sample came from malware-traffic-analysis.net. The original website was scarsboroughcricket.ca, which had
injected code in it, oopsies. This came from Brad's site, so check out that link, by the way, and I'll have references, I'll put them online later, I have them all compiled. It actually leads to a dump, and in that dump there's like five or six captures; there's one EITest in that dump, and that's what we're looking at. So, the malware that this particular sample was involved in delivering, it was called, and I'm gonna butcher this, I didn't research how to say it properly, so I don't know how to pronounce it. Anyone know how to pronounce that before I screw it up? Want to try? No?
Bandarchor? Bandar-chor? Bandarchor? I had no idea. It's just a locker. It's a CryptoLocker-variant wannabe kind of thing, right? It encrypts all the user's files. And this one's really ghetto. It just pops up a Notepad window like, sucks for you. So, yeah, this isn't a malware analysis of this malware; this is about how it got to the machine, so we're just gonna go like that. So, here is a little Wireshark review filtering on GET requests, and we see here the initial GET request was for the compromised site, so there's Scarsborough Cricket. Then we have two requests after that, and
these are both EITest right here. The first one is the Flash file itself, because they use the Flash file for redirection, and the second one, which is very similar to the first one, and I'll show you exactly how, is the HTML page. Then from there, it goes to the actual landing page. If you're wondering, this is a SWF file, this is a callback, and this is the encoded, or technically encrypted, malware. So, here, whoa, whoa, hey, what are you doing? All right, I almost threw my mouse at myself. So here's our little guy here, and again, instead of EITest, now it's interconnection stuff, whatever that's all about. And then I have highlighted the actual SWF file that's embedded, and here it is coming down in Wireshark, but
that's boring, so let's just skip past that. All right, now I really wanna go into the full analysis of every line of code and how it works, but we sure as hell don't have time for that. This thing's ridiculous. I kept getting pissed off, because like I said, I did this in five weeks, and every time I extracted more information, I'm like, yay, I'm so awesome. And then it was just this never-ending rabbit hole. I was like, fucking bitch. So for this one, what I'm gonna show you, what I did is I basically took it apart; this is ActionScript 3, by the way, which is what you end up having to debug. So if you
don't know action script, you just learn on the fly. That's what I did. So you look in here, And what we're doing is we're calling a function called go. Go calls a function called da, da. See that little guy right there? See how da one, and then passing two, and then passing three. That's basically, it generates JavaScript. Technically it decodes what you're passing into, you know, who cares. Anyways, and then there's the URL that you're gonna go to. So I just trace these out. Trace is like a console log. You just shoot it right to the console, right? And then we get this stuff down here at the bottom, and I've compiled it here. So
we get a JavaScript function. So, again: the Flash file for the EITest redirect creates JavaScript. The JavaScript is right here. The JavaScript creates a div called d. It enumerates the client to make sure they're running Internet Explorer. Let me say that one more time. It checks the client to ensure that they are running Internet Explorer. Now, completely unrelated here, how many of you force your users to use Internet Explorer? Or maybe not you, but management. Bullshit. No, you're liars. You're filthy liars. No, no, no, no, I'm calling bluff on all of you. You're trying to tell me that you distribute Chrome or Firefox to your users? Or you allow them to use it? Why do they do that shit? Don't you hate
that? He says they have free will, but they still choose IE, because yay! Like, if you have a user using IE, just show them this screenshot and be like, hey. Stop it. All EITest redirects, all of them from this particular set of campaign redirects, would check: Firefox? Screw it.
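That browser gate is simple to picture. A rough re-creation in Python (the exact test EITest's generated JavaScript performs may differ; this just captures the idea of sniffing the appName and user agent before committing to the redirect):

```python
def should_redirect(app_name: str, user_agent: str) -> bool:
    # Only fire the redirect for Internet Explorer; anything else
    # (Firefox, Chrome...) gets left alone, which is why non-IE
    # users never see the rest of the chain.
    return ("MSIE" in user_agent
            or "Trident" in user_agent
            or app_name == "Microsoft Internet Explorer")

print(should_redirect(
    "Netscape",
    "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"))  # True
print(should_redirect(
    "Netscape",
    "Mozilla/5.0 (X11; Linux x86_64) Firefox/47.0"))                   # False
```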
Anyways, it just bothers me. Okay, and then what it does is, down here in the innerHTML, it sets it to the base URL of the SWF file itself plus a little suffix that's made pseudo-randomly. And that pseudo-random part is created right here in this for loop. It's basically gonna be a bunch of letters and then one of these suffixes right here. I ripped that part out and I threw it in a loop that runs 20 times, and here's an example of the crap it spits out. So it's gonna be the base URL plus some crap like this. Our example is right here. So in our live capture, see that .htm? Yeah, so that's
the actual HTML page that then redirects to the landing page. Here's some more redirection. This is the actual HTML page itself. It tries to redirect using two different methods: it uses the meta refresh and it uses JavaScript location.href, silly stuff. Yeah?
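The pseudo-random suffix loop just described can be approximated like this. The letter count and the suffix list are stand-ins; the real sample draws from its own hard-coded set:

```python
import random
import string

SUFFIXES = [".htm", ".html"]  # illustrative, not the sample's exact list

def fake_redirect_url(base: str) -> str:
    # a bunch of random lowercase letters, then one of the suffixes,
    # appended to the base URL of the SWF file
    letters = "".join(random.choice(string.ascii_lowercase) for _ in range(10))
    return base + letters + random.choice(SUFFIXES)

for _ in range(3):
    print(fake_redirect_url("http://example.invalid/"))
```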
Yeah, correct. Yep. Yep. To reiterate, this is all in the background. The user has no idea this is all happening. All right, now we hit the landing page. Yeah. So we've just handed over control of execution from our referring EITest campaign to the actual exploit kit infrastructure. Neutrino uses one SWF file. One SWF file. Now, one SWF file. That's the third time I said that for a reason, and it's in bold up there that we've blocked Flash in our environment. How many of you have blocked Flash? Really? Yeah, hell yeah. Suck it, right? Bye, Neutrino. So, the cool thing about this is, do you remember when the Hacking Team dump came out? When that came out, we actually had, oh you're
recording this, huh? We had an attempted thing and nothing happened and it was great. So we actually had, no, we had four attempted emails. I believe IronPort mitigated it for us. One of our systems stopped it, but we looked at it and we were like, oh shit, that is targeted as hell. And it was right after they had time, like hours, to weaponize it. And we were like, oh damn it. And there was no patch yet, so we kind of freaked out. Then they released three more zero-days for Flash within, what was it, four days, five, a week or something? We were like, done, gone. So if you don't have Flash disabled by
now, throw some exploit kit statistics toward management and be like, just cut that crap out. I mean, we still have it on our clients and we will whitelist certain internal uses and such, but forget it. Okay, and here's the SWF file coming down. That's boring. And there's the actual, that's boring too. All right, the one SWF file had a decent detection rate back in early July; the original 7/13 analysis was 25 out of 55. AVware even knew it was Neutrino. It was like, ah, we know that. All right.
Well, they actually did, yeah, yeah, yeah. So he's asking if we go into VirusTotal to see the earliest detection. I believe on this sample it was either that day or a day prior. I think. I believe. See what I did there? All right, the one SWF file, and this is what Angler also did, the one Flash file contains within it a secondary Flash file. The secondary Flash file is actually pulled into memory and then executed. So when you're doing analysis, you're really trying to pull out the secondary file, which is what we're gonna do. So these binary data blobs are just little binary arrays that are stuck in here, and they get converted into, or they get
pulled in as, a ByteArray. Anyone familiar with the ByteArray in ActionScript? You can just kind of guess what it is: an array full of byte values, the individual direct byte values, for fast machine processing, simple as that. So what we're gonna do is, I don't even know how to, I'm admitting this on a recording, damn it. I don't even know if you can put the binary data blobs in it. I don't write Flash, I don't know. I learned Lingo. You remember Director? You remember Shockwave? Remember that? Yeah, back in 1997, I learned the programming language for it. I studied my ass off, and I learned it. And then two months later, they were like,
oh, Flash is better. I was like, fuck. Piece of shit. All right, so I don't know how to do that. But what I do know how to do is manually extract the hexadecimal values, paste them in as a string, and then use Henri Torgemane's Hex.as utility. He's got a toArray and a fromArray to convert between strings and byte arrays. We'll just do it our way. And we were drunk last night and Keith, because he's weird, told me to put pokeballs on her chin. I don't know why I left them in there. All right, so we're gonna do some extraction here. This is where we basically go into demo mode. So let me
do a little quick time check there. Oh yeah, 20 minutes, hell yeah, let's do it. We're gonna move over here. This is a Windows 8 malware VM. Actually, a little plug for the SANS Institute: their FOR610 is the reverse-engineering malware course behind their GREM certification. So I went through that. Phenomenal class, by the way. And I just use this for my Windows 8 stuff, because I like it. So what are we doing? We're doing stage one. Actually, no, we'll just open it. I'm just gonna open up FlashDevelop directly and show you how I extracted out stage two. So this is the part where it gets a little, the two-way
communication is gonna be kinda eh, because I'm just gonna show you a bunch of code crap, but we'll see what we get. So this resolution's horrible, but that's fine. All right, we're gonna compile it. You ready? Oh, demo gods, don't be a dick. Oh, it compiled, holy hell, all right, run it. No, that's... Oh, I...
I probably should have opened the proper project. Funny enough, I really didn't plan that. If you read that on the screen, that was pretty funny. That was 100% an accident, but I enjoyed that thoroughly.
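For reference, the manual route described a moment ago, pasting the hex in as a string and converting with Hex.toArray and Hex.fromArray, has a near one-liner equivalent in Python, shown here as a sketch of what those helpers do:

```python
def to_array(hex_str: str) -> bytes:
    # tolerate whitespace, the way you'd paste from a hex dump
    return bytes.fromhex("".join(hex_str.split()))

def from_array(data: bytes) -> str:
    return data.hex()

blob = to_array("41 42 43")
print(blob)              # b'ABC'
print(from_array(blob))  # 414243
```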
Dumbass. All right, so here we go. We're instantiating the class name that they had, and then we're starting it via startdammit.derp, which is going to do this.
The first thing we do is we fill this array called damnX. Well, it was called x, but it was pissing me off, so I changed it to damnX. And damnX is over here, right there. So this array holds some Flash-based commands: event listening, adding children to the stage, loadBytes. They try to obfuscate by using this array and referencing index values in it rather than using the phrases directly, whatever. So we fill that, we do that. Okay, go again.
Over here, it's looking for a stage environment. If you don't have a stage environment, you're probably in some kind of debugger or some type of working environment. So I just skip the part where we check for the stage and I just call the function. So we call this function called r, and r, this is where it gets kind of funny, because sometimes they just give you these really freaking obvious names for their variables. They try to make it so hard to read, and then you have crap like embedAdditionalInfo. And even better, embedRC4Key. Oh, Rivest, as in RC4. Okay, that's just the variable name in there. That's kind of weird. So we're loading those, and this is
what we're doing here. See how I'm using the toArray function, and then I just have a big, long hexadecimal value that goes off the screen. We're just making our own byte arrays, because that's just the way I do it. So once we do that, we then start this guy right here, embedLanding. This will eventually be the variable that contains the decoded second stage, if you can call it a second stage technically, I'm going to, the SWF file that's going to be loaded into memory. So as we scroll down, all those different, do you remember that big list of binary blobs, basically? Those are all just catted together, see: writeBytes, writeBytes, writeBytes,
blah, blah, blah, blah, blah. So we're just gonna skip past all that. And then we call this function, there we go. And then we call this function called d, and d is actually a decoder. So I guess they call it d because it decodes, that's cute. And we pass in, and again, look, I didn't write that, they wrote that. You pass in your RC4 key and your encrypted SWF file, and at this point, now, it's decrypted. So what we're doing now is we're gonna change it from a byte array to a string, and we're gonna call it stage two, because that's what I wanted to call it. So we do that, and now we're gonna trace it. So tracing it again just means writing it out
to the console. So as soon as I step over this, down here, this is the stage two file that we want. So I basically just pop that into a hex file and then, there it is. Now you'll notice that down here what it does is it actually creates a loader, a Flash Loader, and then a loader reference object. And it does some more crap that you don't care about. And then, it's actually being a sneaky little chump right here: this function called m not only decodes the second SWF into memory, but then it passes a byte array to a specific function in that second SWF file. And the purpose
of that is multi-fold, actually, but one of the good ones, for trying to stop dynamic or static analysis, is that if you didn't notice that and you simply extracted stage two and tried to run it, it would be like, where's my parameter? And you're like, what parameter? Just run it. So it's another anti-analysis trick. It also makes it so they can reuse the second stage container and just push new arrays into it, so they can quickly weaponize new crap and push it out. I'm sure it's for other things too, but I don't know. So as soon as we pulled that guy out, I popped it up on VirusTotal, and we had a
detection ratio of 12 out of 54. And this was a couple weeks ago, on the 21st of July. So stage two, I just did that, moving on. In stage two they stepped away from the obfuscation; they were like, oh hell, no one's gonna find this, so we're just gonna name everything exactly what it is. So there's some binary, what are these called? Binary data blobs again, and they're literally labeled like nw22, but see this _swf right here, see that? They're telling me this is a SWF file for an exploit, and that's one, and that's one, and this is a VBScript exploit, and that's one. I'm like, oh, okay, thank you. So, doing analysis on one of these exploits, I identify, I didn't identify a damn thing.
It was identified as CVE-2015-8651. This is the one part that really pisses me off in this talk, because I wish I had time to personally rip all five of these apart to ensure that they are the proper CVEs that I wanted them to be. I didn't have enough time, though, because I had to change from Angler. Damn it. So this particular CVE, this is what VirusTotal said it was. So if that's what it is, it's just an integer overflow. If you're not familiar with an integer overflow: essentially, if you have a bucket where you can store a certain amount of contents and you fill it up and you overflow
it, you start overflowing your data into places it should not reside. Somewhere you can probably get the instruction pointer to point to and then you can get remote code execution and shit like that. So, ah, let's go pull it out. So we're gonna take a look at the stage two file now. Wait a minute, does this go till 55?
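Since Python integers don't wrap, the bucket analogy can be demonstrated by masking to 32 bits, roughly what happens when an attacker-controlled multiplication runs past the range of a 32-bit size field. The numbers here are illustrative, not taken from CVE-2015-8651:

```python
MASK32 = 0xFFFFFFFF  # emulate a 32-bit unsigned size field

def alloc_size(count: int, elem_size: int) -> int:
    # count * elem_size wraps instead of erroring, so a huge request
    # yields a tiny allocation; later writes then land out of bounds
    return (count * elem_size) & MASK32

print(alloc_size(0x40000001, 4))  # wraps around to 4, not ~4 GiB
```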
People here have never debugged shell code before in your life. Raise your hand. Your hand was first. Do you wanna help me out later? Screwed yourself, you screwed yourself bro. Ah, aww. Gotcha. Well I mean nevermind. All right. What, no. What? All right, here we go. Let's do this. And we're gonna go back into Flash develop. We're gonna load stage two and we're just gonna kind of run through the code. So if you're wondering what's going on from this point forward, now we have stage two extracted out, right? And we also have the binary data that was actually pushed into it, so we're gonna manually just shove that crap in there and run it. And then I'm gonna debug through it, show you exactly how it works.
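As an aside, the d decoder from stage one is plain RC4, which is symmetric: the same function encrypts and decrypts. A from-scratch sketch, checked against the well-known published test vector:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation algorithm (PRGA), XORed over the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(ct.hex())         # bbf316e8d940af0ad3
print(rc4(b"Key", ct))  # b'Plaintext' -- the same call decrypts
```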
Eventually, once it creates these five exploits, we picked one, I grabbed one of them, and I analyzed that one, but we don't have time to deeply analyze it, so I'm gonna show you the shell code. We're gonna walk, you are, aw, did we leave? There you go. I thought he left, I was like, did he leave? What the hell? Screw that guy, I'm not doing that. You sir, what's your name? Ned? I said like you just made that up. Ned! Ned, oh there, is gonna walk, we're gonna actually have him debug the shellcode, and we're gonna see what the shellcode does. It's really weird, I don't know why they do it, but you'll see. Alright, so here we go, we have stage, no we
don't, we didn't do it yet. open up the proper file before we do something stupid again. Stage two, boop-a-doop-boo-doop-doop-doop-doop, close that, and main. Okay, so in stage two, we are using getitgirl, and we're passing the array that I extracted out of the previous silliness from that m function. So this is the stuff that's supposed to be hidden, like, oh, you're not supposed to know it's there, right? Well, we'll just pop it in there. And what that's gonna do, first we're gonna compile it,
I can ignore that. And I will, okay. Forget your type declaration. Okay, so as soon as we start debugging, we see that we're in this method three. ET gets called over here. That runs method three. We go in here, so all right, what are we doing? Well, first off, we're getting rid of this, because that's in my way. And that's in my way.
First thing we're gonna do is call method six. Method six is gonna create a variable with strings to enumerate the client, meaning it's gonna find out exactly what machine it's running on, what browser it's running in, all that other fun stuff, right? So let's take a look at how they do that. I forgot the button. F11. There we go. So, I lost my place. There you go. So what this does is it uses a combination of two things. Number one, it uses ExternalInterface.call. Anyone familiar with ExternalInterface from ActionScript? It grabs the container of the engine that's actually processing your client-side browser scripting, meaning it runs JavaScript in our case. So we're gonna run JavaScript, and some
of this JavaScript does stuff like this: window.navigator.appName. Simple as that, just regular JavaScript calls, and it pulls the results back in. The other thing it uses is ActionScript's Capabilities class, which I just found out about. So apparently within Flash, within ActionScript 3 itself, you can enumerate what it's running on and such using the Capabilities class. Okay, so it's gonna grab whether there's a debugger present, the resolution of the screen, it gets all this crap. So we're just gonna step past this and then take a look at the results.
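The enumeration just walked through can be modeled like this. The field names and the shape of the capabilities data are my own stand-ins, not the kit's actual variables; the point is the two sources, JavaScript via ExternalInterface and ActionScript's Capabilities:

```python
def enumerate_client(js_eval, caps):
    # js_eval stands in for ExternalInterface.call running JavaScript
    # in the hosting page; caps stands in for flash.system.Capabilities
    return {
        "appName": js_eval("window.navigator.appName"),
        "userAgent": js_eval("window.navigator.userAgent"),
        "resolution": f"{caps['screenResolutionX']}x{caps['screenResolutionY']}",
        "isDebugger": caps["isDebugger"],
    }

# Lying to it, the way the demo does, since there's no real browser here:
fake_js = {
    "window.navigator.appName": "Netscape",
    "window.navigator.userAgent":
        "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko",
}
fake_caps = {"screenResolutionX": 1920, "screenResolutionY": 1080,
             "isDebugger": False}
print(enumerate_client(fake_js.get, fake_caps))
```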
We didn't step past it yet. Now we did.
And var1, so here are the results of processing that. The resolution's a little off here. Now, by the way, I lied. I lied to it. I told it that we're running IE11, Netscape was my appName, we're running Windows with the full Windows 8.1 version string, and I gave the user agent string for, I don't know, something, I picked one. So I just lied to it instead of having it try to process that, because we don't have an ExternalInterface available in my debugger, so I just told it what it wanted to hear and moved on with life. All right, next up we're gonna call function eight, or method eight.
Don't you dare, bastard. What are you doing? Oh, I just hit the wrong key. Cool story. All right, so here we're gonna fill up this variable called variable two, and this guy does, oh, where's my Locals panel? Am I blind? Do you see Locals over here? What did I do? All right, well, whatever. Boop, there we go. So variable two now is, do you remember that byte array that we actually pushed into the secondary SWF that was extracted out? What was in that? Well, it just got decoded, and what was in it is this stuff over here. There's a list of links. These links, the back URL means if the exploit fails, just go back to wherever the hell you came from. I don't wanna enumerate
all of them, but some of them are for the specific exploits. And there's also a ping-back. So what this little sucker does is, if it finds out that it's about to run an exploit, it hits back to the server and says, hey, we're gonna run this one. It just helps them understand which works best in what environments, basically. Or at least I think that's what it's for, I don't know. And then here, they actually give us, look, they even call it the key for the payload. There's our decryption key for the malware it's gonna download later. Oh, cool, thanks. Could've obfuscated that a little more, whatever. And then here are the checks for whether it's gonna
run these particular vulnerabilities. So, three Flash-based vulnerabilities and two IE VBScript-based execution chains it's going to try to exploit. Okay, so we've jumped on past that. I gotta stop hitting F6. Don't hit F6, stupid. Yep, boop. Alright, now we're gonna call method 10. I don't know what the hell it does. Here's what it does. Now we're looking to see if the client is actually inside of a non-standard browser-type environment. We're looking to see if we're running PhantomJS, Node.js, CouchJS, or Rhino, which is Mozilla's JavaScript engine written in Java, and also if we're in a debugger. So right now this is part of checking to
see like should I run right now or should I like back off is what it's looking at. And we're gonna lie to it of course, so we're just gonna say yeah, dude, you're good. Then we're gonna run into method 11. Method 11 is the ping back. And all this does is it creates an image location. And that image location is just gonna ping back to the servers by just trying to access it. And all it does is it tells them, like, so far so good is what it's doing. Pretty simple stuff.
Surprisingly enough, at this point right here, it does not provide an identifier. It just loads a standard source and gives the link for JS ping. So later on however, if the, oh, did anyone hear the question? I'm sorry. Five minutes? I thought I went to 55. Oh, what? That's horrible. I planned this all wrong. Okay, so screw all that. All right, here's what it does at the end.
Shit. Ah, caught me off guard there. Do I have to leave? Can I do five minutes questions? Yeah, I can't get one. Alright, alright. Okay, so skipping that slow stuff over there. Here's what happens. Exploits in a row. Like ducks sitting on a fence being shot down. We have, where's that shit come from? You threw me off my game, damn it. We have two different VBScript and then we have three different flash exploits. These are separate flash files that are once again loaded into memory, right? So let's go take a look at one of them. I pulled out one of them and the one that I pulled out was this one right here, the one that executes what I believe is this guy right here. Now, when
I extracted it out, I was gonna show you how, but apparently I don't have time for it. So what it does, well actually first up, I popped it into VirusTotal this morning. I forgot to upload it to VirusTotal earlier; I was like, oh, I should upload it. And when I did, it only had five out of 54 hits, and that's actually what tells me it was potentially this particular CVE, the signature of which I don't fully trust, but based on the actual chain that I was gonna show you, it does look like an integer overflow. I just couldn't confirm it, no time. But notice, it actually associates it with Rig. See this little guy right
there? Rig is one of the other popular exploit kits right now. It's not as popular as Neutrino, but it's probably number two, I'm assuming, right now, pretty much. So, what does that little guy do? Well, essentially, it loads up some shellcode. Are you guys familiar with the concept of shellcode? If you're not, oh, I left a slide in, yay. So shellcode is essentially what you're trying to do is you're trying to change the instruction pointer in the CPU, which is what tells the CPU what it's about to run or actually execute. You're trying to overwrite that with the beginning of your personal code. So once you can obtain that, the CPU then starts processing your
code. It's called shellcode from an old term, because usually it was used to pop a shell, or a reverse shell, so that I can maintain access to, or get access to, your system, I should say. So we're gonna take a look at some shellcode. The shellcode that it actually loads in the SWF file does a couple things. It finds itself in memory, which is required, and I'll show you why. It XORs the rest of itself to prevent static analysis. And then it actually calls CreateProcessA, which is the ANSI version of process creation in the Windows API. So, here's static analysis of the shellcode. The problem we have is down here.
See where the analysis fails down here? Well that's cause, see right here? See that 9A value? That's the single-byte XOR key it's going to use, and it's going to loop 0x596 times, and as it loops, it's going to decode itself. So static analysis falls flat because the code is still XOR-encoded. So we're going to do a live analysis. Oops, ignore that. All right, was it Ned? Was your fake name Ned? You want to come up here and decode the shellcode?
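That decode loop is simple enough to sketch in Python. Only the key (0x9A) and the loop count (0x596) come from the sample; the buffer contents below are made up for illustration:

```python
# Sketch of the shellcode's decoder: a single-byte XOR with key 0x9A,
# looped over 0x596 bytes in the real sample. XOR is its own inverse,
# so the same routine both encodes and decodes.
KEY = 0x9A
COUNT = 0x596  # number of bytes the real loop walks

def xor_decode(buf: bytes, key: int = KEY) -> bytes:
    # Walk the buffer and XOR each byte with the single-byte key.
    return bytes(b ^ key for b in buf)

# Round trip: encoding then decoding gives back the original bytes.
hidden = xor_decode(b"CreateProcessA")   # "encoded" blob
print(xor_decode(hidden))                # b'CreateProcessA'
```

This is also why the live approach works: once the loop in the debugger finishes, the buffer in memory holds the plaintext that static tools couldn't see.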
I pop the shellcode into an executable. We take the executable and we drop it into OllyDbg 2. And in OllyDbg, all right, what you're gonna do is you're basically just going to be pressing one of the F keys when I tell you, okay? Okay. All right, now. Hold on. So the first thing we're doing here is the shellcode is actually going to jump down in itself. So you'll notice here it's gonna jump short. So go ahead and hit F7. So now we just jumped over here. See, we jumped to this memory location. This memory location now is going to call back up here. Why the hell is it gonna do that? Anyone know why?
What's that? Yep, so when you do a call, wait, what? Yeah, sorry. When you do a call in assembly language, you're doing two things. You're pushing the next instruction's location onto the stack, which is kind of like your working variable zone, let's just consider it that for right now. And then you make a jump. Now the reason we're doing that is what? Why do we want to have that address on the stack? Right, so now I'm gonna know where I am. I'm gonna know my location. Because of address space layout randomization in Windows, Vista and up, right? Shellcode, whenever it starts to execute, and even when you're trying to exploit something, you have
no idea where you're actually going to land in memory. So by doing this, it's finding out where it exists. Yeah. So go ahead and hit F7 and look down here. Boom. See that now? We have the value where we reside now. All right, go ahead and hit F7. We're going to pop it. So now EAX up here contains our location, so we know where we are. All right, hit F7 again. We're going to clear ECX. That's the counter register. So now we're going to basically start some counting. We're going to move this value here into CX: 0x596. Go ahead and hit F7. And then we're going to start our loop. This starts the
loop right here. Oh, whoops, I'm missing my
Let's start the decode process. So go ahead and hit F7. Right now we're actually going to XOR by 9A, and we're pointing to this location over here. Follow in dump, memory address. So right down here we're gonna start XORing these values. Alright, go ahead and hit F7. And you'll notice it just changed from B-something to 22, and if he keeps going, I was gonna go through this with more time, but we just gotta keep going. Yeah, so basically, yeah, exactly what it's gonna do. So it's starting at the bottom, and as it XORs, it's revealing itself coming back up toward the top. What we're gonna do is we're just gonna click right
here, go ahead and hit F2, and then F2 again, and then hit F9. Alright, we just, what did he do? Oh, it's fine. What the hell'd you do, Ned? You dick! Alright, so now what we've just done is we've un-XOR'd, basically, so if I do this, the data that was hidden previously. So this is all being done in shellcode, in assembly language on the processor, right? So now what we're gonna do is we're gonna jump. So hit F7, and then F7, F7, guess what? F7, and again, and again, and again. Whoa, whoa, whoa, fucking Ned! See this guy? He's like, yeah! Okay, this right here: FS:[30] is a pointer, stored in the Thread Information Block (the TIB), to what's referred to as the Process
Environment Block, the PEB. So essentially what that does is you're trying to find out information about the modules that are currently loaded. So what you want to do is hit F7 again. We've just found NTDLL. It was just up there. So what it just did is it found the location of NTDLL on the system. We're looking for kernel32. When we find kernel32, we can then find the functions that live inside kernel32, and then we can call them and do shit. So hit F7 again. Did I write those? Did you just write that? I did. I don't remember if I did that or not. All right, F7
again and again. And then one more time. Yep. And then one more. All right, one more. All right, one more. OK, one more. And one more. And then one more. All right, what we're about to do here is we're about to load the functions that are actually inside kernel32. So if you look at the very top right-hand side, here. Hit F7 one more time. I lied. Do it one more time now. There. The first thing we have up here is a function name, AcquireSRWLockExclusive. I don't know what the hell that is. OK, cool. So what we're looking for right now is, now it's going to start a loop. It's looking for a particular Windows API function. All right?
So we have a little bit of time. Thank you very much, Ned. Much appreciated.
take the string that it has right now and compare it to this value here. And then if it finds that part, it's going to compare it to the next value here. So this is basically what it's looking for. We have the values of what it's looking for. So when we decode that, like say we just pop that into Python, it comes out as: we're looking for Eric's ass. Yeah! Anyone here named Eric? Anyone? Put your hand up. Looking for your ass, bro. No? Okay, so it's actually stored in little-endian format, so it's actually backwards. So we're looking for something that starts with Create and ends in ssA, which is CreateProcessA. When we find it,
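The little-endian decode is a one-liner in Python. The shellcode compares the export name four bytes at a time, so each immediate in the disassembly is a DWORD holding four characters "backwards"; the values below were computed for this illustration rather than copied from the sample:

```python
import struct

# Each compare immediate is a little-endian DWORD holding four characters
# of the export name. These values were computed for illustration, not
# lifted from the sample's disassembly.
chunks = [0x61657243, 0x72506574, 0x7365636F, 0x00004173]

# struct.pack("<I", ...) re-serializes each DWORD little-endian, which
# puts the characters back in reading order.
name = b"".join(struct.pack("<I", c) for c in chunks).rstrip(b"\x00")
print(name)  # b'CreateProcessA'
```

Read one DWORD in isolation and you get the "Eric's ass" effect: 0x00004173 is just `sA` padded with nulls, which prints backwards as a string.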
we end up right here, so let's just do that guy: F2 it, then F9 it.
All right, so we just found the process that we want, and then I'm just gonna expedite this process here.
Wait for it. Wait for it.
Oh, I skipped past it because the time got me. Oh, what a dick. All right, so what it's gonna do is, we have this data in here, and it starts with cmd.exe. See right here? So what that's actually trying to do is the following. I was gonna show you where it loads the argument, but I wasn't looking in the right place because I got rushed for time, so I skipped it. That's fine. What it's gonna do is this right here. It's gonna run this. There's also some obfuscation here: the carets. If you try to run that in a JavaScript interpreter, it tries to parse each caret as an XOR operator. Ha, dick. You have to
run it in a command prompt, which strips them and comes to this, and this actually just executes the malware. So it opens up a stream on disk. I was gonna show it to you, but we're basically out of time. So this right here, here's the thing though, why the hell does it do this? The shellcode does not execute a call to, like, URLDownloadToFile or something, but rather it runs command prompt plus additional JavaScript that functions as a downloader. So why the hell does it do that? I don't know. I don't know why it does that.
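The caret trick deserves a quick illustration. `^` is cmd.exe's escape character, so the command prompt consumes it while parsing (outside of double quotes), whereas a JavaScript engine treats `^` as the XOR operator and chokes. A minimal sketch; the command string here is invented, only the technique is from the sample, and real cmd quoting rules are subtler than this:

```python
# '^' is cmd.exe's escape character: outside double quotes it is consumed
# during parsing, so a caret-riddled command runs fine in a command prompt
# but is a syntax error to a JavaScript engine (where '^' means XOR).
# This command string is invented for illustration.
obfuscated = "cmd.exe /c w^scr^ipt //B //E:JScript %TEMP%\\dropper^.txt"

def strip_carets(cmdline: str) -> str:
    # Rough model of cmd's unquoted parsing: drop the escape characters.
    return cmdline.replace("^", "")

print(strip_carets(obfuscated))
# cmd.exe /c wscript //B //E:JScript %TEMP%\dropper.txt
```

The payoff for the attacker is that the caret-riddled string looks like garbage to naive scanners and can't be evaluated by anything except the interpreter it was written for.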
I'm like, that's dumb, just do it. You have code execution on the box. So anyway, so once the malware runs, in this case it was the bandage, I don't know, bandage, or was that what it came up with? So it encrypts the disk and then everyone's screwed and everyone cries. So that's it, that's how Neutrino works.
about this until right now. Alright, any questions?
No, I didn't watch it. I was hoping no one would notice that. Thanks, dick. Hopefully it was washed. I don't know. Went in Vegas, right?
Yeah Ned Ned.
Good question. So the JavaScript is running through the interpreter that's inside the Flash plugin, and I don't know if that particularly adheres to the same settings that you have in your browser. Anyone catch that? He's saying, when the JavaScript's running in the SWF file, if you have JavaScript disabled, or it pops up and says, are you sure you wanna go to this place, stupid? Would you actually see that? The engine is technically different, but I don't know if they tie it back to those settings, so I don't know.
That's something good to check though. I'm gonna follow up with that. That's a good question. I like your question, Ned. Yeah? So this looks like it's all in Flash, so why is it depending on IE? Damn good question. He said it looks like it's all Flash-based JavaScript stuff, right? I threw that in for you. So why does it depend on IE? Well, first off, the initial test depends on IE up front. But Neutrino also, I skipped that because we don't have time for all that, it does do some checks on the host. I forgot to show you that. By the way, it does enumerate your host. It looks for ESET, it looks for Malwarebytes, it looks for FFDec, the Flash decompiler, which is funny. It checks to see if these things
are all installed. There's a whole list of it, but again, timing-wise, I forgot to show you that. That was even your feedback earlier. Oh, your buddy's gone. Anyways, but yeah, so, I forgot that you asked. Oh, why is it dependent on Internet Explorer? Well, there's two VBScript exploits that are specific to IE, because only IE runs VBScript, right? But for the Flash-based ones, I don't know, because in the Flash-based ones that I analyzed, nothing in there was solely dependent on IE, so I don't know. Yeah? Sure the hell does. How much time do we have, like a minute or two here? Yeah, we have two minutes technically. Look, this is the part that I didn't show you
guys yet. So there's JavaScript, I jumped to the shellcode because of timing and stuff, but over here, actually over here, and there, look, it even says debug me. Okay, I will. And then we open up this, and we hit that. So this JavaScript executes right before the final exploit fires off, right? And what this does is it fills this variable v, and then we can just be nosy and take a look at variable v. And if we look at variable v, this is what it's looking for. It looks for VirtualBox, it looks for VMware Tools, Fiddler2, which is just web-debugging proxy stuff, Wireshark, FFDec, so the program that I use for my analysis, if it's installed it won't
run. ESET, Antivirus, Bitdefender, there you go, there's a list. So.
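That check is essentially a blocklist lookup. Here's a minimal Python sketch of the idea; the install paths are representative guesses, not the kit's actual list, and the real kit does this from JavaScript inside the SWF:

```python
import os

# Tools the kit refuses to run alongside (names from the talk). The
# install paths here are representative guesses, not the kit's real list.
ANALYSIS_TOOLS = [
    r"C:\Program Files\Oracle\VirtualBox Guest Additions",
    r"C:\Program Files\VMware\VMware Tools",
    r"C:\Program Files\Wireshark",
    r"C:\Program Files (x86)\Fiddler2",
]

def looks_like_analysis_box(paths=ANALYSIS_TOOLS) -> bool:
    # Bail out if any analysis tool is present. This is exactly why
    # installing one of these tools works as a cheap vaccine: the kit
    # sees it and refuses to fire.
    return any(os.path.isdir(p) for p in paths)

if looks_like_analysis_box():
    print("analysis environment detected -- not running")
```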
Yeah. I didn't even think of it, that's pretty simple, huh? If anyone didn't hear that: to protect yourself, just install Wireshark. So yeah, it's going to enumerate, see it's installed, and be like, well, I better not run. Try and do that in your organization. Yeah, right? Just push it out via SCCM or whatever the hell you're using. All right. We're pretty much at time. I tell you, I got the double stop now, which means shut my mouth. All right, gang. Thank you very much.
and remain as safe as possible. Our next speaker today is Ryan Lackey. He's got several ventures and startups in InfoSec. Let's give Ryan a round of applause. Thank you.
Thank you very much for the introduction. So I'm going to talk about how to travel to high-risk destinations as safely as possible. And you notice I didn't say safely because it's never safe. There's always risk. There's always countermeasures. There's always trade-offs to make. So just try to minimize your risk. Know what the risks are and then be able to minimize them. Yeah, so quick overview. We're just gonna talk about the different, the relative risk of travel versus everybody's normal security issues that are day to day. And some tools, techniques, and procedures that'll help. And then there's some things that I've used and other people have used that have worked really well and things that haven't worked very well. and some research areas. There's a whole bunch of cool things
that could be done in the future, some of which I'm working on, some of which I think are more community-type things. And then there's some things where people more from the activist or voter or educator perspective can help, and also opportunities where people that are either a CISO or otherwise in charge of security policy can have an influence within their own organization. So real quick: I'm a cypherpunk from the early 1990s. I found the mailing list when I was like 12, and it's the most life-changing experience when you're a 12-year-old kid, learning that if you build the right kind of math, the biggest organizations in the world, governments, militaries, can't break it. It's a completely amazing thing. Then I did this offshore data haven in the
North Sea called HavenCo. It was a crazy World War II anti-aircraft fort that turned into a pirate radio place, which then turned into an offshore data center. We were trying to host all sorts of content that wasn't permitted in other countries, and we wanted to operate under a very different set of laws. And then a completely different thing: I went to Iraq and Afghanistan for eight years and worked as a defense contractor, setting up satellite networks and working in hospitals and things like that, which was very, very different. Then I started a trusted computing startup called CryptoSeal that got bought by Cloudflare in 2014. Worked there for two years. And now I'm doing a new startup that's in this general space,
but this is not a sales pitch. It's just generally about the tech. So what's actually important about being able to speak about this is: I've traveled to about 100 countries and territories and other places around the world. I travel probably 30 weeks out of the year. I'm a frequent traveler in a bunch of programs. I travel with lots of computer equipment, which is exactly what you don't want to do for safety. And I'm on a bunch of interesting lists, by virtue of having done defense stuff, so I'm on other people's lists, and then doing other crazy activism stuff, so I'm on others. Yeah, so I get to have fun. So just a quick show
of hands, who is actually from Las Vegas? Who traveled to get here other than the people from Las Vegas? Yes. That's why travel's special. People travel for professional conferences, for vacations, all this stuff all the time. And who traveled from the United States? Who traveled from not the United States? That's also a lot of people. Exposure to multiple jurisdictions is what's special. If you're in one place, you have one set of laws usually that apply to you. If you cross the border, then you have at least two sets of laws. And what's actually crazy is you have a different set of laws at the border itself from both the losing and gaining country. So there's
all sorts of special laws there. Plus you're away from your own offices, your base of support, everything else like that. It's out of your ordinary experience. You don't travel on a specific route every single day, so it's different each time. And the people who travel, especially to conferences like this, are among the most interesting targets to attack out there. B-Sides is actually probably a bad target market to attack because we defend ourselves. But if you went to, say, a journalism conference where they're talking about legal rights for migrant workers in Latin America or something, those people do not have any particular technical resources to defend themselves. Plus they have very serious adversaries.
So conferences like that are great targets to attack. And then there's always changing and evolving threats, because they're not something you see every day. You have different ones that pop up all the time. So, like, what's new? Why do we care now? Government people have always had to worry about this stuff. There's a travel briefing, pre-travel, everything else you have to do when you go to certain countries. I've never worked for the IC, but if you work in the intelligence community, they have even more stringent requirements. But those people have the resources to defend themselves and have known about this for a long time. However, now economic espionage means that
everybody's a target. You can be a company employee, you can be a university person who has interesting property that you're working on, like a chemical engineering person that's working on a new paint pigment or something, all sorts of crazy stuff like that. And that totally changes the profile of the people that are targets and the resources they have to defend themselves. And then you have countries that, because of terrorism or the threat of terrorism, they're throwing out all the rules. There used to be in intelligence this concept of like Moscow rules that applied when you were in Moscow if you were a spy and you knew you were part of that set of people. Internationally,
if you're like the son of somebody in the Middle East and you're traveling to wherever, those rules might apply to you even when you come to the US. So it's kind of crazy. And people travel all the time now. So this is a huge problem. And the other crazy thing is how much people carry. I travel with probably like 10 devices when I travel domestically. And those are all targets, and they're all good to attack. Certain kinds of travel are a lot more risky than other travel. International travel across borders is super risky compared to other travel. Travel where you're invited to come by somebody else is risky too, like when somebody posts a job offering, or says come to us to buy advertising for your
torrent site in a nearby country that has more favorable laws, things like that, because the attacker knows you're there, and you have to assume the attacker's the one who initiated the request to travel. And there's also a weird trade-off here: if there's certain travel that you do all the time, you're a lot more familiar with it, but it's also more predictable. If you do something for the first time, you don't know much about it, but the attackers don't know as much to attack you. So there's this other concept of individuals that are themselves high risk. There's a location-based issue that makes you high risk, but there are also people that are high risk. So some people, by virtue of their own identity and what they've
done in the past, are super high risk. There's this zero to Snowden concept. You wanna be really, really close to zero, not really, really close to Snowden in terms of how much of a target you are. Those are usually because of things that you've done yourself, and so you know that this has happened. There's people who get the four S's on their airline boarding passes all the time and get detained in customs. They mostly know that they're a target. Then there's people who, by virtue of somebody that they know, either a family member or somebody they associate with, becomes a target. Employment, if you work at a company or if you work at a, even
in cases like we saw in Belgium, if you worked at an ISP that happened to provide transit to target networks and target countries that were very attractive, the sysadmins within that company became targets. So some of those people are targets on their own. And of course, source and destination matter. If you went to Turkey, even like three months ago, even as a transit passenger, you would get marked as a special person and bad stuff would happen. And then, of course, some people have a routine history. So this is unfortunately a hard problem. As I said, as safely as possible, not safe. If you're a super target, like if Snowden wanted to fly
from Russia to here, there's nothing I can tell him that's going to be helpful to protect any computing devices he takes with him, aside from the one most important piece of advice, which I'll get to in a little bit. But there's no silver bullet here that's going to solve all this stuff. Lots of variables. And, so this is more if you're responsible for a team or a company or an organization: the users that travel a lot are usually very senior people or very rainmaker-type salespeople or whatever else. So they're going to not really follow the policies that you tell them. They're going to do crazy stuff like set up their
own mail server in their bathroom or something like that. They're the kind of people that would do that kind of thing. They're not the people that will check off every box. So it's a problem. And you have to be aware of that when you're doing training. And because they're away from you, You can tell them something, but if they can get away without doing it, they're gonna not do it. If you tell them, like, don't connect to local networks or whatever, it's very difficult to enforce that policy if you don't have a technical control against users that just wanna get their job done. And stuff changes, so you don't really know in advance all the
stuff. So, as I said, there's nothing I can do that's really gonna help Snowden, aside from telling him not to travel or not to take any computing devices. There's certain people that are out of scope. I would say I can't really help anyone on the government personnel side, because the crazy thing about government security is that it defines both a ceiling and a floor, which are exactly the same level, for how secure something has to be. You must comply with the policy, but there's also not a whole lot of motivation to exceed the policy. So just follow your policies and that's about as good as you're gonna do. People who are extremely high risk like Snowden, yeah, not
really gonna be able to help. And then people who are very low risk, I think it's more important that you just, if you're not a target, if you're not likely to be a target, you should just do your regular security stuff. That'll work better. They're not really terribly attractive. But there's a Goldilocks region of people that are just enough of a target to be interesting for travel, but not so high that they can't be defended. So that's the Goldilocks region. We have to think about what factors influence this risk, how specifically they're targeted, and all these things.
There are certain places that are just known to be targeted, high-attacker environments, say certain cafes that are known for it, whatever. So there's various things like that. If you're part of an organization that's a target of interest, things like that. So you'll know how specifically you're targeted. If it's something where you think you're a target because you're part of a huge organization, you're gonna get one level of effort. If you're a target because there's like three or four people in your tiny group or whatever, it's gonna be a different level of targeting. The other idea here is that the people attacking you are going to use different modalities. You want to make sure that if you are vulnerable to something, you're vulnerable to something that
is either less common, or, another way of thinking about all this: if it became known that you got hacked through that particular technique, how embarrassing would it be? If you got hacked because of a zero-day and all sorts of stuff, you're still gonna keep your job and nobody's really gonna call it incompetence. But if you get hacked because you had no SSL on your login page or something, that's a different thing. Then, how intrusive are the techniques that the attacker's putting up against you? I'd say there's sort of a progression: mostly passive network monitoring, which you can just assume everywhere on the internet, up through active network attacks where
they inject traffic into your specific machine, or remote network-hacking type stuff. Physical access, which is one of the things that travel gives them a lot more opportunity to do, where they can non-destructively image your machine or something like that. Hardware implants, and then the evil maid problem of multi-round physical access. So as those things get more intrusive, they're less likely to be blanket and more likely to be targeted at you specifically. So you should make sure you're resistant against all the things that are sort of in the environment, and it's less important to defend against things that are super targeted, which are less likely to happen for most people. And another huge element of this is, once you get compromised, assuming you do get compromised, how bad is
the compromise? Like how long is the compromise going to last? The ideal is that only your current working set of data is what's compromised. That is very challenging to achieve if you don't plan ahead. In a lot of cases, if you carry your laptop into a country, you're going to get compromised. Everything that was on that laptop is going to be potentially known to the attacker. And that's a problem. But the real target of a lot of these attacks, not so much the automated, low-effort, wide attacks, but the active network attacks, is future and ongoing system access. So they'll attack a machine purely to install software on it so they can then attack other networks that are
more protected. That's a particular area of interest that we'll go over in a little bit. And then of course, you want to see how resourced the attacker is and how seriously they want to attack you. I think it's an open question whether you'd rather be targeted by a super awesome organization that doesn't really care much about targeting you specifically, versus a less capable organization with extreme interest. I think the things that don't scale, like physical attacks or targeted implants, are more likely to come from the less capable organization that really cares about you, because people are the one quantum unit of size. A very capable organization will put a lot of blanket stuff in there, but they're still not going to have an order of magnitude more, or many orders of magnitude more, people
dedicated to you. And then, of course, the consequence of failure. Another element of risk is how bad is it to be compromised? If you're a journalist or something in Latin America, maybe your sources' lives are at risk. Whereas for me, in most cases, it would be keys that belong to customers or things like that that might get compromised, or commercial harm. And it's really hard to value lives versus money, but it's definitely something you have to put in there. And then, how much resources do you have to defend? If you're a government, you have a lot of resources. I think the people that actually have the best resources are people that build their
own platforms, say like Apple, Google, people like that that own the operating system, own the application, and are very familiar with how everything works and can modify things. The resourced enterprise, by contrast, is buying stuff off the shelf in most cases and implementing policy, and they're probably less capable of defending themselves than the people building the platforms, and then there's everyone else. And then the crazy problem here is that the people that are the most targeted are often the least resourced to defend themselves. So it's sort of unfortunate. Degree of exposure, all this stuff, is
more for if you're managing a group of people. If you have a lot of users and if they travel frequently and if individual people out of that set travel infrequently, those people are at particular risk. If somebody is a salesperson who travels all the time, they're going to be a lot more familiar with this. If somebody's first trip to a place like China, it's going to be a bigger issue. So we've gone through that taxonomy. Let's think about the risk of specific cases. If I were going to go to North Korea, one, I'm not allowed to bring a lot of computer equipment. Two, I assume they're not super resourced, but they are very motivated to attack anybody. And anything they want to do legally, they can do.
So I would probably say that's way too challenging a security environment for me to try to advise people on what to do. And out on the low side, like domestic travel within the US or EU, that's basically your day-to-day environment. Maybe a conference is a different environment, but if you're just going to do regular stuff out of your office, that's kind of boring. In between, it's sort of unclear. Active conflict zones like Syria: I've been to places like this. I was in Iraq and Afghanistan when they were exciting. I guess they're still exciting. They're exciting again. Those are interesting because the adversaries you face are willing to do a lot of stuff that's very bad. And if you were compromised and somebody took your laptop and was looking for
evidence that you're part of the U.S. government or something, they're going to err on the side of doing something very bad to somebody that's innocent versus letting somebody go. So it's a dangerous environment. That might be too high a risk to really deal with using current stuff. U.S. to Russia, depending on who you are, yeah, unclear. And EU folks visiting the U.S., I know a lot of people, including people from the security community, even in the last couple days, have had negative encounters with customs and immigration. But I still think it's a pretty low risk. The U.S. is mostly a law-abiding country with respect to customs and immigration. So it might not be so exciting. However,
there's something in the middle that's perfect for all this stuff: Western people going to China. It's interesting because the targets are commercial: they're interested in your IP or in access to your networks. They're not interested in imprisoning you forever to find out about some spy organization or something. As long as you don't get involved in domestic politics in China, you're pretty much okay. And they're pretty law-abiding: there are questions about how China treats its own citizens, but in terms of international stuff, they're pretty reasonable. And a lot of people go to China for business. It's the place you build everything. It's a great
country, a huge country, all sorts of other stuff. And it's actually fun as a defender because they're technically sophisticated, so you're gonna see really exciting attacks. Back when I worked at Cloudflare, we saw some of the most amazing network attacks coming from a large country in Asia that were incredibly
advanced and very adaptive. So as an adversary, they're really great. And you can judge how good a defender is by the kind of adversaries they fight against: if you're going up against somebody who's very good, you're going to raise your game. So China's a good case to focus on. My real goal here is to avoid special treatment. You want to be part of a large set of people. You don't want to do anything that really sets you apart as a super-paranoid person. Blackphone had a lot of
ideas about what they were doing and a lot of cool stuff. Unfortunately, carrying a Blackphone in your hand at customs identifies you as somebody who really cares about privacy and security, and that's not necessarily something you want to communicate to the immigration and customs person in front of you. I want to both resist attacks and not be embarrassed by the kind of attacks I fall victim to, if I do fall victim to something. I want to make sure I'm completely safe against passive network attacks and routine network attacks. If somebody gets repeated evil-maid physical access to my hardware, that's bad, but it's
not the end of the world. I worked on something a couple of years ago using nail polish over the screws to seal a laptop. There are countermeasures you can build against hardware attacks, but they're a different category, and for an attacker they take a lot more resources: they have to put a team on you and everything else. And the other thing is, if you're targeted and they're in your room doing their stuff, you don't want to piss them off. We have the advantage here that they're gathering a bunch of information that might be attractive to them; you don't want to become a special project or antagonize them such that
you might get more attention than you otherwise would. And I want to use technology here, not crazy amounts of effort, because if I go to China, I want to actually get work done. So we've got some techniques that can actually help.
There's actually a lot of overlap with just good conventional security. If you take a machine that's unsafe in the US to a higher-threat environment, it doesn't magically get secure, so you want to at least be secure in your baseline state. There are many, many conferences and guides about that. I'm also generally talking about this from the perspective of somebody who goes on a trip from a safe place to a higher-threat environment and then comes back. If you're going to permanently operate in a higher-threat environment, you can't make some of the same assumptions,
so it becomes a harder problem to solve. It really has to be a finite trip anchored in a safe place. There's an argument that no place is safe anymore, but yeah. So the first step is to minimize your threat surface. I travel with like 10 machines, which is a very bad practice when you're going somewhere high threat. Limit the amount and the variety of equipment. It's certainly better to have ten MacBooks than to have every possible operating system and every possible version. It's the same issue you have on a regular network: if you have a very diverse user pool, the lowest
common denominator is what gets hacked, not everything else. So one of the things organizations do, particularly to defend against these hardware attacks, is keep a dedicated pool of travel hardware. In the US they might let you use whatever computer you want, but when you travel, you have to pick from a specific set of machines. There's a little more about that later. You definitely want to prepare this stuff in advance. It's crazy to take a laptop out of the box in the US, go to China, and then suddenly try to auto-update everything.
One, a lot of stuff doesn't actually auto-update. In other cases, updates can be intercepted and modified. So you want to prepare in advance. And
don't go to your security people, or be the security person who finds out, with a couple of hours' notice that somebody's going to travel. You want as much advance notice as possible. You definitely want to harden your systems. There are some great security guides: Filippo from Cloudflare, the Grugq, and some other people have done operating-system-specific hardening guides for mobile platforms, for desktop operating systems, everything else. Implement those policies in advance, and plan ahead as much as you possibly can. The other thing is to minimize data. The idea is that if you're compromised while you're there, the only thing that gets compromised is the computer itself. You don't want to carry your six years of email
history on your laptop, so that if it gets compromised, including being told, decrypt your laptop now and give it to me, or else you can't continue your trip or can't leave the country, you're not in a position where all the data is there. The best practice is not to travel with any actual data that belongs to you. It's an expensive machine, but I have no problem turning over a machine that only has a vendor operating system and some security tools on it, because it has no data. I'll just go buy another machine, which
is unfortunate, but whatever. If it has all your data, it's very bad. Sort of related: you don't want credentials on your machine that can get access to other systems. One of the main goals of these attacks is gaining access to systems they can't otherwise attack, either by taking data that's on the machine, or, if you have a very locked-down network, by attacking an endpoint while it's in country, which is a very attractive thing to do. If somebody can steal an SSH key that's deployed across all the production infrastructure back in the US, that's probably not a good outcome. I've worked at companies that sent some of the most senior sysadmins to these countries to do this
kind of work. They did need work access, so there is a trade-off, but you certainly don't want long-lived credentials that work on everything. If anything, you'd issue a short-lived credential that can be revoked after the trip. And don't bring anything with more privilege than you need; be very, very cautious about that. Another option is to buy hardware while you're there, which has positives and negatives. With travel pools of laptops, much like conference laptops, there are two schools of thought. One is to get something that's really locked down; you put a lot of effort into locking it down.
Two is to buy something completely off the shelf with nothing special on it, use it, don't trust it at all, and assume it might get hacked, but it has nothing on it of any value. That can work really well. Unfortunately, it requires a lot of user training. It's a good solution for security professionals at a conference, but not such a good solution for a security professional telling execs what to do in the field, in an environment where they're not going to be particularly monitored. Another annoying travel-specific issue is protecting personal accounts. A lot of organizations have pretty strict guidelines about
personal use of work machines. I know the U.S. government is very, very particular about personal accounts on work machines and traffic like that. In general, that's a good policy, because whatever people do personally can, one, bring liability to your organization, and two, bring legal liability for their own actions. It can also be a vector of attack if they're using it to browse some sketchy site. But if you send your execs to a place like China and their Facebook accounts get hacked while they're there, and someone uses that to map their social network and then attack other users, that becomes a problem both for
you and for the exec themselves. It's hard to recover from and it can escalate. So it's probably a good idea to give users training and guidance on protecting their personal accounts when they go to high-threat environments like this, just to eliminate that possibility. Virtualization, or some other sort of separation, is a great solution there. It's definitely something to consider when you're setting up policies. And unfortunately, people have been talking about user security training as a critical thing for probably 30 years. There are environments where user security training works exceptionally well, and environments where it doesn't. Sending someone to an environment that's new to them, where stuff doesn't
work by default and telling them they must remain compliant with policies that were set by somebody who they view as far away and out of touch with their actual needs is probably the worst possible environment to get compliance with these requirements. So the trick that I found is actually to make the secure way the easy way to do whatever they're doing. China's awesome for this because the Great Firewall actually makes it hard to do a lot of routine operations like browsing regular websites or whatever else. So the user actually really wants a VPN. They want to use something like that. So if you can make that secure, then it wins. So offering them some sort
of benefit if they do things well is a major positive. So what actually works? Some things work pretty well, other things don't; I'll go over both. For China specifically, the Great Firewall is probably the number one concern most people have, with both passive and active network attacks. It's basically an environment where you must have a VPN to get normal work done. And the crazy thing about the Great Firewall is that there isn't one Great Firewall of China: there's a different firewall in basically every province and every network operator. That's like 40 different options, and it changes pretty substantially over time. So
there's probably not a single VPN that works really great at wide scale all the time, but some of the ones advertised for China work pretty well. Certain things work well and others don't: corporate IPsec VPNs are more likely to be blocked, and a lot of the big free providers are blocked. In general, I would argue the Great Firewall of China exists to keep Chinese nationals from viewing Western sites; it's not so much aimed at Westerners visiting and doing stuff — they're more collateral damage. So there are ways to make a VPN
that works better than others. Having one with a small user population dedicated to your pool of users helps a lot. International roaming works awesome. It's a little expensive unless you buy from the right providers, but because roaming traffic gets tunneled back through your home carrier, it bypasses most of the firewalling. It's pretty great. Dedicated pools of travel equipment are, I think, the standard at basically every Fortune 500 at this point, and at any security-aware organization going to China: either dedicated pools of equipment or the buy-it-there, trust-nothing policy. It's really a pain to do that in practice. I've seen organizations
where they buy a bunch of iPhones: people go and buy an iPhone 5 from the Apple Store, and the first thing the exec does is put in their personal iTunes account to activate the phone, which I watched happen twice in front of me, which, while lulzy, was kind of annoying. So you really have to help the users with this kind of stuff. The other failure mode is that people use their oldest hardware for this kind of travel, which has problems: you're sending your very high-profile, very demanding users out with your oldest, crappiest hardware. They get angry at you. It might break.
It kind of sucks if you saved like $100 by cheaping out on a laptop and it breaks while a user's in Beijing for two weeks. It causes problems. Another option is to take a regular machine that's enrolled in all your systems and dedicate it to travel, but then it has access to your regular systems if it's compromised. And if you just go to Best Buy and buy the cheapest random laptop, it might not be what the user's used to, and it probably includes a bunch of vendor crapware or outright spyware by default. So it doesn't work particularly well. There are a lot of problems there.
If you can manage a dedicated travel-pool hardware program, that works great, but it's work. And then tools. It's crazy, but Snapchat, if it weren't blocked in China, would actually meet a lot of the requirements for a great security application: it doesn't keep any history, it has transport security encrypted by default, it's on a relatively secure platform, everything else. Tools that are non-permanent or ephemeral by default are a great idea, so maybe push your users toward chat systems that delete their histories, things like that. If you were setting up a mail provider just to work in China, you would set a document
retention policy with a very short period, just so the data is not there. So those are the things that work. Unfortunately, there's a lot more stuff that doesn't work. Special hardware: this is the route some people have taken, and special hardware that's advertised as super-secure government stuff is not good. People who would go to war zones like Syria were very, very particular that they had to carry the same stuff everybody else in the environment had, because just carrying the crazy stuff would make you much more of a target. If you had a crazy satellite phone or whatever else, you were
clearly somebody special, so that's not good. Chromebooks I initially thought would be an awesome solution here, because they don't store a lot of data locally, everything's encrypted, and they're super secure by default — probably the most secure desktop operating system today out of the box. Unfortunately, for technical and business reasons, Google makes them depend heavily on Google services, and when Google is blocked, which it is in most of China, the stuff doesn't work very well. Another idea was desktop as a service. That's great because the data's not on your laptop, it's on a remote machine. Unfortunately, you're on a lot of pretty mediocre networks with latency, jitter, low bandwidth, everything else. And in a lot of cases, you're
doing desktop as a service from China back to the US, and then you've got even more latency, so there are a lot of issues with that. Then certain providers like Google, Facebook, and Twitter are blocked, and the key thing is that so many things depend on those that a lot of other stuff breaks, so you definitely need to use a VPN. A lot of public VPNs are blocked, and some corporate VPNs are blocked too. And then there's a whole category of stuff that's a good idea but can fail fairly often. Full disk encryption: I'm certainly not gonna argue against full disk encryption, but if you have full disk
encryption and a password you can use to unlock it, and somebody at customs says you must unlock this, then depending on the country and the context, that may be a request you have to comply with. And yeah, that sucks. Secure messengers: in Turkey during the attempted coup about a week and a half ago, security people would stop a bus and say, show me your WhatsApp history — they use WhatsApp a lot there. There's only one copy of WhatsApp on your phone, so they know whether you're telling the truth, and if you say you're not using WhatsApp, they'll just look really quickly. So they have a good way to
verify that they're actually getting the data they want. Then there's stuff that depends on user actions. There are all sorts of crazy ideas about setting up telltales in your room, keeping your laptop chained to your wrist the whole time, and so on. Security people might do that; corporate execs are not going to, especially when you can't enforce the policy. So you need to make sure this stuff works in a way that's easy for them. And the other thing I had never really thought about is: if you leave for a three-week trip to some
place and realize that your system is in some way compromised, you might not have a better alternative. At home you can just grab a new machine from stock and re-issue it; in the field, you might be stuck using a machine you think might be compromised — and it might not even be a clear-cut case — and you might have to advise somebody to keep using it. So it kind of sucks. As we've seen, this whole problem exists today and there are no great solutions: lots of solutions that can help, but nothing perfect. So it's a great opportunity to build stuff that works better. I'd say there are probably five categories of things that can be built. A few of these things
I'm working on; a lot of them are things industry needs to work on to solve this problem for people. The number one area is better VPNs:
VPNs that work particularly for people in China or people who are traveling. Right now there's a split between public or free VPNs and corporate VPNs for a dedicated small pool of users, and that's definitely a good split. If the censors are going to prioritize which endpoints to block, they'll prioritize based on how many users an endpoint has or how attractive those users are. If it's just your corporate user base, or you run multiple VPNs, it's not a terribly attractive target unless your company is specifically targeted. So it's better to have something dedicated. The other thing that's worked particularly well has
been having fallbacks: you give people four or five different network options, and if one doesn't work, they fall back to another. Usually you split that up by protocol — a UDP-based OpenVPN, a TCP-based VPN, different IP addresses — so that you can degrade gracefully when one path doesn't work. But I think there's a lot of work that can be done in the protocol space to make this better. There are providers doing WebSockets within China, which would be an interesting way to transport data out.
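As a concrete sketch, that fallback pattern maps directly onto an OpenVPN client config: with multiple `remote` lines, the client tries each endpoint in order until one connects. The hostnames and addresses below are placeholders.

```
client
dev tun
# Tried in order; mixing protocols and addresses means a block on one
# path degrades gracefully instead of failing hard.
remote vpn-a.example.com 1194 udp
remote vpn-b.example.com 443 tcp
remote 203.0.113.10 1194 udp
# Give up on an unreachable endpoint quickly and move to the next one.
connect-retry 2
connect-retry-max 3
resolv-retry infinite
```

TCP on port 443 blends in with ordinary HTTPS traffic, which tends to survive longer than a well-known VPN port.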
The Tor pluggable-transports people have done a lot of work on the circumvention side. The problem is that most users running Tor 24/7 is not really a good solution: too much latency, too much everything else. So there are probably things that can be taken from that work without using Tor itself. And one thing I worked on a while ago — really more the Grugq and some other people — was the idea of using a hardware VPN appliance to protect multiple machines. There's good and bad to that. On one hand, it fails the no-special-hardware rule and calls you out as a security-conscious person; on the other hand, a router is actually a really convenient thing to
use, so there's a trade-off. I'd say it's probably a good idea if you can pass it off as a convenience or productivity device rather than a security device, but I wouldn't carry some crazy dedicated VPN appliance. So for VPNs there are good solutions, but not great ones. Desktop as a service is an area that does not work well from China at all, but could. There's a lot of R&D that could be done here to make stuff that works really well. The fundamental problem is the network doesn't work very well. And the desktop as
a service stuff works at a very high level, not tailored to the individual applications, so it's much more network-intolerant than it should be. If you were to build applications that were more
efficient in how they used the network, you'd have a lot better luck. Then there are the two hardware devices: the laptop and the phone. I think the holy grail here would be some sort of disposable laptop. Unfortunately, this is a $2,500 laptop, and the cheapest laptop you're realistically going to get somebody to use is like $200 or so. And $200 for a company is disposable; $200 for a journalist going to Latin America is probably not disposable, unfortunately. So figuring out a way to make hardware more inexpensive has a lot of benefits here. If I knew my machine were a target of an attack, I would
probably replace it; I would not replace a $2,500 laptop on a routine basis otherwise. So as the hardware gets cheaper, this gets better. The key thing laptops need that they don't have is a way to go from the operating state in the field back to a known-good state at any given time, with a guarantee that it's really that state. The problem is that basically all machines today keep state in many, many places: there's the main SSD, but every other component on the device also has firmware, some sort of flash, everything else. So you can't really tell if a machine is being returned to
exactly the same state each time. Plus, there are hardware modifications that can be made — a lot of buses that are easy to modify, chips, everything else. It depends on how much effort they're going to put in, but it's a problem. So maybe some sort of hardware that's more tamper-evident. The fewer components, the better, which also helps with cheaper. We're moving in that direction in general, but there isn't a real effort to make laptops as tamper-evident as possible. The crazy thing is that also makes them less repairable, but whatever. The reason travel is actually interesting for a lot of high
security stuff is that users might not accept a little pain on their day-to-day machine, but if you tell them they just have to deal with it for two weeks on a trip, for security reasons — plus the baseline case is that their expensive laptop just won't work in that place — it's easier to get them to accept cool stuff. So that's good. Then phones, the other big area. Phones are cheaper than laptops: a good smartphone is maybe 50 bucks. Still not super disposable, but definitely getting closer. They're great in that you can keep them with you all the time. But there's baseband risk: you've got a second processor in your
phone that could be attacked by your network operator. That's maybe a crazy thing to worry about in your own country on your own network operator; it's not crazy at all when the network operator is controlled by the state, and the state's intelligence organization is potentially your adversary. So that's a problem. Another problem with phones is they don't have virtualization. With the Turkish bus problem — there's one copy of WhatsApp on your phone — it's pretty easy for the attacker to say, show me that WhatsApp. If you had some way to hide that state, it would be better. MDM: you definitely want MDM in a corporate environment for all your devices
in the US. But MDM doesn't work so well when you don't have great network connectivity to the devices and users are popping SIMs in and out and so on, so there's room for something that works better in that context. The other real pain, just like with laptops, is getting devices back to a known state. Phones are a lot better at going from a usable state back to the same consistent state each time, but doing that in the field over a network is still fairly hard. In a lot of cases, especially in the iOS environment, it depends on connectivity to Apple.
So the iTunes and iCloud backup stuff doesn't work. Plus, for good reasons, a lot of security data is not included in backups, especially network backups. So telling somebody to wipe their phone regularly, like every time they cross the border, runs into the problem of having to set up Google Authenticator again each time, and all sorts of other stuff. There's a trade-off there. A better system for remotely imaging a phone and restoring it to a known-good state would be great. It's possible to build all this stuff today, but it's harder to know, if you're deploying to 20
or 100 or thousands of users, that it actually works, for two reasons. One, it's hard to do for a very small set of users. Two, it's very hard to see whether it's working, which has the security problem that if it fails, you don't know it's failed, and you might be compromised. So it's a challenging problem, and you want something that's more visible. These are all interesting areas; I'm working on a couple of them, but a lot are open. So I have two calls to action. One, I think it's crazy that this takes a huge amount of effort, and I don't think we should have to deal
with it. People traveling into the US, and into relatively stable countries generally, should not have to worry about the government imaging their hardware, modifying it, doing all this crazy stuff. There should be a regulatory regime that protects visitors from this kind of invasive search if they're not suspected of a crime. That's a huge political challenge — a difficult problem to solve, but it'd be great to solve it. I'm not too optimistic about that. What I am optimistic about: when this stuff happens to people, the user has every incentive not to go public about the attack, because as a visitor to a country, you can in
almost every case be rejected from that country or banned in the future for making any sort of complaint. But citizens of the destination country, when they find out about these kinds of abuses, should publicize them and take action, because they're in a much better position to do so. And then, if you're a security officer managing a pool of users, you really need to identify that this is a problem and deal with it. The baseline is to look at where your people are traveling,
and how much of a target they are. If they're going to places like China or Russia, dedicated laptop and travel-pool infrastructure makes sense. You're already going to need VPN infrastructure for your user population, so do that. And then user training specific to travel: you really need to know your user population and your target countries, but get feedback from people who have gone there previously — which networks work well, which tools work well, things like that. It's an interesting problem to solve, and I'm definitely happy to talk about specific countries, travel, and organizational things. Otherwise, if you're interested in this stuff, I'd be super interested in
talking with you. Cool. Thank you.
I think we have about 10 minutes for questions. Yeah. We have some time for some questions. I missed some things moving around. So anybody have any questions? In the front? I'll repeat your question. Please bring a microphone,
one second. Testing, testing. Microphone's off. Oh. Cool. OK, so I have kind of an off topic question. I hope that's OK.
I travel to hacker conferences a lot.
presentation on DYODE — Do Your Own DiodE — a DIY, low-cost data diode for ICS, industrial control systems. Without further ado. Okay, am I good to go? Do you hear me well? Okay, so.
Okay, is it better? I'm just trying to speak louder. Okay, so thanks for attending this talk. I'm gonna talk about a project I created with one of my colleagues, Ary Kokos, who unfortunately is not here today. The project is called DYODE, which stands for Do Your Own DiodE. The idea is to create a low-cost, do-it-yourself data diode aimed at industrial control systems. It can be used for other things, but it was designed for industrial control systems. Before I start, just a few words about myself. My name is Arnaud Soullié. I work as a senior consultant at Wavestone, which is a consulting company. I mostly do penetration testing, and I've been doing that for about six years now. And I started
working on ICS security, I would say, four years ago — like everyone else, after Stuxnet.
I also do a bit of research, hence this project and this talk today. My interest is in security, our Windows Active Directory security. I gave a talk about that a few years back in France. And also SCADA security workshops. So I'm doing, let's say, a one-on-one session to introduce SCADA security to IT people. I will do that tomorrow morning here at the B-Sides and also on Thursday morning at Defcon. also like wine testing and motorbike riding, which is not in the scope for today. Okay, so we're gonna start with what I would call an ICS crash course. The idea is just to give you the required knowledge to understand what is an ICS. So let's start. Where can we find ICS? So it stands for Industrial
Control Systems. So these are, let's say, the systems used to create stuff: in manufacturing plants, in power plants, in building automation, water treatment, also in the pharmaceutical industry. For example, let's say you want to make pharmaceutical drugs; then from the biological steps, to putting the specific liquid into the vials, to the packaging, everything is controlled by specific systems that we call industrial control systems. You can also find these in what you can call critical infrastructures, which means, let's say, electrical plants, dams, the nuclear sector, things that can really go bad if they are not secured correctly. Okay, let's continue. This is a very, very simplified network diagram, just to introduce the components that we can find
in an ICS. So let's start from the right. Here you have specific devices that we call sensors and actuators. A sensor will simply give you feedback on the physical world: for example, temperature or pressure. And you have just the opposite, the actuators, that will perform an action in the physical world. These devices are controlled, let's say, by electricity: for example, if you have a motor and you apply a specific voltage, it will start spinning. And to control those devices, we use what are called PLCs, which stands for Programmable Logic Controller. Those are, I would say, tiny computers with real-time operating systems, and their specificity is to have electrical inputs and outputs. To take really the most simple example, you can wire a switch to the inputs of the PLC and wire a light bulb to its output, and then you can program the PLC to switch the light on when you flip the switch. Of course, that's a silly example, because you can do the same without the PLC, but the advantage of using the PLC is that if tomorrow your industrial process changes, you just have to reprogram the PLC; you do not have to rewire everything. So that's why we use PLCs.
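To make the switch-and-light-bulb example concrete, here is a hypothetical sketch of the scan cycle a PLC runs, reduced to plain Python. The function names are invented for illustration; a real PLC runs this loop in a real-time OS against electrical I/O:

```python
def logic(inputs):
    """User program: the light simply follows the switch."""
    return {"light": inputs["switch"]}

def scan_cycle(inputs):
    # A PLC loops forever: read inputs, run the logic, write outputs.
    # Changing the process later means changing logic(), not the wiring,
    # which is exactly the advantage described above.
    outputs = logic(inputs)
    return outputs

state = scan_cycle({"switch": True})  # flipping the switch on turns the light on
```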
Sometimes you may also encounter the term RTU, which stands for Remote Terminal Unit; basically, it's a standalone PLC. So up to here we have electrical connections. From here on we have network connections, and to program the PLCs and control them, we use what we call the supervision network, or the SCADA network. Actually it's not so precise to say SCADA, but it's the most used term nowadays. On this part of the network you will have, let's say, basically Windows operating systems, standard workstations and servers. And the people working in the plants and factories will be in front of workstations; they will check that everything works fine in the process and click on some buttons to perform actions in the physical world. Then, this
supervision network is always somehow connected to the corporate network, which is somehow connected to the internet, because you have to read mail and then go on YouTube. Okay, so that's basically a simple network diagram for ICS, and I suggest we continue by introducing the current security level of ICS. Here again, I'm oversimplifying just for the sake of introduction. The problem we have in ICS security is that when the PLCs talk to each other, or when they exchange information with the SCADA network, they use specific protocols, such as Modbus, Profinet, or S7, and they all share the same thing: a lack of security. If you are able to perform a man-in-the-middle attack on those protocols, you will be able to understand what's going on, because it's not encrypted. You will also be able to replay some commands to perform actions. But the worst thing is, you actually do not need to be in a man-in-the-middle position, because you can simply send unauthenticated commands. That means that if you're on the same network as the PLC, you can simply send commands to read or set some values. For some PLCs it's worse than for others. For example, on Schneider PLCs, there is an undocumented function they use in the Modbus protocol, function code 90, and that's what is used to program the PLC. And since it relies on Modbus, which is unauthenticated, that means that if you can reach the PLC from a network point of view, you are able to download the program that's running and then re-upload it. If you translate that to the IT world, it would be the same as: just because you can browse a website, you can change the code on the server side. So that's pretty, pretty bad. And you have to assume that as soon as you have a network connection to a PLC, you can own it. That's, let's say, the state of security that I encounter when I perform penetration tests. Now, network exposure. You may think that these PLCs are inside specific plants and factories, and so
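To illustrate how little is needed to talk to such a device, here is a sketch that builds a raw Modbus/TCP "read holding registers" request using only the Python standard library. The addresses and unit ID are made up; the point is that the frame contains no credential or session field anywhere:

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request.

    MBAP header: transaction id, protocol id (always 0), remaining byte
    count, unit id -- then the PDU: function code, start address, quantity.
    There is no authentication field in the protocol: anyone who can reach
    TCP/502 on the PLC can send this.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_request(transaction_id=1, unit_id=1, start_addr=0, count=10)
# 12 bytes total: a 7-byte MBAP header followed by a 5-byte PDU
```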
they must not be reachable from the outside. That's not really the case; you have plenty of PLCs directly exposed to the internet. That's a Shodan search I did maybe last week. You can see that there are three Modbus devices in Las Vegas. I don't know where; I did not perform any kind of test. That's not my point. My point is that you have these exposed to the internet, and as soon as you have network access, you can fully compromise the device. So that's really, really, really bad. Okay, so what can we do? Of course, we want to perform some kind of network segmentation, so we have technical solutions: we have
firewalls, we have DMZs. That's really, really great, but that's not the challenge. The problem is that, as Dr. Malcolm would say in Jurassic Park, life finds a way. And it's the same for data. If you try to isolate the ICS network, people will use USB keys, they will use Wi-Fi access points, they will tether the internet connection from their phone. So that's not the challenge. The real challenge is to be able to perform network segmentation while allowing secure data exchange. Two simple use cases. The first one: people want to perform security updates on their ICS. Not so frequent, but it happens. So there's a legitimate need to be able to transfer the updates from the corporate network or from the internet to the ICS. And the other use case is the need to export some production data from the ICS to the corporate network, to be able to design dashboards for the executive committee. A solution to that is the use of data diodes, which you can also call one-way gateways. They use light as the transport medium, and you can use some of the properties of the optical components to have a very secure connection. Why? Because when you use light as the medium, you will have, on one side, a light-emitting diode that will create light that's gonna go through the optical cable. And this light-emitting diode has what we call a PN junction. So
that's, let's say, for those of us that did some electronics, that means that electrons, they can only flow from one pole to the other and not the other way around. So the security principle is backed by physics. So in theory, it's really, really secure. That's the main point of the data diode. It's allowing communication to go only one way with a really high security level.
Okay, so why did we start this project? It's mainly because of the feedback we had during our assessments. Ary, my colleague, did a lot of consulting for ICS security, I did a lot of pentesting, and we realized that in most cases there were data exchange needs, but they were not done properly: badly configured firewalls and stuff like that. Also, commercial data diodes exist; I did not invent the concept. It has been used for decades in, let's say, the defense area, for example. So it exists, but it's quite expensive, so there's a trade-off between the cost and the security. Most of the time, a data diode will cost between $5,000 and maybe $100,000 or $250,000, depending on your needs. So, of course, if your client wants to have, let's say, a synchronization between the ICS and an SAP instance, maybe they are willing to pay $50K for that. But we encountered a lot of situations where there was a need, but not a high-availability need, and not a need for a lot of bandwidth, and so the client will not pay $50K just for this small need. That's the main problem. I have two examples here. The first one is about predictive maintenance. That's a concept in which, let's say, often a third party is able to predict what kinds of parts in your ICS will wear out, and so automatically order new ones. So that's kind of magical. And to do that, what you need
is, in this specific case, to send a 100-kilobyte file every six hours. So you see that the size of the file is not really high. If you do not send the file, the process continues to work; that's not a problem, you can send the file the day after. The second example, which I encountered during an audit in the pharmaceutical industry, was refrigeration units. They were maintained by a third party that needed to have real-time access to the PLC data in order to improve the efficiency of the system; that was in the contract. There again, if the connection to the third party fails, the system will continue to refrigerate. That's not a problem. So you have specific needs that do not justify an investment of several tens of thousands of dollars. So in those two examples, what did our clients do? Mostly, they just connected an uncontrolled third party directly to their ICS.
And that's not a good idea, because if you have a network connection, you can just mess everything up. Okay, so our project is not completely new. It's based on existing work from Philippe Lagadec, a French guy, Austin Scott, and Robert Gabriel. The idea is to use standard, commercial off-the-shelf hardware and open-source code to produce a data diode with a target cost of about $200 per unit. We wanted to have a working proof of concept, to try to create, let's say, an easy-to-use, easy-to-deploy solution, and to share the results. Also, just a note: we do not have any commercial intent with this project. It was mostly to show that it's possible to create your own device. But if someone is interested in creating or selling those kinds of cheap devices, please do so. We would be happy, and we do not want any royalties. We do not want to sell boxes; we do consulting.
Okay, so for the hardware, actually you have the hardware here. What we do for this data diode is use copper-to-optical converters to have an optical connection between the ICS and the corporate network. So how does it work? First thing: it's not possible to just use a one-way connection. It's not that simple, because most of the protocols that we use every day, file sharing using Samba, or Modbus, rely on TCP. And if you want to do TCP, you have to do the three-way handshake: you send a SYN, you receive a SYN-ACK, you send an ACK. That's not possible here, because with a one-way channel you're gonna send your SYN and never receive the SYN-ACK. So that's why
we have to use two computers. We use Raspberry Pis, because they're kind of inexpensive, and they will be in charge of performing some kind of network protocol translation, from TCP to UDP, to allow communication to flow through the diode. So how does it work? Here you have an Ethernet cable from the ICS to the first Raspberry Pi, which is connected to this box, the copper-to-optical converter, which has two ports: one for data emission and the other for data reception. The idea here is to only have one cable, which goes from the transmission port on the first converter to the reception port on the second one. Since there is no cable the other way around, data cannot go the other way around. So that's kind of simple, but in real life it doesn't work. Why? Because the first box will refuse to send data if nothing is plugged into its reception port. So that's why we use a third converter, just to simulate an active link; this box is connected to nothing. Then the data is converted back to copper Ethernet to the second Raspberry Pi, which is connected to the corporate network. That's the basic idea. Here you have a picture of the inside of the box. As you can see, it's not so messy, actually. You have all the electrical stuff on the left. You have the two Raspberry Pis, with little stickers that mean input and output. Here you have two optical converters stacked one on top of the other. Then the third one; as you can see, there's only one cable that goes from here to here. So that's the communication channel, and there's no channel going from transmission to reception here.
So just a few words about the real cost of the product. We aimed at $200; clearly, we failed. Actually, it's more like $400 if we convert euros to dollars. Why? Mostly because we wanted to have screens. As you can see, there are two small LCD screens. They're not really useful, so that was kind of a mistake; they're really not necessary. And the most expensive part was actually the $90 aluminum rack, but since everyone wants blinking boxes, we thought it was a good idea to put it into a rack if we want to be taken seriously.
Okay, so I'm not gonna wait until the end of the talk to perform the demonstration. Of course, I have a video backup. I'm seriously hoping not to use it.
Oh, wait, I got one more slide. So here, what is the setup? Consider that this PC will emulate the ICS network, and a VM on my PC will emulate the corporate network. And between the two we have the data diode, which is just here. Okay, so the first thing I want to show you
is how to transfer a file. That's the first feature we implemented: the ability to transfer a file. Okay.
So I'm gonna try to do something: film this screen so you can see what's actually going on.
So the idea is quite simple: you copy a file to a network share on the first Raspberry Pi, and the file ends up on another share on the second Raspberry Pi. So for example here, on the corporate network, there's no file at the moment. Okay.
hoping not to use the video, but, okay.
Not working so well. So the idea was simply on this computer to copy a file, and a few seconds later it should appear on my computer.
to open the file share.
Okay, yes, okay, so here is the file I copied, just the logo from my company. I will do the same with a slightly bigger file, which is one megabyte. Okay, so let's try again.
Okay, so the file is copied on this side, and in a few seconds, or in one minute, it should appear here. I will talk about that later, but the speed is actually not so great, which is not too bad, because, as mentioned, we do not target high availability or high bandwidth needs.
So also, what's interesting is that since there is no bi-directional communication, you can actually use the same IP address on the two Raspberry Pis, which means that you do not have to configure anything on the ICS side or on the corporate side. Okay, so that's the file I was sending. As you can see, it's a one-megabyte file and it took about 30 seconds, so really kind of slow, but it works. The second demonstration I'd like to do, you have it here; I'm gonna make the screen slightly bigger.
Okay, so that's the Modbus client. So let's say I want to transmit Modbus data using the diode. On this PC I have the simulator, and I'm gonna change the values that you can see. So I'm gonna change this one to a zero and then set the following one to one. Yeah, so as you can see, the delay is maybe about 500 milliseconds or one second. It works quite well. So those were Modbus coils; of course, we can do the same with Modbus registers.
One, two, three. Yeah, as you can see, it's modified. Okay, and the last feature, which is kind of interesting, is the screen sharing. So let's say you're on your ICS and you need the help of your vendor to perform some debugging operation, but you do not want to expose RDP directly to the internet and let your provider do whatever it wants. So with this solution, we offer one-way screen sharing. That means that, using a simple web browser, the third party is able to see what's on the screen, and, being on a conference call with one of your employees, it can help you click on the right icons and perform the action, but you keep doing the action. It's not the provider that does the action, so that's a better security mechanism. Here also, as you will be able to see, there's about a one-second delay if I move the... Yeah. Can you explain what direction the communication is going here? It's going from this PC to this one. The screen sharing goes from the ICS network to the corporate network, or to the internet, let's say. I'm gonna explain the whole workflow in the following slides. As you can see, yes, I just opened a video. It's really not suited for video; you have one or two frames per second, so it's really not working that well. However, the resolution is high enough to let you really see what's written, so I think that for remote
maintenance, it works kinda well. Okay, let's switch back to the slides.
I talked about the hardware, and now I'm gonna talk about the software. We wanted to have a working solution quite quickly; we didn't want to invest six months in development. So what we did is reuse something that already existed, called UDPcast. It's an open-source application, and it has a feature to send data through a one-way channel. It was mainly designed for satellite communication, where the downlink is quite cheap but the uplink is expensive. Using this application, we were able to have the core of our product, and we produced some Python code that uses that application to do file transfer, Modbus transfer, and screen sharing. We also have a quite easy-to-understand configuration file, and it's only about 500 lines of code. So what happens when we transfer a file? On the ICS network, you're on a PC, you copy a file to a share, then
what the Raspberry Pi will do is calculate a checksum of the file, put the checksum and the filename into what we call a manifest file that is sent to the second Raspberry Pi, and then send the actual file. You receive the file, you calculate the checksum; if the checksum is the same as in the manifest file, that means the data transfer went well, and so the file is copied to a network share, and you can access it from the corporate network. For Modbus, how does it work? We have a Modbus client on the first Raspberry Pi. Every second, it requests some values from the PLC and puts those values into a JSON object that is serialized and sent using sockets to the second Raspberry Pi.
There, it's deserialized, and on the second Raspberry Pi we have a Modbus server that we instantiate, and the values of the Modbus server are updated with the values sent through the data diode. So that means that in the demo, I was not directly addressing the PLC; I was addressing the Raspberry Pi inside the diode.
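A rough sketch of the two workflows just described, with invented function names (the project's actual code is on GitHub):

```python
import hashlib
import json
import socket

def make_manifest(filename, data):
    """Sender side of the file workflow: checksum plus name, sent ahead
    of the file itself."""
    return {"filename": filename, "sha256": hashlib.sha256(data).hexdigest()}

def verify_transfer(manifest, received):
    """Receiver side: only publish the file to the share if the checksum
    matches -- the one-way link gives no way to ask for a resend."""
    return hashlib.sha256(received).hexdigest() == manifest["sha256"]

def send_modbus_snapshot(values, udp_sock, addr):
    """Modbus workflow, sender side: values polled from the PLC,
    serialized as JSON and pushed one-way over UDP."""
    udp_sock.sendto(json.dumps(values).encode(), addr)

def receive_modbus_snapshot(udp_sock):
    """Receiver side: on the corporate network a Modbus *server*
    republishes these values, so clients there talk to the diode,
    never to the PLC itself."""
    data, _ = udp_sock.recvfrom(4096)
    return json.loads(data)
```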
Lastly, the screen sharing workflow. On the PC whose screen I want to share, I have a PowerShell script, really easy, maybe 10 lines of code, that takes a screenshot every 500 milliseconds and saves it to a network share on the first Raspberry Pi. The Raspberry Pi uses sockets to send the picture to the second Raspberry Pi, on which we instantiate a web server that serves an MJPEG stream. That's a technology that's mostly used for webcams. So the client does a GET request for an MJPEG file, and then the server keeps sending the new pictures, and it looks like a video, whereas it's just a series of pictures.
That should answer the question of how it works. We may now take a look at the configuration file. As you can see, it's quite easy. We have, let's say, the less important stuff, like the name of the configuration, the version, and the date. Then we have some properties of the Raspberry Pis, IP and MAC address, because, again, since the data only flows one way, you have to use static ARP. The first time we tried to make it work, it didn't work at all; it was because there was no response to the ARP broadcast. And then you just define all the modules that you want to use. At the moment we have three types of module. File transfer is called 'folder': you just have to choose a port number, which must be unique, and the file paths for the input and the output. If you want to add a Modbus PLC, you just type 'modbus'; then you define the IP address of the PLC, then you define on which port of the second Raspberry Pi the server will be instantiated, and then you define the values that you want to copy, because we cannot copy all the values, it would take too much time. So you just define which values you want to copy. The screen share module looks like the folder module; it's actually kind of the same directive in the configuration file. So as you can see,
it's quite simple to use and to configure. And the configuration file is the same on the first and second Raspberry Pi, so that's easier to manage. Okay. Now, this is an interesting question that I received one morning from one of my colleagues, who came into the office and said: what's all the fuss about this diode? It seems overly complicated. I can just use an Ethernet cable and cut two of the wires inside, for example the two reception wires, and then I have a one-way communication medium. That's easier. This is kind of true. However, as mentioned, the main problem is that most of the protocols use TCP, so you still need the Raspberry Pis, for example, to perform the protocol translation. And then, this may sound like NSA-style attacks, but in theory, if you use an Ethernet cable, even if it's cut, and you use half-duplex mode on each side with the Raspberry Pi, you may perform port-up and port-down actions, and that may be used as a side channel. So using light as the medium is the only thing that ensures that there is no back communication. But actually, you can build a working solution that is secure enough without the optical copper converter. That's one option.
Okay, then what are the limits of this project? So at the moment, it's really, really slow, maybe one to two megabytes per second tops.
There was high latency caused by the file transfer, because, as mentioned at the beginning, we used UDPcast. From the Python code, you call an external binary, which displays some things on the console before really launching, so you have at least a two-second delay, which is not good for Modbus and screen sharing. So we replaced that: we only keep UDPcast for the file transfer, and we use a very basic, naive implementation using Python UDP sockets to send the Modbus data and the screen sharing. Another problem: at the moment, let's say this box is not production-ready. It's working, but it needs more, let's say, error catching; it may be a bit buggy. And also, the components are not really meant to be used in harsh environments like we sometimes encounter in ICS, so it's not dustproof, things like that. But it's working quite well at the moment.
Okay, so maybe we can take a step back. I explained what an ICS is, why we need data diodes, and how my data diode works. Then let's think: is it magical? The whole idea of a data diode is to have data flowing only one way. But as I mentioned at the beginning, most of the time you need to exchange data both ways: updates and antivirus signatures need to flow from the corporate network to the ICS, and production reports need to flow from the ICS to the corporate network. So in the real world, you may end up using two different data diodes, one in each direction. Yes, you will still have a high level of security, but that goes a bit against the principle of having one-way communication. In reality, it's not that easy to have one-way communication. You may imagine that a malware, it's gonna be very complicated, but it may be possible, could have a command-and-control communication channel that goes through this diode in one direction and through the other diode in the other direction. So I'm not saying that diodes are not good. I'm just saying it's not magical, and it's not as simple as putting in this box to secure everything.
Still on the same topic: what exactly does the diode guarantee? Only one thing: data flows one way. That means that if you want to have a secure solution, you still need to perform all the kinds of logical security and hardening that you perform on your other devices. If the output Raspberry Pi is not secure enough, with default credentials or SSH exposed, it may be hacked, which means someone could perform a denial of service, or could, let's say, modify the Modbus values on the fly. So it's not enough to have one-way communication; you still need to perform standard security.
Okay, the roadmap for the project. The next step we want to take is to make it more reliable by adding a heartbeat feature. What we call the heartbeat feature is, let's say, the ability to send a defined file maybe every half an hour, and if we do not receive the file on the second Raspberry Pi, we raise an alert, or we send an alert to the syslog, to say: hey, something is wrong. Also a bit of security hardening. At the beginning, we aimed at providing, let's say, complete and ready-to-use images for the Raspberry Pis, but that takes time. Then, possible improvements for which we may need your help: more protocols. At the moment, you have file transfer, Modbus, and screen sharing. We'd like to have the S7 protocol, the one used by Siemens PLCs. It's not too hard to do, I think, because there are open-source libraries that will help us. It could also be interesting to have a syslog feature, an SMTP feature, maybe integrate some kind of integrity check on the data; having cryptographic signatures on transferred files would also be interesting for high-security environments. I think it's not gonna be that hard to implement. But really, the next big project is this one: the ability to use infrared as the communication medium, because that would allow us to remove the three copper-optical converters and just have, let's say, the light-emitting diode on one of the Raspberry Pis and the light receptor on the other side. Of course, that's gonna be really, really slow, but if you just need to, let's say, synchronize 20 Modbus values, that's enough. So with this solution, we aim at a cost of about $50 maybe, if we use Raspberry Pi Zeros and just infrared devices.
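The planned heartbeat could be as simple as tracking when the sentinel file last arrived on the receiving side. This is a hypothetical sketch, with the interval and the alerting mechanism left as placeholders (the talk mentions syslog for the real alert):

```python
import time

class Heartbeat:
    """Track arrivals of the heartbeat file on the receiving Raspberry Pi."""

    def __init__(self, interval_s=1800):
        # e.g. the sender drops a known file every half hour
        self.interval_s = interval_s
        self.last_seen = time.monotonic()

    def beat(self):
        """Call whenever the heartbeat file shows up on the receiving side."""
        self.last_seen = time.monotonic()

    def is_stale(self, now=None):
        """True means the one-way link (or the sender) is probably down,
        so an alert should be raised, e.g. to syslog."""
        now = time.monotonic() if now is None else now
        return now - self.last_seen > self.interval_s
```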
I think that was really, really fast. So, the code is on GitHub, and the code on GitHub is working. However, it's not the latest version. I actually did some modifications and improvements just before leaving, and I did not do a git push, so it's on my work computer, which is in France. So maybe, if you want to try the project, wait just one week or two, the time for me to come back and push the new version. The version that's on GitHub is working; it's just not as reliable as the latest one. And again, if you want to help with this project, if you find it interesting, just do whatever the hell you want with it. We do not want anything; we were just
hoping to help and to show that if something is missing from a security point of view, be it hardware, maybe there's a solution to do it yourself. Okay, so do you have any questions? I'm gonna open the floor up for some questions, but first I'm gonna cover something I missed: I'm gonna name-drop our sponsors, because it wouldn't be possible if it weren't for people like VerSprite, Pertivity, Tenable, Amazon, and Source of Knowledge. Thank you. Yeah, I had a question. You were sending info from one Raspberry Pi to two Raspberry Pis, one of which would respond. How did you split the light between the two so it went both ways instead of just one?
Okay, um, you were talking about this slide, maybe? Oh yeah, you did it via Ethernet converters between the two top ones and Ethernet on the bottom, or... I didn't quite understand how you sent it both ways, two Raspberry Pis for transmit and one received. Okay, okay, so I'm gonna go through it again. Here you have your ICS devices; they talk using standard copper Ethernet to the first Raspberry Pi. This one has a second network interface connected to the Ethernet copper port on this optical converter, which then has its transmission optical port connected to the optical reception port on this one. And this one is only used because, if you plug nothing into the reception
port of the converter, it's not working. This is just to emulate a valid signal on the first converter to make it work; if you do not do that, it's not working. I think it's a safety feature, actually. It's like with high-power lasers in fiber optics: they don't want transmission to happen without something connected. So there may be some possible way to modify that. I know that this can be disabled on some models of optical converters; on the ones we found on Amazon, we were not able to disable it. But someone, actually a call-for-papers reviewer from another security conference, gave us an interesting solution: he told us we can use an optical splitter to put the transmission signal back into the reception port. That could actually be cheaper, but the problem is you do not find those things on Amazon; you have to buy them from, let's say, China. And actually, the shipping cost was so high for one or two splitters that, for us, it was cheaper to stay with the converter. But let's say at mass-scale production, you would replace that with a splitter, and that should work. Yeah. Hi. Did you consider using an opto-isolator chip, which you can get for about $2, and just running serial through it? I mean, you can run those up to 10 megabits, and it wouldn't cost very much. It's a standalone chip that has both the photodetector and an LED built right into it. I mean, they're used pretty commonly for isolating electrical circuits from each other. I'll check it out. It might be a way to really reduce your cost. Okay, what do you call that again? It's called an opto-isolator. They're used a lot of times to divide different circuits from each other electrically. Okay. And I think that would probably be right up your alley for this particular project. Okay. Thanks. Anybody have any other questions?
We thought the solution that we implemented, where you have a client on one side and a server on the other side, was maybe the most straightforward. So yes, a proxy would definitely work, but I think it would be more complicated from a software development perspective to code the specific proxy stuff than here, where I can just reuse existing libraries. For example, for the Modbus client and server, I'm just using existing Python code. That's why it was so easy, and hence so cheap, to do. That's not the most efficient way, I know that, but it's the cheapest way, I think. At least without using the opto-isolator. All right.
Well let's give another round of applause to our presenter. Thank you.