
Hi everyone. This is great. Okay, good. Do you want to turn this off for a minute? Then I'm going to tell jokes and give away stickers. So that's how I am. Oh, do you need more? Do you want me to get up here and do the sound check from there? Okay, so now I understand why... Okay, so I'm up here. I like to move around a lot. So I'll definitely like stand over here at some point. So if you're filming this, I don't like these. I feel like they're in the way. I'm very much so like a moving aroundy type of person because I used to play punk rock music. So like I'm not down there. So you're welcome.
Is this a good enough sound check for you? Do you feel good now? Okay, cool. So I'm gonna, okay, hi everyone. Guess what? I brought a ton of stickers, and I've been on this really long trip, and this is the last stop. So don't make me carry these all the way back across Canada. Come get some stickers. I have "security is everybody's job" stickers, and I have a whole bunch of OWASP DevSlop ones. So please come on up. I'll take a couple of them. I wanted to take this brief moment while everyone's getting stickers to tell you all that, I don't know if you know, but Vancouver Island's having a B-Sides in August, and I'm gonna be there too.
And their call for papers is open. So if you're thinking about speaking, you should think about speaking there. I'm going to be there, and I'm biased, but I think I'm fun. Would you like a sticker? So there's a whole bunch of people here that are from B-Sides Vancouver Island, or Victoria. However, it's Vancouver Island, right? Okay. Here you go, here's stickers. So I would like to encourage you all to consider applying. And I'd like to encourage all of you who already got stickers to consider coming. The island's beautiful. No one's like, oh God, the island's the worst. You'll see later why the sticker's important. Hi. Thank you. Sticker? Sticker? You have some? You got some?
Okay. Open source intelligence gathering, OSINT. Okay, so that's like her job. And so, okay, so the talk later is Isaiah Sarju's, "How Online Dating Taught Me How to Threat Model." So that's really cool, later. And then right after me is Shira Shamban. She's gonna teach us about cyber warfare. The whole day has awesome talks. But, so she's doing online dating, and open source intelligence gathering is where you check all the public records and look things up about people, and then you learn how to hack them. So she's chatting with this dude online, and he's like, "What do you like to do for fun?" You know, and she's like, "I do OSINT." And he's like, "Oh, that's cool, yeah. I like to swim."
She's like, "I know." Yeah, I tweeted her chat. I'm just like, "You're creepy, sweetie." So, I started a thing recently that I wanted to tell you about while I have you hostage. I started this thing on Mondays called Mentoring Monday, and I use this hashtag on Twitter. And the hashtag is, wait for it, Mentoring Monday. And the idea is to try to find people mentors. I got into InfoSec with a mentor. Do you have a professional mentor, Alex? Yeah, absolutely. Yeah. They can help you excel, they can advocate for you. My first professional mentor went to bat for me. He was the most well-known hacker in our city. And he told my first employer, "If you don't hire Tanya, I'm not
coming either." And I'm like, "Yeah, I know, right?" And he's like, "No, no, no, Mike." But the point of the story is that I'm trying to find mentors for more junior people, so that we can have more people in our industry and we can do a better job. So who here has worked in our industry four years or more? Okay, so all of you are qualified to be a mentor. And if you would consider searching the hashtag and answering someone, that would be so valuable. And if you're new, tweet out what you're looking for. Be like, you know, I'm really interested in incident response. You don't have to meet with them four hours a week
or some crap like that. Just having someone that they could call and ask career advice is literally the most valuable thing ever. Like, I remember having this discussion with Mike, and I was offered a full-time CISO position. And he's like, "Tanya, every time you're a manager, you're stressed out of your mind. You're this total mess. You're miserable. You're angry all the time and jittery." He's like, "What are you thinking? You know, it's fancy to be a CISO, but you know you're gonna be miserable, right?" And I hadn't... I was just so flattered that someone would offer me that role. I was like, "Ooh, fancy title." And then he's like, you're happiest when you're
nerding out. You know that. And then I thought about it, and he was right, and I declined the role. I'm so much happier because of it, and better paid, as it turns out, by a bit. Do you have a question? I feel like you're gonna ask a question. No? You, you, you. Okay. Also, who here has heard of OWASP? The Open Web Application Security Project. Did you know that you have a chapter in this city? Yes, okay, so I got one hand on that, and that stinks, so I'd like to encourage all of you to consider going to the OWASP meetings, because it's run by a couple of awesome human beings, and they have talks by all sorts of nerds, and it's good. Okay, one last thing. How much more
time do we have? - Five minutes. - Oh, great. - You have four minutes to plug. - Do you have a question? No, okay. - Five. - Oh, five, okay, cool. So how many ladies, women, and enby folks are in the audience? Okay, great, like two hands, awesome. Three, four, awesome, yes. I wanna tell you about WIST, Women in Security and Tech. We have a meetup here in Vancouver, and basically we brunch like badasses. We give workshops to each other and just do nerdy things together, because sometimes it's fun to have a friend that is the same gender as you. Like, outside of work you probably have female friends. Wouldn't it be
cool if you had female friends that did what you do? I started it so that I could just meet cool chicks. Seriously. I was just like, I want to meet other cool women, because everywhere I go, it's like this sea of awesome dudes. But sometimes it's fun to have a friend, because they'll tell you, like, oh, Tanya, your dress is so nice. Or I'll be like, oh, I have this problem with my hair. But it's true, though. It's just nice to have female friends. You get free stickers with these seats. Yep. These seats come with stickers. How's the status of the streamer? I know yesterday was awful.
We're just waiting a few more minutes to get our internets and pipes back up and running. So please stand by. Okay. I'm going to go up front with you guys. Nice. All right, there's a few more seats up front if you want to come in and you don't want to squish in and get a middle-aisle seat. They're the worst. You can have your own row. There's lay-down accommodations here, in case you need a nap. Your seats are here. Thank you. And stickers. Yeah, and stickers. All right. Does your watch sync to an atomic clock? Close. Mine's pretty close.
People at the back, there are seats up front. VIP, free stickers. That means you, so get down here. Come on down. Come on down. The hack is right. Please take pictures of me, because my mom follows me on Twitter and I told her to look today. And she doesn't really quite know how Twitter works yet, so sometimes she signs her tweets, "Tanya's mom." And my grandma follows me too. It's the best. We just got a double thumbs up. Looking good. All right. All right. So before I introduce our next keynote, it may not be much of a surprise by now, but we want to let you know we do have a closing ceremony, where we do encourage
you to stick around. I'm sure there's probably some after-event hanging out, but we're going to be giving out our prizes for the capture-the-flags and whatnot. So please do stick around. You know, B-Sides has been working very hard to make sure that it's an inclusive environment, making sure that women are part of our community, are here to speak, here to participate, here to volunteer, and here to learn. And we've worked really hard at campaigning and working with all the various groups, both local as well as international. And, you know, last year... I'm originally from Ottawa, and, you know, I wanted to see all my Ottawa friends, and you can't
book an evening with each friend. So I said, "Alright, well, I know how to put together a little hacker thing." So I made this little hacker social in Ottawa, and Tanya ended up showing up and we became friends, and I said, "Please come out to speak at B-Sides Vancouver." And not only did she do that, but she ended up bringing a lot of other women, and she's been a very big supporter of our community and all the other communities, about getting more and more people involved. So, you know, I really appreciate the friendship that I've built with Tanya. She's been very supportive, and it's awesome. Now, on to our keynote, who is also Tanya Janca. You know, when I met her, she was working for, I
think, the government, and she was on the cusp of her new world of traveling around the world and sharing the gospel with Microsoft. So let's give her a big round of applause. Thank you. So hi, everyone. I'm Tanya Janca. What's up? Okay, so today I'm going to tell you how security is everybody's job. Who here writes code? Awesome. Who here does IT pro stuff, like server support, stuff like that? Okay. Who's a project manager? Awesome. Who does tech support? Who's a security analyst? Who does web app hacking or other types of hacking? Security research? Okay, now, who as part of their job does security? All the hands. All the hands. By the end you're going to be like,
oh, sorry, my hand should have been up. Security is everybody's job. I'm totally not even kidding, which I generally am, but I'm not about this. So what are we going to talk about today? So spoiler alert, I'm just going to tell you what the whole talk is about so that you know you're in the right place. Okay, so today we're going to talk about DevOps. I'm kind of obsessed with DevOps. I was going to say this year, but it's been a little while. We're going to talk about turning DevOps into something called DevSecOps, and I'm going to explain what that is. I'm going to explain what DevOps is. So when you leave this room, you're
going to be like, I totally know how to explain what DevOps is. I know how security fits into it, and I know what my job is. Part of that means security being a part of your daily work, no matter what you do in IT. And I'm not kidding. Now, who's seen this slide before? The magical DevOps pony, pooping its beautiful rainbows and cleaning it up. This is how a lot of security people tell me they feel about DevOps. But this is how I feel about DevOps. I feel that DevOps should have security. Like, it still poops rainbows, don't get me wrong. But here's security: helping, giving tools, rewarding them, teaching them, enabling the dev and
the ops folks to do their jobs securely. That's my viewpoint. So if you think this is awful, you're in the wrong room. So this is me, I'm Tanya Janca. I'm a giant nerd, I like punk rock. This is me at Amnesia Rockfest in Montebello, Quebec, which I go to every year. If you want to live a very debaucherous two days, you should go there. You will not smell good at the end. On the internet, I'm SheHacksPurple, because I'm a purple teamer. It means I do red and blue team, because I can't help myself. I work at a startup. I don't know if you've heard of us. It's pronounced... No, you can figure it out.
So I have a really weird job title. I'm a cloud advocate. I also did not know what that meant. It turns out I had to make a whole slide to explain what I do, because I'm like, I don't really get what I do. So, like, I was already doing this for free. I was just going to conferences and teaching and writing white papers and releasing all of it for free, like a silly, silly person. It turns out this is a job. And they said, "We'll pay you, and you can say you're from Microsoft." And I was like... as soon as I figured out they weren't kidding, because I thought for sure someone was just fooling
with me. So I am an application security evangelist. I'm obsessed with AppSec, but I'm not this dude. I'm this guy. I'm like, oh my god, OWASP, it's the best. I'm wildly obsessed with OWASP. I've had two artists now make me OWASP graphics, because I'm, like, so obsessed. So this is me sending out the bat signal, the OWASP signal, like, I need help. I'm so obsessed that... Okay, yeah, so OWASP is an international nonprofit organization. We have chapters all around the world. We have conferences and projects, and you can find me at most of them. Okay, so I have a project, I'm gonna tell you about it later: DevSlop, as in sloppy DevOps. That's me and three friends. And
this is WIST, this is the Ottawa chapter of WIST. We hang out, we brunch once a month, we do all sorts of stuff together. Those are my friends. So I've been coding a long time. However old you think I am, just add 10. That's probably right. And the whole reason why I started doing all of this is 'cause I was a software developer and I felt security was impossible. I found the security team didn't have answers for me. Someone gave me the secure SDLC book by Microsoft, which was like a billion years old at that point, and was like, "Here, read this." And I was like, "Oh, I'll just take
it back to my desk." I want to make it easier to do security. And right now, I think security is way too hard. So I'm trying to do things so that all of you can do your job securely and actually know how, feel confident, know you've done it right. Because right now, I feel like it's this giant mystery, and that's not the way to make secure stuff. So that is enough about me. I hope that you are now like, "She seems fine, I guess I'll listen." We do those slides just so that you feel we are qualified enough to give the talk. It's not because I want you to know my resume; it's the "I'm qualified, I swear" slide. So I'm going to do two minutes of introduction to application security, because I want us all to have the same definitions, and I want to make sure that we are all on the same page. Because just because I'm obsessed with DevOps and AppSec and all those things does not mean all of you are. By the end I hope you are, but that's beside the point. So what is application security? I call it AppSec for short. It is every and any activity that you do to make software more secure. That can mean... oh, who said that? Me. So that can mean, like a month
or two ago, Node.js had this awful vulnerability in one of the versions. So, you upgrading your framework so that you are not subject to that vulnerability, so that's no longer a risk for you. Another thing you could do is, let's say there's this old hashing algorithm that really sucks, and it's been broken and it's not appropriate anymore, so you grep through all your source code looking for it, removing it, and adding a new one. It can mean having a nerd like me in at lunch to come and explain some stuff. It can mean any of those things. So anything you do is considered AppSec. It doesn't have to be a formal activity. Okay,
security is a serious problem. So the new Verizon breach report came out, and guess what? AppSec won first place again for causing the absolute most breaches. It's three years in a row that we stink, my industry. I'm just going to stand over here, because I want to. Can you hear this creaking? I hope the mic's not picking it up. It's like, I don't weigh that much. But the point is that basically the best vector for bad guys, bad gals, bad people, is to hack our web apps and then just walk right into our networks. So we are awesome at the perimeter, we are awesome at enterprise security and taking away all of the rights
that you had on your machine: no more admin rights for you. But we're not great at making secure apps yet, unfortunately. Okay, so another thing is that security's outnumbered. Or no, another thing is that security is not really covered in school. So who here took a computery schoolie thing after high school, some sort of post-secondary program? Yeah. And then how many of you felt that they really covered security of software in depth? Yeah, there weren't any hands, just for the record. They usually don't cover it at all, or if they do, they just talk about access control, which is important, but that's not AppSec. And if you are learning to build software, you should learn
how to build secure software. Imagine you go to become an electrician and they teach you, oh yeah, just twist the two wires together. Yeah, yeah, just push them in the wall. Barely any houses burn down. It's only 120 volts, not that bad. Yeah. So, and when they do teach it, often it's an afterthought. Okay, so what else? See her, and how she's alone and sad? That's because security's outnumbered. See these numbers? Okay, so they say there's 100 developers for every 10 ops people for every one security person. Wherever I've worked, it's never been this good. Like, I'll be the AppSec person, and I come in and I'm like, "Hi, guys." There's like 400 developers and me. Once I worked somewhere, and I
kid you not, we did have two other people technically on the team who knew how to press the scan button on AppScan, but that was it, and then they would email the reports out directly. So there's me and 2,000 developers. I had them writing me on LinkedIn, Twitter, my OWASP email, at work, coming by my desk. I was hiding out, because I couldn't answer all of them. You can't work harder, right? You have to work smarter. So, this is the last one. So who thinks waterfall's the best? No hands. Waterfail? The thing about waterfall is, it didn't work that well. I don't know if you know, but only 25% of waterfall projects actually succeeded, on average. And so see
how there's dinosaurs and they're happy and they're frolicking in a waterfall? My friend's an artist. He made that for me to send to people when I feel passive-aggressive. Such nice friends who are artists. I'd send it to people at work. But so, the accompanying security model to waterfall, I affectionately term "wait while I do some security." And it would be like, could you just do a code freeze for three weeks? No developer ever is going to do that unless they're on vacation. It's just not going to happen. There's no developer that's like, look at that awesome big backlog full of bugs, I'm going to just stop for three weeks while you do some
sort of silly security exercise. It's not going to happen. So that security model really didn't work well, which is evidenced in the Verizon breach report. So now I want to talk about DevOps, and the main goals of DevOps. Normally, if I was in a smaller room, I would torture all of you and make you tell me, and then offer you candies as bribes. But we're in a big room, and it's much scarier to talk in front of this many people, so you're off the hook this time. The main goals of DevOps are threefold. First one: improve deployment frequency. Right? And actually, what we should say is higher deployment frequency. We want to
deploy quickly and often. We want to do this to get out bug fixes. We want to do it so we can be more reliable. We want to do it so we can get features to our clients faster than our competitors. But what does this have to do with security? It means you can fix my security problems now. It means no more "I found this giant bug that is the end of the world," or "we're having an incident and it's awful and I need a fix," and you tell me with waterfall it'll be four months, it'll be two months, it'll be two weeks. If you're deploying 10 times a day, that means you could deploy the
fix today, and that is awesome from a security perspective. The next thing is lower failure rates. Part of DevOps, part of why Dev and Ops had to start talking to each other and working together, is so that we could ensure that we didn't fail all the time. Now, what does that have to do with security? We call this resiliency. A lot of people move to the cloud because they want resiliency. They want something that's always up, right? If you were a big company like Amazon or Walmart or Shopify and you sell things on the internet and you are down for 10 minutes, all those people go to your competitor and buy stuff.
That's awful. That's really, really awful. So we need resiliency. Who here has seen this? The CIA triad. This is the reason why every security person has a job. It is our job to protect the confidentiality, integrity, and availability of the systems and the data that all of you build and support. That's our job. If those things are in order, the security team's happy. So when we look at the security of resiliency, that is the A. It's kind of hard to tell here, but this is bolded: the A, availability, is bolded. If you have a super resilient app and super resilient infrastructure, then this means you have the A done, and this is a giant checkmark for security. So, security, you're like, okay, two out of two so far; we love the main goals of DevOps. So this is the last one: faster time to market. Okay. I want to beat my competitors in everything. Every business, that's what they're trying to do. If not, they're probably out of business. And faster time to market, what does this have to do with security? It may not be what you think. Security doesn't win if the business doesn't win. Sometimes we, security, are the threat to availability. If we have put up so many roadblocks, if we are throwing NIST and ITSG and all the other things at them and not helping them meet those goals, and then stopping things from getting out, then we are stopping the business from succeeding, which means there's no
need for a security team anymore. And I can't think of a bigger loss than getting laid off because your business went out of business because the security team got in the way. So faster time to market means we win, because the business wins, right? And that does not mean we cut out security. That's not what it means. Okay, so the best thing to happen to application security... let me say this again: DevOps is the best thing to happen to application security since OWASP. I've briefly explained my love affair with OWASP. It's been years now. But this is the highest compliment that I could pay. I really, really think
DevOps is a big deal. I think it's going to put us way, way ahead in so many ways. But we security folk have to get on board. I've seen a lot of security folk that are just like fighting, fighting, fighting DevOps. And a lot of times it's because they actually don't know what it is. I remember my cousin calling me and she's like, my manager says I have to do DevOps now. And I didn't know what DevOps was yet. This was years ago. I'm like, what does that mean? She's like, it means he laid off the ops guy and I have to do his job too. Oh my God, DevOps sucks. But it turns out
that's not DevOps. I'm just going to not say what that is, because cameras are rolling. Okay, so what are the three ways of DevOps? So, there's several definitions of DevOps. The Microsoft definition is that it's the union of people, process, and products to enable continuous delivery of value to our end users. But here's the Gene Kim, Jez Humble, Nicole Forsgren definition, from the people that basically started DevOps: you have to do the three ways in order for DevOps to happen. So if you just have a deployment pipeline, you're not doing DevOps. You have a deployment pipeline, and that's awesome, but you're not there yet. So here are the three ways. I'm gonna tell you the three ways, and then, spoiler alert, for the rest of the talk I'm gonna
lay out 21 different ideas of things that we can do to weave security through the three ways. And then after this, if you want to, I'm getting ahead of myself, but I have a video channel where I actually do most of the ideas. So if you're like, oh, that's nice, she talked about negative unit tests, how the hell do you do one? I have like three videos about it. So when you're watching this, take notes, and then you can just actually do them later. 'Cause I don't wanna give you this high level whatever and then you're helpless. You're like, that sounds great, how do I do it? I want you to go do it. Okay,
so the three ways of DevOps are one, Increase the efficiency of the entire system. So I think that's this way? Yes. This way. So if you have the system development life cycle, you want everything to go out faster. If I work really hard to make my part faster, but everything else is just as slow and there's still a blocker somewhere else and everyone's waiting on that person, then I wasted my time according to the whole system. So that means I need to talk to other people and other teams and work together to make the whole system faster. The second way of DevOps is the other way. You want to give everyone feedback as soon as
humanly possible so you can fix it right away, and it does not go down the line. So none of this, like, you find a design flaw after a hacker has exploited it a year after you have published. We want to find that out as soon as possible and we want feedback all the time, not just at the end, which is why some people call waterfall, water fail, because we would be like, "This is great, let's just keep working on it. We don't need feedback. We don't need any feedback. Let's not test it until the very end and then, oh, pfft." And a lot of it is because they didn't have feedback. The third way of
DevOps is continuous learning. And I don't mean that you say that you're allowed to learn, but then you're like, your budget's $500 per year for training, and P.S., you don't get any time off. I mean prioritizing training as a unit. So I just had to write my performance review for where I work at that startup, and they made me make a learning plan, like a real one. And I got to pick books, and then my boss bought them, and then I have to read them. And then I picked conferences that I want to go to and I'm going to go to them and I have time built into each week, not really each week, but
some weeks, where I learn, and it's a priority to him that I get better and better and better. And if I don't do my learning stuff, I will not achieve all my marks for the year, and he's serious about it. And when we do this, we keep ahead of the curve. So those are the three ways; I talked all over the slide. So we're going to go faster now. Okay, so: increase the speed of the entire system. Here is some amazing graphic art for you, right? I know, I know. I should work at Pixar. This is why I have artist friends make things for me, because then this happens. But anyway, this took
a while. Anyway, okay, on topic, Tanya. So we need to increase the speed of the entire system. This is the system development life cycle. This is the thing that software developers live and breathe every day, right? Agile, DevOps, whatever you're doing, you still need to know what you're building, which is requirements. You still need to make a plan; that's your design. You need to code it; that's the fun part. Testing is also the fun part. And then release is hopefully nice and smooth. Okay, so what does that mean for us? For each of the three ways, first I'm going to talk about what it means for security. Then I'm going to talk
about what it means for Dev and Ops, because I want us to know both things. And the main thing with this one is: security can no longer be a bottleneck. Who here has had to wait on the security team, and it's been a pain in the butt? Oh my gosh, only 10 hands? Are you kidding? You're liars. Seriously, everywhere I've ever worked, I've had to wait on the security team like a million times. They're like, here, here's that advice on that thing that you wanted, and then they send me a link to ITSG-22. Who knows what ITSG is? It's the government standard for security. Okay, so two hands. Who knows what
NIST is? So imagine someone sends you a link to NIST. That's unhelpful. So if we want to speed things up, we can't do that. We have to send them actual answers and information. And that is the main thing for security. But when it comes to Dev and Ops, we need their help. So, we security people need all of your help. If we are gonna put a static code analysis tool into your pipeline, we need you to help us tune it so that it's not giving you false positives. We need to try to get rid of them first, and then we need your help to tell us if we've done something wrong. We need to
add security bugs to the defect tracker, and we need you to not mark them as fixed when we're not looking, if you didn't fix them. I've had this happen quite a bit, and I'm on to you. We need to use templates of known secure code. Like, I worked at a shop where we just kept making custom apps for everyone, and guess what? They all had a username, password, login. So we had a hacker come and hack the crap out of that. We fixed everything, made it super secure, and then we kept reusing it, just retesting it occasionally and circulating the fixes to all of our apps, as opposed
to everyone reinventing the wheel. This is a thing that you can do to make things go faster. Another thing is, if you're going to release an image, it should be fully scanned before you release it. So a container or a VM needs to be fresh and fully patched. I've worked at a lot of places where, on the first of the month, they made the golden image and they used it all month, which meant it was already missing patches when it went out. This is very bad. I don't like this. We can set up things in the pipeline that check automatically, so that nothing gets out that isn't done, that isn't fully looked at. It's really, really, really important that we don't put things out that we know are vulnerable from the beginning, and that means we have to look. And the last thing is that we need your help to set these things up. We can put in scanners for your containers and scanners for your images, and if, let's say, you're doing infrastructure as code, there are scanners for that too, and they will scan your infrastructure as code to make sure that it's secure. But we have to be allowed to put things in your pipeline to do it. So we need to work with you and speak to you and stuff. I'm gonna do this thing called the photo slide.
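That kind of automated pipeline gate can be sketched in a few lines. This is a toy illustration, not any real scanner's interface; the finding format and severity names here are made up:

```python
# Toy pipeline gate: fail the build if an image scan reports any finding at
# or above a chosen severity. The finding format and severity names are
# hypothetical, standing in for a real scanner's report.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings, fail_at="high"):
    """Return True only if every finding is below the fail_at severity."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

# A critical finding blocks the image from being released...
findings = [
    {"id": "CVE-2019-0001", "severity": "critical"},
    {"id": "CVE-2019-0002", "severity": "low"},
]
print(gate_passes(findings))                                      # False
# ...but low-severity noise alone does not.
print(gate_passes([{"id": "CVE-2019-0002", "severity": "low"}]))  # True
```

In a real pipeline, a check like this would parse the scanner's actual report, and a failing result would break the build step, so an unscanned or vulnerable image never ships.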
I think I do four of these in this talk. This is in case you wanna take a picture of the things I just said. So I'm gonna show you nice images of women of color working in tech, so that you listen when I'm talking, instead of a wall of words. But then I'm going to show you all the words after, so you don't have to take notes. Also, all of these are on the internet already, so you can just have them at the end, and I will have the slide at the end. I personally suck at taking notes, but I'm great at taking photos with my phone. Okay, so next. What does this
mean for Dev and Ops? So I'm just going to talk about Dev and Ops a lot, just so you know. It means we, security, are going to want to do dynamic application security testing, and again, we need to be allowed to put some of that in your pipeline. Again, that means we need permission. We also probably want to see your code repo. So whether you're using Git, Subversion, whatever you're using, we are going to want to be able to run scans of that. Has anyone heard of a little tool called Snyk? It will scan your repo daily and just tell you, oh, that library actually is no longer safe, it's now vulnerable, there's a CVE out for it, you should really update it to this version. So I have that run all the time, and it's great; if you have an open source project, it's free, which is nice. There's SonarQube... anyway, there's a huge pile of them. But we also want to do dynamic application security testing, which is sometimes known as... pew pew pew. So that means we want to have your app running on a dev server or something, and we're gonna shoot at it. We can automate this as part of the pipeline. It makes your pipeline slower, so we can make our own pipeline and do it, but we can only do this if
we have cooperation from you. Do you see how I keep saying cooperation? Okay, so the last one. Basically, I'm a big proponent of pipelines, but you don't just need one, you could have a bunch. So you can have a parallel security pipeline, as I call it, that goes nowhere. So whenever you push code, it will go to dev, UAT, prod, whatever it is you're gonna do, but another copy goes off into nowhere, where I run every security check that's ever happened. And it does not stop you, it just keeps going without you; your code's gone to prod, it's fine. But we can run this
on Fridays, whenever you want, and then it does every security test I've ever dreamed of, which will take like 24 hours or whatever, but it doesn't matter, because it's not stopping you from working, right? And then on Monday I go in and I get to dig through all the results. Like static application security testing: it's not very accurate, it doesn't give you answers, it gives you hints, and that takes time for you to look through. We can't put that in the pipeline directly, like a full scan, because then you'll be there forever. That's not fair. But if you could use this parallel pipeline, if you could let us put your code in it, life
will be good for the security team, because we don't have to slow you down. The last one here, I want to talk about unit tests. So who here does unit tests? We're all good developers. Who here has complete code coverage? Okay, good, no hands, because I don't have to be like, you're full of it. But I dream of the day of good code coverage, like over 50%. So when you have a unit test, it's a positive test. So you test that the stuff does what it's supposed to do. You know, like, do these numbers add together and equal that? Good, it worked. So what I'm gonna do then is make negative unit tests. So
I really like this idea. So when you run these unit tests, they run wicked fast, right? And you can run them whenever you want to. I wanna add tests. I wanna duplicate your tests and add payloads, which is like malicious code. So, oh, you're gonna do this call? Great, I'm gonna add a semicolon, I'm gonna add a single quote, a double quote, other things like that, and make sure that your application fails gracefully. If your application can handle attacks gracefully, we can do regression testing, but security regression testing, really quickly, with unit tests. This takes time. It takes time to keep them up to date, but it's incredible. Seriously, the value-add is really quite good. Okay, so here is... the
photo slide. And this last one here, I didn't talk about it this much, but there's a whole bunch of things that can test your third party components and your libraries, all of the things that you plug into your code. Most apps right now, between 60 and 80% is actually your components and your libraries. Very little code is actually what you wrote. But all of the code makes up your attack surface. So if you have that really old, that one version of Node.js that had that awful bug in it, you are subject to that bug, like, you're at risk, right? And so just checking, it's so quick, it's really fast. I remember, like, I did
an episode on my show with Snyk, and I planned an hour, and then after six minutes we'd implemented it, and I was like, "Where are we gonna do the other 54 minutes?" Anyway, it's a good problem to have. So, fast feedback, right? So this is the second way. We just did the first way, we're doing the second way. Now we are going the other way. We are pushing left, my favorite thing, or shifting left, whatever you want to call it. But basically, we as security folk want to give everyone else security information as soon as humanly possible. We do not want to give it at the end anymore. We do not want to find out
when there's some sort of incident. Okay, so this is the slide that you should show your boss about why you should push left. When you fix a bug, the later you fix it, the more it costs. This is not my slide; this is a fancy marketing slide from someone that did a study and worked very hard on it, the Ponemon Institute. Whether it's exactly this much doesn't matter; it just costs exponentially more the later you fix a bug. This is for any sort of bug: a security bug, a defect in the logic of your app, basically all the things that a breach is made up of. So we want to give feedback
right away, and there are a whole bunch of ways that we can do that. So the first one is that the security team needs to get familiar with what pipelines are and add tools to them, and we have to ask permission from all of you and work with you in a nice way to make sure that that does not break everything you're doing. So we need to basically have faster feedback loops. I know as a security person I'm really bad for making people wait a really long time, because there are more of them than me and because I had to do everything manually. But since I've started adding as many of the boring tests as
I can into the pipeline and automating all the boring stuff so I can do the interesting stuff, life has gotten a lot quicker. Yes, shifting left. But what does this mean for Dev and Ops? So this means that we actually want you to tell us what you are concerned about, because feedback goes both ways. So I'm going to tell you a quick story. How much time do I have left? I told 15 minutes. Good. I'm going to keep going fast. So I talk a lot. I like to talk. Okay. So who here has heard of Netflix? Yeah, so they have an AppSec team, and I'm kind of like a fan of their team, if that
makes sense, and they keep publishing awesome stuff that they're doing. So when you watch a movie, it spins off an instance into AWS, and it does its thing, and it has a whole bunch of permissions, and they wanted to do something called least privilege. So least privilege is the thing that security people constantly want to do, but it's really hard because we piss everyone off when we do it. And it means we only want to give you the exact permissions you need to do your job and nothing more. Because if someone takes your credentials, then they can't do as much bad stuff. And also because then if you make a mistake, like the mistake is
smaller. And when you have implemented a good least privilege policy, that means if you do have a malicious actor in your midst, they can't get very far. So they created this thing called Repoman. And Repoman started, basically, pulling back permissions very aggressively, and it started killing people's video. And I don't know about you, but if I'm watching a video and it just stops and then it forgets where I was, I'm just like, oh, that sucks. And it's not good for customers, and the developers were like, no, and they stopped getting invited to parties, and everyone was pissed. So they listened and took away Repoman. So then they made a
new thing called Repokid. And what Repokid would do: so they had this base set of permissions, and the new instance or service that they wrote would go up, and then they would watch it. And they're like, you know what, they haven't used this permission. So then it would peel one permission away and watch. Okay, so it's still working. So if it hasn't errored, we don't have to give it back. And then it kept peeling things away slowly over the next three months. And they repealed 70% of all permissions, for every service. That's the best, most impressive implementation of least privilege I've ever heard of. And they did not crash one service. That's amazing.
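That peel-one-permission-and-watch loop can be sketched in a few lines. To be clear, this is my reconstruction of the idea she describes, not Netflix's actual Repokid code, and the permission names are made up:

```python
# Sketch of a Repokid-style least-privilege loop: each review cycle, look at
# which granted permissions the service has actually used, peel away one
# unused permission, and keep watching. Run over months, unused permissions
# drain away without ever breaking the service.
def peel_one_unused(granted, used):
    """Remove at most one permission that was granted but never used."""
    unused = granted - used
    if not unused:
        return granted          # nothing safe to remove this cycle
    victim = sorted(unused)[0]  # deterministic pick, just for the sketch
    return granted - {victim}

granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject", "sqs:SendMessage"}
used = {"s3:GetObject"}         # what monitoring says the service really calls
for _ in range(10):             # ten review cycles
    granted = peel_one_unused(granted, used)
print(sorted(granted))          # only the permission actually in use survives
```

The slow, one-at-a-time removal is the whole trick: if a peel ever causes errors, you give that one permission back and keep the rest of the diet going.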
And it's because they listened. And so I try really hard to take examples from them and listen. I have this friend named Clint. He's really smart. You can ignore most of this slide, just look at this. He worked somewhere, and they wanted to get rid of this thing, this MD5 hash, 'cause that is broken. It's no longer secure. It's not effectively random enough anymore; there are ways you can predict it, which means it's not safe. So he wrote this wrapper to go around it that says "non-cryptographically secure MD5 hash." And so whenever anyone was coding and they put MD5 in, as soon as they tabbed away, it would change the name to this wrapper. And then they would see it,
and usage went down 97% the first month. It removed 98% of the usage across the entire place in like two or three months. And the few people that were still using it, they went to go see them a few months later and asked, and they're like, "Oh, we don't actually need that level of randomness. This isn't for security. This is for, I don't know, some other thing." So then they only had a few instances of people using it, and they just gently, not like, "You're still using that, grr, we're security," but like, "Hey, we see you're still using that. Could you tell us why you need it and what it's doing
for you, and how can we help?" Yeah, Clint's actually really awesome. Okay, so what else does this mean for dev and ops? It means we need you to participate in security activities. What do I mean? I mean, if there's an incident, please don't go home. If your app is currently on fire and all of the server logs are saying things in another language, you know which one, please don't go home without telling the incident manager. Or if there is an incident, however you can help us, we need your help, because all of you are the experts on what you build and what you support. So I was working on an incident, and it was like 3:15, and I went back to the developer's desk, and he'd gone home
for the day. I knew how to fix it, but I didn't even have access, and I couldn't push his code, and we just had to wait till the next day, in an ongoing incident that was really bad. And he wasn't on call, he didn't have a pager, but no one had ever told him: we need you, if there's an incident, please stay. Threat modeling. So threat modeling I like to affectionately call evil brainstorming. And this is basically where you get a bunch of people in a room, including the business people, and you look at your design and you
look at what you're doing with it from a business perspective and what keeps you up at night. You make a list of what the risks are, what sort of threats you might be facing, and then you go test for these things. We need you there because you know your baby better than we do. Security sprints, that is a sprint where you fix all the security things. That's not where you just work on other things in the backlog and ignore the security team. There's a whole bunch of different security activities and if we don't have you, we're just not gonna get very far because all of you are the experts. Okay, so this is the photo
slide again. The main thing here that I want to get across is that security needs to become part of the definition of quality. This means if a giant security bug happens in the build, it needs to break the build. I know it's unpopular when you break the build. I am in an open source project, and my friend Abel pushed some code, and it had a SQL injection in it, and it broke my build, and it was the night before I had to demo it at a conference, so obviously I was stressed. But it made a good lesson, as I showed everyone: look, we broke the build. So my pipeline worked.
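The negative unit tests from a couple of slides back are one cheap, build-breaking security check. A minimal sketch, where `parse_quantity` is a made-up stand-in for whatever function your real positive tests already cover:

```python
# Sketch of a "negative unit test": take an existing positive test and re-run
# the same code with attack payloads, asserting it fails gracefully instead
# of blowing up. In CI, a raised AssertionError breaks the build.
def parse_quantity(raw):
    """App code under test: must reject anything that isn't a plain integer."""
    if not raw.isdigit():
        raise ValueError("invalid quantity")
    return int(raw)

PAYLOADS = ["1; DROP TABLE orders", "'", '"', "<script>alert(1)</script>"]

def test_positive():
    assert parse_quantity("42") == 42   # the ordinary happy-path test

def test_negative_payloads():
    for payload in PAYLOADS:
        try:
            parse_quantity(payload)
        except ValueError:
            continue                    # graceful, expected rejection
        raise AssertionError(f"payload accepted: {payload!r}")

test_positive()
test_negative_payloads()
print("all payload tests passed")
```

Because these run as fast as any other unit test, they make security regression testing something you get on every push, not once a year.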
But the point, like, this is really, really important. Oh, sorry, I didn't know this was on. I hope I didn't blind any of you. Okay, next. So this is the third way: continuous learning. Risk-taking, experimentation, failing fast, learning all the time. And this doesn't mean just going to conferences. It means, like, reading, keeping up on a blog by someone that does your specific specialty. Well, actually, let's look at the ideas. So, full circle. I don't have any cute ASCII art, I'm really sorry. Okay, so this involves culture change for a lot of people. So we need to allocate time as part of our daily work to learning. Not necessarily every day, but if we don't allocate time, you better believe it won't happen. Who
here has worked with someone, someone I would probably call a dinosaur, and they're super, super senior, and they never have any time off, because every single thing will break and the world will burn to the ground if they leave and do training? Who here has worked with someone like that? Yeah, everything's so fragile because you haven't had training in five years. We need to help that person be allowed to have days off. I've been that person, and I learned that I could be an even better employee if I made it so things ran on their own. So what does this mean for security? It means we need to teach everyone what we want from them.
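One concrete way to do that teaching through the tooling itself is the renamed-wrapper trick from the Clint story earlier. A minimal sketch, assuming Python's standard `hashlib`; the wrapper name is the whole point, and the rename-on-save automation would live in the IDE or linter config, which isn't shown here:

```python
# Sketch of the renamed-wrapper trick: keep the old function working, but
# give it a name nobody wants to see in code review. Every remaining call
# site then advertises exactly what it is.
import hashlib

def non_cryptographically_secure_md5(data):
    """MD5 is fine for cache keys and checksums, never for security."""
    return hashlib.md5(data).hexdigest()

# Editor tooling would rewrite bare md5(...) calls to this name on save,
# so usage becomes self-documenting and easy to grep for.
print(non_cryptographically_secure_md5(b"hello"))
```

The teaching happens at the moment of use, with no meeting and no lecture, which is why usage dropped so fast in the story.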
We should make secure coding guidelines. We should make secure design concepts a thing that we can help developers apply. We need to allocate time for actually teaching. But I'm more concerned with dev and ops. So if we offer you training, we the security team, please say yes. If you don't show up, that's not great. I started giving lunch and learns at my old office, and my manager was like, "No one's gonna show up." And we had 600 developers, and I did a lunch and learn, and like 11 people showed up, and she's like, "See, you failed." And I was like, "No, 11 people came." And one of them, it turned out, was a team leader, and I had done this XSS deep dive:
how to look for it, how to find it, and how to fix it. And at the end, I was like, please, please, please open up all your legacy apps, and then take the cross-site scripting cheat sheet from OWASP, the cross-site scripting filter evasion cheat sheet, and just copy and paste everything into every input in your dev environment. And he came up to me the next week. He's like, hey, guess what? We did your thing. We found 20 vulnerable apps, with like a ton in every single one of them. And I was like, "Cool, what are you gonna do?" He's like, "Next week is cross-site scripting week, and we are going to eliminate this
entire bug class from our department." So I guess the right person was in that room, right? And then at my next lunch and learn, there was a waiting list; it was great. So the key is, please, if you could show up, that would be great. You could also train yourself; I'm gonna give you free resources at the end. And when you fix something, share that information widely. So if there's a penetration test, for instance, and you find this interesting thing, tell the rest of your team. Tell the other team. Share information amongst you. You are our best champions. OK, so next, what else does this mean? It means if there's a simulation... so
I'm a big fan of practicing things. I practiced this 500 times. So, I used to work at elections. They actually run an entire fake election, a simulation of one, six months ahead of time for every big federal election. And they throw security incidents in, they throw all sorts of problems in, they build an actual office, they invite hackers, they do all this stuff, and then they take lots of notes and spend the next six months improving everything, and then when the election happens, it looks like we're perfect, because we practiced first. So if we invite you to a simulation, please try to make time. It's worth it. Wait, wait, wait, one more thing. Ask for
metrics. So whenever I start somewhere, I ask for all of the VA and pen test results from the previous year. Five minutes? Oh, I really got to move it. And then I try to make lunch and learns on the top three things. Okay, awesome. The last thing is, when you do a postmortem... oh, so you were showing me pictures before of running out of time, and I wasn't noticing. When you do a postmortem, it needs to be blameless. We're gonna talk about that in a second. So now I wanna talk about how security is everybody's job, and this means we need to do culture change, 'cause security won't be everyone's job unless you change
your culture. So first of all, celebrate when you win. If there's an incident and you clean it up fast, people deserve high fives. If there's a pen test and you don't have any high-risk findings, high fives. You do a security sprint, you fix all the security bugs in the backlog: high fives, cake, cupcakes, muffins, a fruit platter, whatever it is that you like. You deserve a celebration, because you're awesome. Security people usually just tell you when you suck. We need to reinforce culture. It's true, right? What if we went around and told you you're great when you are great? So we need to work more closely together. We need to socialize.
We need to speak to each other. No more blaming. This is the most important one. When there is... seriously, I made this mistake before. I'm not going to tell the story because of the five-minute picture thing I just got shown, but I can tell you about this after. And after this, I'm going to be in the hallway, and I'm going to talk as long as all of you want, and I'm going to bring the stickers. But really, when we point fingers, we lose trust, and trust is the most important thing for a security team. Okay, last one. We want security champions. We want all of you. So if there's someone that's interested in security, buy them
a book, send them a video, show them a cool blog, encourage them, and then they will speak the gospel for you. Like that guy that was in my lunch and learn, and then he brought his whole team into testing, and he brought his whole team into fixing things. That's my guy. He was awesome. Okay, so now this is the part where we're going to get weird. So the definition of quality must include secure. Security means resilience. It means learning about security. It means speed in doing security. And it means we need to give feedback often about security. Okay, so this is the part where we get weird. Everyone put up your right hand. I have
read a bunch of papers that say that if you promise something and you say it out loud, you're 80% more likely to do it. Okay? So everyone say with me: from now on, my definition of quality includes security. I promise. Thank you. Now I have resources, so everyone get out your phones. So, conclusion: we're going to do DevSecOps, and that's great. The little startup that I work for, we went to DevOps, and it was a bumpy ride. And we like to show our dirty laundry on the internet to everyone. So if you want to feel better about yourself, go look at this. Next one. If you want to
learn about DevSecOps, all the ideas that I have in this talk we're exploring on the show, and it's me and three of my female friends just being nerds on the internet, and we do it to amuse ourselves. And lastly, please consider following me, because this is all I ever talk about, and I would like to start a long-term application security relationship with all of you. And lastly, what did we learn today? Security is part of your daily work. We all know this now, because security is everybody's job. Thank you. These are the slides.
These are the slides from today. They're already up, in case you want them. Thank you so much. Please come back here after the break to see Shira's talk. All the other talks today are going to be awesome. Thank you again. Hi, everyone. Hello. It's a pleasure to be here today. My name is Shira Shamban. I came here all the way from Tel Aviv to talk to you about cyber warfare. So let's get started. So as I said, I'm Shira. I spent 13 years in the Israeli military. And this is me. A couple of years ago, I finished my military service in intelligence as a major, went to the private sector, and started working in a startup called Dome9 Security. We were doing
public cloud security until we were acquired by a larger company called Check Point. And we still do public cloud security, but now as a part of Check Point. And I was always interested in international affairs, and what countries do with one another when they're facing one another, and behind each other's backs. And when I got into cybersecurity, I realized that there is a lot in common between foreign affairs and cybersecurity. So technology really changed the way we do espionage today. It really changed the way we use weapons today. In the past, if you wanted to know another country's secrets, you probably had to send over a spy that would walk around there and find the answers to all
of your questions. But today you don't have to do that. Today you can just send over a malware to look for the answers that you want to get. So if in the past, you know, the Cold War was between America and Russia, today many countries are in a way in a Cold War in the digital arena. So they pretend to be friends of one another, but behind their backs they are, you know, betraying one another or spying on one another. They want to check out what their friends are doing, what they are looking into, what they are researching, because knowledge is power and everyone wants to know more. So if in the past we had a dirty bomb, today we have
a dirty worm, and this is really how things are working. So, a few challenges that we're facing today that we did not face before. First of all, detection. How can I tell I've been hacked? If in the past you wanted to detect if someone was attacking you, all you needed was a radar, because you could detect, I don't know, an airplane approaching or a missile that was shot at you. There is no real radar today to detect a state-sponsored attack. Yes, you have your firewall, you have your antivirus, but something I will keep on repeating during this whole talk is that a good cyber hack will be undetectable, and you will
not know it happened. A good cyber hack would be as if it never happened; it goes under the radar. It will not be detected by any firewall. Damage and impact. If in modern warfare an impact might be a big explosion, a building that collapses, an impact in a cyber attack is very much different. An impact could be influencing the elections. An impact could be hacking a company and obtaining its IP. So the impact is very, very different, and again, it's not that noticeable, unlike in traditional warfare. Preparation and defense, I will talk about it more in
the end, but you cannot prepare, and you cannot really defend yourself against a nation-state attack. You cannot do it, because these attacks are not really detectable. And as a nation-state hacker, the assumption is that there is nothing that is unhackable. There is nothing like that. If it has an IP address, if it sends and receives zeros and ones, it's hackable. Psychological effect. If in traditional warfare you knew where the combat zone was and you could just stay away, today, with cyber warfare, any device is part of the warfare. Your smartphones, your laptops, they're all part of the warfare, and just knowing that there is some kind of war going on and you might not have Wi-Fi could
really drive you crazy. Attribution. It's very, very hard to tell who did this to you. It's very hard, first of all, because of the obvious things: you can spoof IP addresses, or you can use someone else's malware code. But it's also hard because very often countries try to disguise what they're doing, to pretend to be other countries. And also, for you as a country, very often you don't even want to admit you've been hacked. And you also don't want to say publicly who hacked you. So this is all very, very complicated. So we need to get into a different mindset right now. Okay? So far, I mean, some of you may have done some hacking before, and I guess a
lot of you here are working on the defensive side, but you were defending up until now mainly against criminal hackers, and state-sponsored hackers have a different mindset, which we're going to analyze right now. So be open-minded, stop thinking like the criminal hacker, because I'm going to teach you what a state-sponsored hacker thinks like. And we'll talk about the differences, first of all, in the goals between the two types of hackers. So first of all, a criminal hacker, the first thing they're thinking about is how to get, you know, money or money equivalents like cryptocurrency. So they would break into, I don't know, like a stock exchange and get all the coins they're storing over
there. Or they'll probably look for your private data, which is kind of a derivative of a financial profit, because with your private data they can hack into your bank account, fake your identity. I don't know, maybe call Uber and tell them that they have all of this data, just give us money and call it a bug bounty. So it's another way of getting money. And awareness. Very often we see hackers, hacktivists, that do some defacement and do other kinds of damage. But the common thing among all three of these is this: this is detectable impact. We can see it. We're going to know about it. We're going to find this data in the dark net, or we will be offered to pay for it. So
this is very much detectable. But the state-sponsored hackers have different kinds of goals. First of all, they want to do a lot of destruction and chaos. They will want to destroy data, and not ask for a ransom in order to give it back to you. They will want to make people very much afraid for their lives. They will want to make them question and doubt their government. This is another type of chaos that they're creating. They want to know all of your secrets. So countries often hack one another because they want to know the other country's plans, what they want to do in the future, what they bring into consideration, so that, you know, when there are bigger decisions to make, they would
know how to predict what you are going to do about it. Sometimes countries want to deliver a message: listen, don't mess with us. Look what we did to that country. If you try to mess with us, we'll do the same to you. Financial or trade advantage, I will talk about it more later. But yeah, countries maybe would like to manufacture on their own grounds the same product for a cheaper price, or a better product. So they would just like to hack other big companies in other countries, get their IP, and use it back at home. And the last thing is an Easter egg. Sometimes you hack somewhere and leave an Easter egg for later, for when you want to start a war, or like, whatever, I'm going
to just leave it here. It's not only a backdoor, it's very often code that can cause real damage, like the chaos I was talking about before. So there is no immediate impact with these goals; sometimes you are not going to know that it even happened. Very often you're not going to know it happened. Tools and resources, they are also different. So the criminal hackers would probably use some generic or open source tools that you can find online, or even leaked tools, like the tools that were leaked a couple of years ago by the Shadow Brokers, and you can just find them today in the wild, being used by hackers. People make mistakes, and the criminal hackers are depending on that,
that you will misconfigure, that you will reuse your passwords. This is something they're counting on, and they're right. People actually do that. It's still working today. This is why known exploits are working as well. And the people who actually do these kinds of hackings are probably script kiddies or black or gray hat hackers. But this is only a handful of people, so this is not a real big army of super professional hackers. On the other hand, we have something totally different with the state-sponsored attacks. Do you need a zero day? We will find a zero day, no problem. They have the best engineers and the best academics at reach of their hand, no problem. How many do you need? We'll have them, because we have limitless access
to technology, and we have as much money as we need to develop the most advanced tools you can ever dream of. There is also international support. Sometimes countries help one another in the cyber arena because they have common goals. So if you have the best engineers here and the best engineers there, and that country has a zero day they're willing to share with you, this makes the whole process easier and more efficient. But what I'm saying here is, here we have no resource problem. We have all the engineers we want, all the money we want, all the infrastructure we want. Having a big, you know, APT setup costs a lot of money.
You need C&C servers, and you need a big database to store everything you've exfiltrated. Think about your production environment, but much bigger, and you have to maintain it, and obviously it holds a lot of very, very sensitive data, and you cannot really keep your code in GitHub. So, I mean, this is very, very complicated to handle. The methods that the criminal hackers use, you probably know a little bit about this. So, they kind of spray around, they spray their malware, it's gonna hit something, they want good value for money, they don't do very expensive attack vectors, they just try it out. They know that at some point someone is going to click that link, someone is going to believe
that the Nigerian prince is going to give them money, so it's worth it; usually they trust that people will eventually click that link. On the other hand, in a state-sponsored attack, you have to be very focused on what exactly you want to get, what is the prize that you want to get, and then you plan everything accordingly. You build a whole plan that will get you exactly what you wanted. You go low and slow. There is no rush here. Cyber operations can take years. It could take you years to get what you want, and that's okay, because there is no rush, and it's much more important not to get detected. These kinds of cyber operations are very, very, very well crafted. If we
see social engineering or phishing emails, they will be the best. I promise you that each and every one of us in this room, if we get targeted by a nation state and they email us a phishing email, we will click it, and we will open the Word document with the macro. I promise you, it will be that good. There is no limitation to digital vectors. So criminal hackers would usually stick to the digital attack vectors, but in the state-sponsored case, if we need access and we need to send someone over to ask some questions, or we need to use a satellite image, or we need to use any other kind of vector that will bring us intel, we will use that.
The last and most important thing is to keep persistent: keep your ability to reinfect the target, even if they've updated their operating system, even if they've updated their firewall. No matter what, you make sure you can always reinfect the target. This is very, very important. This is why these operations can last for years, because you always have access to your target.
You've probably seen these APTs before. I will not be talking a lot about them. These are just a few countries who are very, very active in the cyber arena. They all have a bunch of malware families. They're known by numbers, or they have funny names like Fancy Bear or the Dukes, you know, they have all kinds of names, and they target different industries. So you have malware for telecommunications, malware for the energy sector, malware for the governmental sector, and so on. I will say that China is leading the economic espionage in the world. They've been doing it very well for the past few years. It has caused them a lot of trouble with other countries, but this is something that they do, and they're
very, very good at it. Another thing I wanted to mention about Russia, I wanted to mention it before when I was talking about this slide, about the resources. In Russia, there is a very, very thin line between criminal hackers and state-sponsored hackers. Russia really likes to use the criminal hackers for its own purposes. And there is kind of an unwritten rule that it's okay for you to hack whatever you want, as long as you're not hacking Russian citizens. So you have Russian hackers who write very, very advanced malware. They have very, very big botnet networks, and they can do whatever they want. And then one day, you can have some government official coming and saying, hi, I would like to use your botnet for this, or take this
little piece of code and put it into your malware if you want to keep your head attached to your body. They do it and everyone is really, really happy. So the government has a lot of, like, they have access to lots of infrastructure, lots of new code. And the hackers are happy because they do what they like to do. So I'm going to run back to where I stopped. I was talking about Russia and about the good relations between the government sector and the private sector. And not only that, but in Russia there is very, very good, there's a healthy competition between the intelligence agencies. So you have the FSB and the GRU competing with one another. And then when Putin
says, listen, I want access to that. So both of these intelligence agencies race and try to be the first ones to get there. And this is why very often we can find in one network malwares of both of these intelligence organizations. So competition is very good. So now after I've told you what is a state-sponsored hacker thinks like and why is it different from the criminal hacker, I want all of you to put on your imaginary black hoodie. Yeah. And let's play a little game. So You have endless resources, all the best engineers in the world. He really put a black hoodie. So you have practically anything you could ever dream of, the best technology, a thousand zero days. And you need to think about
the coolest tech that will get you persistent and long-term access to government data and to public sector data, to banking, to technology in your target country. So just think about it for a minute. Be creative. Ready? I would like to tell you now about the best supply chain hack ever happened that we know of, because maybe there are others that we do not know of. You probably know what am I referring to. Yes. It's the Chinese microchip pack. Now I know what some of you are thinking right now. Are you kidding? This was published in Bloomberg and they did not disclose their sources. and we do not know what are their hidden interests behind this publication and you're not serious, right? And you're right.
You're absolutely right. But I will say this: very, very often, governments use the media. Sometimes they use it for fake news; they make up stuff and just publish it for political reasons. But very, very often, for different reasons, a government cannot publicly make an announcement about something that happened, about a hack. We talked about it a few slides back, about the fact that sometimes you just don't want to admit that you've been hacked. So in this case, you just leak the information somehow, let someone else publish it, and stand there and say, "We don't know anything about this." This is kind of what happened here. Also, very, very big companies were hacked, like Amazon and Apple and some very big American banks. They were all hacked, but you did not hear them strongly protesting against this publication, and they did not sue Bloomberg. So for me, this is the strongest sign that they really have no case, that this in fact happened, and everyone is just waiting for the audience to forget. And none of us really stopped using AWS or Apple, so it worked. This was a very, very interesting operation. What happened was this: the story was published in October 2018, but back in 2015, AWS wanted to buy a company called Elemental. As part of their due diligence, they wanted to send the company's motherboards for security inspection. They sent them to a Canadian company, and the Canadian company did a very good job: they found tiny little microchips in those motherboards. The motherboards were supplied by none other than the biggest motherboard supplier in the world, Supermicro. That company is based in San Jose and belongs to a group of Taiwanese people. So this is how this was discovered and reported to the government. And I'm not sure you guys really understand how serious this hack is, because it makes any software hack look ridiculous. I don't know when the last time you wrote a chip was, but this is a serious, serious headache to do. And it's not only writing a chip: it has to be super tiny and undetectable, you have to put it in these motherboards, and you have to make sure the motherboards get to the place you wanted them to be and that they exfiltrate data from those places. And this all has to go smoothly, with no one knowing about it. This is amazing. So these tiny little chips were actually doing two main things. One was communicating with the C&C (command and control) server and getting more code or more commands from it, because the chip was so tiny that you cannot keep a lot of data on it. And the second thing the chip was doing was making sure that the motherboard, or the servers, would actually accept those pieces of code and do what the hackers meant them to do. So let's quickly go over this process.
So: we had to build those tiny little chips, we had to plant them in the motherboard, we had to ship the motherboard and install it wherever we wanted, we had to create the C&C infrastructure and the payloads and make sure everything was working smoothly, and then we can get all the data that we want. It's really that simple. It wasn't really that simple. I think one of the greatest things we saw in this big incident was how well the Chinese decided what the target of this hack was going to be. It's super complicated, because we started by talking about Elemental, which Amazon wanted to buy, but the motherboards were supplied by another company, Supermicro, and Supermicro manufactures in Taiwan and in China. And the specific motherboards that got to Elemental were manufactured by four subcontractors. So Chinese intelligence had to figure all of this out and decide that these four factories were the right place to start working. And we're talking about four factories. What happened was that the Chinese officials who were involved came up to the factory managers and told them that they were actually from Supermicro and that there was this tiny little change in the design: they were just going to add this little chip. And if a factory manager did not cooperate, they just told him they were going to shut the factory down. So it really gave them motivation to help. I did not mention this, but there were several types of chips found on those motherboards, so there was a lot of R&D involved in this operation. So way to go, China. This was a very, very good operation; I would like to pay my respects. Hardware is really off the radar. Hardware hacks are not a very, very common thing to do. This is not something that happens within a few weeks or a few months; this is something that takes years. This was detected in 2015, we heard about it in 2018, and who knows when it started? We don't know when this whole operation began. But hardware hacks are very, very difficult and require a lot of resources to pull off. The Chinese did a very, very good market analysis. They figured out that a lot of people manufacture in China, so this is their strong advantage: they will get to the hardware while it's still in China. That's much easier than getting into an Amazon data warehouse and planting their chip over there. And this kind of operation is a foot in the door to get anything you want, because whenever you want to update the software on the chip, you just do it remotely. Today you want it to exfiltrate data, tomorrow you want it to corrupt the data; it's all okay, we can do all of that. So they really had good technology and a good mindset to pull it off. Way to go, China. What did we see here? We talked about these things before. The Chinese wanted financial advantage, they wanted to know America's secrets, and they wanted to leave an Easter egg, and they managed to do it very nicely. They used their financial resources, their technology, their engineers, and they went low and slow; this thing probably took years to pull off. It was very well crafted, and they left a back door. As you can see here at the bottom, I have put two little orange marks. One is about the digital attack vector: I'm not sure about this, but it's not unlikely that some people at Supermicro helped the Chinese do this. I don't know, though. And as for keeping eyes on the prize and knowing exactly what your target is, I think that in this case specifically there was no one specific target. The Chinese just wanted broad and persistent access to a lot of American data, and they did it; they got what they wanted. I said I don't really want to get deep into politics or geopolitics, but I do have to mention this. In 2015, the year when this hack was discovered, President Barack Obama signed an agreement with the Chinese president that the two countries would not hack one another for economic espionage. And I just wonder: is it a coincidence that these two things happened in the same year? I don't know, you know, conspiracy theories and everything, but perhaps Obama got sick of the Chinese hacking him, so he signed a contract. Maybe the Chinese agreed because they knew: it's okay, we have this other hardware vector you're never going to know about, so we don't mind signing this little agreement. So I'm not sure whether this is a coincidence or not, but I don't believe in coincidences. And then, in more recent times, we see American officials recommending not to use Chinese equipment, not to use their routers or cell phones. Maybe this is all part of the trade war between China and the United States, but maybe this is really because the United States is worried about more hardware hacking.
Moving on to our next exciting hack: who turned off the lights, and why would they do that? So we're still in 2015, and one day in Ukraine, in December, when it was probably a little cold, about a quarter million Ukrainian citizens found themselves with no electricity for three to six hours. Now, I don't judge or anything. I know that Ukraine and Russia are having their problems, but this is not a very nice thing to do. And it looks like Russia treats Ukraine as its little backyard for testing all of its cyber malware and capabilities. They're just testing them on Ukraine; they don't care. Let's try it over there for just a few hours. So, Ukraine has several companies that supply electricity. And on this day in December, three of them, in great synchronization, were blacked out. These are three different companies, with different equipment from different suppliers, not providing electricity at the very same time. I don't know when the last time you organized a potluck was, but organizing this is crazy. It all happened at the same time, very smoothly. I cannot believe the hackers did not rehearse this to make sure it worked and was orchestrated perfectly. And it's not only that they managed to hack the power grid; this hack goes way, way deeper, because the hackers operated the ICS, the SCADA systems that run the electricity grid, the power grid. So it wasn't enough to just break in somewhere; you actually had to operate it. This is not exactly something you would teach a criminal hacker to do; you'd have to bring in a professional. So again, we understand that this is a very, very big event, and not just breaking in somewhere. I would like to talk about how the Russians got to the target, and then what they did when they got there. First of all, you do your homework, naturally. In this case, we understand that the Russians did their homework for at least six months before the operation began. Of course, you learn about your targets and what would be the best way to approach them, and so on and so forth. This is not a surprise to you. And then came the phishing. This attack vector just keeps on working, even for state-sponsored attacks. It's great, but specifically in this case, it wasn't the obvious phishing email you are thinking about, because the emails were received from legitimate IP addresses of legitimate people who actually did send those emails. So what we understand is that the homework the Russians did was that good: they managed to get into the sender's computer, and the attachments were in fact sent from there to the people who work at the power grid. These attachments included Excel and Word documents with the BlackEnergy malware inside them. And BlackEnergy was used to do a lot of reconnaissance, to gain credentials, to harvest credentials, to wander around and get more credentials, and basically to get credentials. Another great thing they did was to read a lot of documents off of the operators' computers and to understand perfectly what exactly the operators do when they run the power grid. And something they learned during this phase helped them overcome the very complicated challenge of hopping between the management network and the SCADA network, which are completely, physically separated. What they learned was that the engineers connect with RDP from the management network to the SCADA network, and they're not using MFA. So the credentials they had harvested before were very, very useful, because they could just stay at home, wherever they were, and connect to the SCADA systems, to the operating systems of the power grid. So once they got to the target, what did the Russians do? Obviously, they took the power down. They now knew how to operate the power grid, so they used that knowledge to open the breakers in the power grid, and this is actually what caused the blackout. They took control over these devices, and people just did not get electricity. It's that simple. But they were not satisfied with that; they also decided to cause some extra damage. The devices that run these breakers have their very own firmware, and the Russians wrote new firmware and overwrote the original with their own. This is why the technicians could not take back control of the breakers: they could not operate them remotely, and they had to go close them manually. Another thing the Russians did was use a piece of software called KillDisk, which just killed the administrators' computers, wiped them out so they could not reboot. So again, you had to manually operate these computers to gain back control. And if this is not enough damage, we had a little bonus hack: the Russians DDoSed the call centers of the electricity companies, so that when angry customers wanted to call and say, "Hey, where is my electricity?", no one answered them, because they were too busy answering the fake phone calls. We have a short video here, but I'm not sure we have time, so you guys can definitely look it up on YouTube. It's a video that one of the technicians in Ukraine was filming on his phone. You can see the mouse cursor just moving on his screen, and the SCADA system being operated by someone who is not him. And everyone is shocked and saying, "Oh, what's going on here?" So let's move on. Yay, another great operation. Super well synchronized; this is very, very unusual, and I pay my respects. They understood the target very well. They brought in professionals to operate the power grid; this is definitely not a hacker's job. And we can talk a little bit about the after-effects of such an attack. We can even talk about things from last week, like what's going on in Venezuela these days. I have no idea whether the lack of electricity in Venezuela these days is a cyber operation; they are blaming the United States for it. Honestly, it doesn't matter whether this is a cyber hack or not. Just the fact that a Russian operation from four years ago makes us wonder today whether a power blackout is a cyber operation means the Russians did their job. They're causing chaos, they're making us question our governments. We're not feeling safe; we're not sure what's going on.
So they did a great job. And actually, on my way here, I had a layover in SFO and I was watching the TV. This is from Sunday, okay? "Is Russia meddling in upcoming Ukrainian elections?" What do you think? Yeah, I mean, maybe. So, I would like to quickly go over the attack patterns that we keep on seeing with state-sponsored attacks. This is not going to surprise you very, very much. We start with very, very good reconnaissance, move on to take over and establish a foothold so that we can always reinfect our targets, do some more privilege escalation, and then ongoing recon and lateral movement, getting more credentials and more persistence within the network that we want to be in. And then, yay, we've completed our mission: we found the Holy Grail, the secret information that we wanted to get. But one very, very important thing you need to remember in state-sponsored attacks is phase zero. Before the initial reconnaissance, we need to choose our target, and we need to choose it carefully, because we're going to put a lot of effort and a lot of money and a lot of engineering time into that target. So we need to pick the right place, the one that will get us the most intel about what we want. Now, the question you've all been asking: what can we do about this? Is there anything we can do against these attacks? And the short answer is no, there is nothing you can do about it. I can stand here for hours and talk to you about network segmentation and MFA and logging and identity and access management and intrusion detection and anomaly detection, but a state-sponsored attack bypasses all of these things. Because they go low and slow, and they have the time to learn how you operate, to learn your antivirus, to learn your firewall, to learn how your network management works. They're not in a hurry; they have the time to do it, and they do it. There is not much we can do, but yes, please start with the long list I just mentioned, because it will make their life a little harder. I do think we're in a kind of arms race, like we had back in the '50s and '60s with the atom bombs, but today we're talking about the dirty worm instead of the dirty bomb, as I mentioned before. I don't know where we are going with this; time will tell. So thank you very much. It's been my honor to be here today. I think we ran out of time, but I'm going to stick around if anyone has any questions. Thank you.

Hello, my name is Isaiah Sarju. Today I'll be talking about how threat modeling made me better at online dating. You'll find that's a bit tongue-in-cheek, but it definitely applies. So, a bit about myself. I am a security consultant.
I do red teaming, penetration testing, all of that. I've taught information security. The most important thing I want you to take away from this "about me" slide is my worldview. I am anti-nihilism, anti-security-theater, and anti-wasted-time. I am pro risk-based security. What that looks like in practice is that I don't believe we always have to take the view that the sky is falling. I believe that we can measure risk, we can validate it, and we can handle it, and then we can rinse and repeat that process for whatever projects we're working on. So we don't always have to be in a nihilistic frame of mind, like we're all pwned all the time and there's no hope. That's why I talk; that's why I do what I do: because I believe there are positive ways of making this world better through information security. And just anecdotally, I love chocolate chip cookies. So if you like this talk, feel free to give me a chocolate chip cookie. Alternatively, if you really dislike this talk and you see me eating a chocolate chip cookie, knock it out of my hand. It'll tell me that you really didn't like what I did. So let's talk about who this talk is for. It's for folks who want to learn what threat modeling is. It's very much an introduction. I'm not a pro at this; I do it in my day job, but I'm always getting better, and doing this talk actually helped me get better, get to the core principles of what people need to know to get started, and make it less intimidating. So we're going to learn the basic steps, and we're going to apply threat modeling to the various stages of development. And the important thing for me in this talk is that it's for developers and folks who build and maintain platforms, and for end users. Both groups can approach threat modeling to help reach their goals more successfully. In the end, hopefully you'll have some tips for doing online dating better; what I mean by that is more securely, with more privacy in mind. I don't suggest you take dating tips from me. That would be very bad for your success rates. So, what was that? Okay, yeah, just don't take the dating tips; just the security stuff. I'm pretty confident about that; other stuff, not so much. So, in the end, you should have a basic understanding of some things I think are key to threat modeling: DFDs (data flow diagrams), applying STRIDE, which we'll talk about, and attack trees. And then, because this is a talk about dating applications, applications that deal with privacy, you should have an understanding of the principles of nymity and linkability. Cool. So again, one thing that really inspired me to do this talk is not all the graphs and charts about how online dating is growing in popularity, how it's trending upwards, and all of that. It's when I just Googled "dating software." Google made recommendations for Tinder, of course, Grindr, Bumble, you know, the standards, but they also had PostNuke and MySQL. This was just Google being like, "Yeah, you wanna learn about dating software? You need to know about PostNuke and MySQL." So our romantic lives have literally been put into databases. No real point I'm trying to make here, besides that I found it humorous and a great motivator to go deeper into looking at how technology intersects with online dating. Cool. How does this apply? I want to avoid security nihilism. If you've talked to me at all throughout this conference, you know I hate putting users down. I hate saying everything is always in fail mode and there's nothing we can do about it. If we approach this systematically, we can promote safe and positive interactions. We can build privacy-centric apps, apps that have people's interests in mind. And in the end, I don't believe that security should just be done for security folks; there has to be a greater purpose. We need to protect our users. There are any number of folks who don't feel comfortable using online dating applications because they don't feel like a space has been made for them where they can do it safely. And this applies to other privacy-centric applications. In the end, we can apply threat modeling to make our applications and our platforms more accessible to all those who want to participate.
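To make that "approach it systematically" idea concrete, here is a minimal Python sketch of the STRIDE-over-DFD exercise previewed above (STRIDE itself is explained in a moment). All element names and the category-to-element mapping are illustrative assumptions, not taken from the speaker's slides or any real application.

```python
# Hypothetical sketch: enumerate STRIDE threat categories against the
# elements of a toy dating-app data flow diagram (DFD).

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# DFD elements as (name, kind) pairs, using the usual DFD vocabulary:
# external entity, process, data store, data flow. Names are invented.
dfd_elements = [
    ("End user's phone", "external entity"),
    ("Dating app front end", "process"),
    ("Profile database", "data store"),
    ("Social media login (federated auth)", "external entity"),
    ("Phone <-> front end traffic", "data flow"),
]

# A common starting heuristic: which STRIDE categories tend to apply to
# which element kinds. This mapping is a rough convention, not a rule.
applicable = {
    "external entity": ["S", "R"],
    "process": ["S", "T", "R", "I", "D", "E"],
    "data store": ["T", "R", "I", "D"],
    "data flow": ["T", "I", "D"],
}

for name, kind in dfd_elements:
    threats = [STRIDE[code] for code in applicable[kind]]
    print(f"{name} ({kind}): " + ", ".join(threats))
```

Running this just prints a starting checklist per element; the point is that even a crude enumeration like this surfaces questions (is the phone-to-front-end flow exposed to information disclosure?) before anyone attacks the app.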
Great. So let's go down the kind of sad rabbit hole, and then we'll come out with some happiness: the consequences of bad threat modeling in online dating. There are many. I'm going to specifically talk about the Checkmarx research on Tinder from some years ago, and then a kind of homegrown example of data linking disclosure. So, to start out: I said you'd learn about DFDs. You don't have to understand this diagram right now; I just want it in your head as I describe what a DFD is. This is kind of rudimentary; I put it together myself as a generic DFD for dating applications. The reason we have DFDs is to decompose application processes, kind of divide them up, and understand data flow. Adam Shostack, who literally wrote the book on threat modeling, following Window Snyder and all of that (both coming out of Microsoft, for Microsoft folks), says that problems usually follow the data. We'll look at that and see how it proves to be true. And then, once we've identified our problems, we can use things like STRIDE, a simple acronym for remembering the different types of attacks that can happen against our application or platform, to enumerate some potential problems we could have. So again, let's revisit the DFD and talk about what it all means. Following the data flow, we see how data is coming in from social media: maybe you have federated authentication, or you're also pulling in pictures and biographical information, and it's going into some database the dating application controls. Then you also have our end users, who are interacting on their phones or their laptops or whatever it might be; they have to interact with some type of front end. And, you know, these applications are going to monetize our data beyond just subscriptions or whatnot: they're probably selling that data to third parties, or back to Facebook, or whatever it might be. So they're scarfing up all this user data, which will probably never even get presented to the end users, and then they can ship it out to folks who want to buy it. It's probably buried deep in their EULAs, which I have not chosen to read. So if you have, and you want to provide me with a little snippet I can put in future presentations, please send it to me. Cool. So we have our DFD, and now we can start thinking about the risks to our data and our infrastructure. STRIDE is the acronym for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. It's a great acronym to start with when looking at a DFD or thinking about any of your processes. You know: can I spoof who I am on an application, falsely claiming to be Isaiah Sarju on Tinder? I don't recommend doing that; again, it would probably lower your success rates, but if you want to just for fun, be my guest. Then you have things like tampering: changing the presented profile picture, or being a person in the middle who is tampering with that traffic and changing a right swipe to a left swipe, or something like that. Then we have repudiation, or rather its counterpart, non-repudiation: being able to accurately say "you made that swipe" because the device digitally signed the swipe, or something like that. Information disclosure is the one I have highlighted, because that's the one I really want to focus on: disclosing who people are matching with, what conversations they're having. You have denial of service: if it's a Friday or a Saturday night and the dating application goes down, folks are going to go to another one to, you know, reach their end goals for whatever that evening might hold for them. And the last one actually helped me do some learning. I didn't think much about elevation of privilege (lateral, vertical, whatever it might be) in online dating applications until the vulnerability that let you log in to other folks' Facebook accounts came out, a year ago, maybe slightly longer at this point: the ability to view as somebody else, and then to exploit that, via federated authentication, to authenticate to applications. So if we go back to this DFD, we realize authentication depends on a third party. What protections do we have built in if that third party is compromised, or if their processes are compromised? Do we notify a user that they logged in on a new device, even though it was through federated authentication? Do we have our own kind of MFA gateway when a new device signs in for the first time? I didn't even think about that until it happened with Facebook. I could have looked at my DFD, applied STRIDE right here, and thought: oh yeah, that is something we need to think about if we're concerned about folks abusing our third party's federated authentication. Cool. So let's talk about the Checkmarx research. This was done some years ago, and it's a good example of information disclosure. What they were able to show is that Tinder was not sending images encrypted over the network, so they could sit on the broadcast domain, sniff those images, and see them. And this was influenced, or encouraged, by a tool you've probably all heard of; I think it was called Driftnet or something like that. People used to use it all the time at CTFs: they'd have it running, so for any sites folks were visiting for their research or whatnot, all those images would show up on a board. People quickly figured out they could go to whatever site they want and
that would also show up on the board, you know, for the lulz or whatnot. So what they showed is that they could see the pictures, and then, by looking at the size of a response that Tinder's servers sent back to the application, they were able to identify whether it was a right swipe, a left swipe (so a yes, a no) or a match. Even though the responses were encrypted with SSL, TLS, whatever it was at that point, and even though they weren't able to see the exact data within the returned information, they were able to look at the size and identify what had just occurred in the application. So again, if we go back over here, look at our DFD, and ask what could be happening right here: that's one thing I recommend when doing threat modeling and working with DFDs, start by looking at the changes of context. So we're going between the end user and the front end of the application. What could be happening right here? Is there information disclosure? Could there be denial of service? And in this case, it ended up being information disclosure, based on the size of the data returned. So again, I said I'm not about saying the sky is always falling; there has to be some way to deal with this. If you are in risk management, this is very elementary to you, so bear with me as I kind of level-set for everybody in the room. We have what I consider four main ways of dealing with risk. We can accept it: we can say, yeah, it is what it is. When I go outside, I don't wear a helmet all the time unless I'm biking, because, you know, something could fall off a building, but it's very unlikely; I just accept that risk. Maybe something will fall off a building and hit me on the head, but I accept it. I could avoid it: just never go outside. I could transfer it: I could have somebody walk around and hold something over my head, so I'm paying somebody else to mitigate that risk for me. Or we can mitigate, slash reduce, it ourselves. Some folks divide up mitigation and reduction; I put them together. I believe you're either mitigating partially or fully, and that is being done to reduce the risk, so the risk is slightly reduced, majorly reduced, or reduced completely so that you don't have to worry about it. So let's look at this information disclosure and approach it from the app side and from the user side. From the app side, they could have used HTTPS for the photos. Every time I talk about this, I'm surprised I have to say it, but just start encrypting your channels. There's no cost associated with that anymore; that was an excuse five, six, seven years ago. Just do it. On the user side, we could use a VPN, mitigating in our own way: if we're at a coffee shop and we want to get our swipe on, we don't have to worry about people in that broadcast domain seeing what's happening. We still have to worry about our VPN provider, who apparently keeps no logs these days, but that's a whole other talk. As for disclosing what happened, they could standardize the response size. Or they could just not care: "We make too much money. People are gonna use our app anyway, so we just don't care." That's often what we see organizations
doing, especially those who don't have privacy built into their bread and butter and their development life cycle and their core beliefs. As a user, we can only swipe at home. That's a good example of avoidance. All we have to worry about is our nefarious roommates and our ISP. And so we've avoided some of that risk of strangers out in the world seeing what we're doing. Or we could just say we don't care. And that's kind of the nihilistic point of view that folks often fall into. But I want to give us some tools to get out of this I don't care, we're all pwned all the time mentality. Moving on to attack trees and talking
about how that applies to nymity and linkability. This is gonna be another example of how we can use threat modeling to protect ourselves, if we're trying to protect ourselves. So let's say I'm concerned that I'm gonna get stalked by somebody. Not an ideal situation. I can use something called an attack tree to help brainstorm the different ways I could get stalked. You usually start with the end goal a malicious entity would have: to stalk me. Then you have the different routes they can take. This is a very simple attack tree; when I use these for my offensive work, the routes usually intersect, so if I get stopped on one route, I can use the information I've gained to go down another. But this demonstrates the overall idea. They could have sub-goals that help them reach their objective of stalking me. They could learn where I live: if we match, they could just be like, "Oh, I really like your profile. Where do you live?" and maybe 90% of the time people will just send their home address right over. No, I don't recommend you do that. So they could just ask for my location and I could give it. Or they could try to learn where I work. Any number of people put exactly the company where they work on their profiles, and if you are the only such company in a geographical area, you've now told them where you are from 9 to 5 most of the time. They could again just ask: "Hey, you're really cute. What's the address of your employer?" That's probably also not... like I said, I'm not good at giving advice on openers or anything that'll make you better at online dating. Security, yes, okay, let's keep going. And then what I think really describes linkability: even if you don't say your location, but you say roughly what you do and you disclose your real name, there's an idea of where you are in a geographical region. Because as an attacker, I can spoof wherever I'm coming from.
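Since an attacker can spoof where they're coming from, they can collect the app's "distance away" reading from several fake positions and intersect the circles. Here's a toy sketch of that math on a flat plane; the coordinates and readings are invented, and real apps add noise and rounding, which is exactly why fuzzing location helps:

```python
import math

# Hypothetical sketch of the overlapping-circles (trilateration) idea:
# spoof three positions, record the reported distance from each, and solve
# for where the target actually is. Flat x/y plane in km for simplicity;
# real coordinates would need lat/lon handling.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Return (x, y) consistent with being r_i km from each spoofed point p_i."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtracting the circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Target secretly at (3, 4); distances as the app might report them.
target = (3.0, 4.0)
spoofed = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
readings = [math.dist(p, target) for p in spoofed]
x, y = trilaterate(spoofed[0], readings[0], spoofed[1], readings[1],
                   spoofed[2], readings[2])
print(round(x, 1), round(y, 1))  # recovers roughly (3.0, 4.0)
```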
So I can make overlapping circles of distances from any point, and I can start to get a smaller and smaller area where you might actually be. Combining these using linkability, which we'll talk about, can lead to data linkage, which can lead to information disclosure. Cool. So here's an example. Over here we have someone's Tinder profile, and they just disclosed what type of work they do and their real name. I was able to find this person on LinkedIn, on their Instagram, all of that; I have that stuff blurred out. With just their job title and, I believe, their location, the city that they're in, I was able to find this person. And one thing I want to stress while I have a soapbox: when you're doing information security research, disclosure does not equal consent. This is a real person. They've decided to geofence their profile to a limited number of folks, and they don't expect everybody in this room here in Vancouver to see it. So when you're doing this research, take the time to think about the consequences of the information you're finding, and don't be braggadocious with it, like, "look at this random human that I pwned." It doesn't do anything for you, and it certainly doesn't do anything for them. So thank you for listening to my
soapbox; back to the talk. So nymity is the amount of information about the identity of the participants that is revealed in an interaction. It's the opposite of anonymity. If I give you my name, that's nymity. If I give you where I work, or you learn my city based on where the app interaction is taking place, that is nymity: the amount of information I'm disclosing. Linkability abuses nymity. You can have nymity coming from all these different sources: your online dating profile, your LinkedIn, your Instagram, your Facebook. All of these provide nymity, and if you can take information from disparate sources, you can then link them. The clearest example you'll see is when there are multiple data breaches with a shared key across them. It could be your full name, your address, your email. Or between separate breaches there are shared keys, and what you can do is a virtual join on those keys. My email was in this breach with my full name; my full name was in this other breach with my social security number; my social security number was in this other breach with my address. If you have access to all of those breaches, you can link across them, and you now understand a bigger picture of who I am. So when you're doing threat modeling, it's important to think about how much nymity you're providing and how it could be abused. Or as an end user, think about how much nymity you're providing. There we go. So again, we're all sufficiently scared, but there is hope. Let's talk about what the app developers could do. They could allow users to choose what information they disclose, and they do that; it's a form of risk transference. As a user, we could just not date. I don't think that's practical. Online dating is a modern phenomenon that's here to stay. Instead of burying our heads in the sand, let's ask: how can we do this more safely and more
intentionally and protect ourselves, and, if we're developing apps, protect our users? A great example of reducing nymity: Tinder used to disclose Instagram handles, and Bumble did not. On both, you could link your Instagram and show recent photos from it, to give people a better idea of your curated life. I'm guessing there was a lot of harassment from folks taking people's Instagram handles and then engaging with them on a platform where they hadn't consented to engagement. I gave this talk at B-Sides DC last year, and between then and now, Tinder stopped doing that. I don't believe I had any real impact; it was probably just users complaining about being harassed. But I like to think of myself as a small piece of advocating for more privacy in this larger new world we live in, where everything's in flux. You could do things like fuzzy or delayed location. Tinder gives you distance in full kilometer or full mile increments, while Bumble, under 10 kilometers I think, gives it to you in tenths of a kilometer. That is a more accurate location: if you can spoof, you can more accurately tell where that person is in real time. So doing things like fuzzy or delayed location helps. I know OkCupid has coarser gradients, 1, 5, 10, so you can say, yeah, this person is within my city if I wanted to meet them in the next week. You can know that without knowing exactly where they are, so it's a more privacy-aware approach. And then as users, we could also do things like not disclose our employer, not connect Instagram, or prevent real-time location sharing. And this one right here I want to highlight. I'm going to go back some slides and ask: okay, what in this is controlling the disclosure of the location? It's not a trick question; just shout it out if you see it. Yeah, the client, the end user, the client. So, as an end user, when we're doing
our threat model, we realize: we have control over that. So, we're here, and we're back. We can choose which devices we use. iPhones let you set permissions saying only let this app have access to my location when I'm using it, or all the time, or never. Android is just: you either have it or you don't. So being able to choose the platform you engage with folks on matters. I can say I'm only going to do my online dating on my iPhone, only allow it to access my location while I'm using the app, and only use the app when I'm in transit, so there's never a time when I'm tied to a specific location that somebody could use to harass me. You could spoof your location for some form of mitigation. Or you could just not care, and that's also an acceptable choice, if you don't believe you have any threats in your threat model that require any of these other activities. Great. So, my love of threat modeling: it helps builders build better. I was talking to somebody, I think on Sunday, about this, and they asked, "How can this apply to agile development, or iterative development, whatever you wanna call it, not waterfall? You say you do this in the design process; how do you apply it iteratively?" And I go, "You can do this high level."
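One way to do this high level in an iterative shop is to keep the DFD itself as data in the repo and regenerate the threat list whenever an element changes. Here's a sketch of the STRIDE-per-element idea; the element names are invented for the dating-app example, and the category mapping is illustrative, not a complete methodology:

```python
# A sketch of keeping a lightweight threat model alongside the code, so it
# can be revisited each iteration instead of only at design time.

# STRIDE categories that commonly apply per DFD element type.
STRIDE_BY_TYPE = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
}

# A tiny DFD: user -> front end -> database, with the boundary crossing.
dfd = [
    ("end_user", "external_entity"),
    ("front_end", "process"),
    ("user_to_front_end", "data_flow"),   # the change-of-context to inspect
    ("profile_db", "data_store"),
]

def enumerate_threats(elements):
    """Yield (element, threat) pairs to review whenever the DFD changes."""
    for name, kind in elements:
        for threat in STRIDE_BY_TYPE[kind]:
            yield name, threat

for element, threat in enumerate_threats(dfd):
    print(f"{element}: {threat}")
```

Re-running this after a migration (on-prem to cloud, new database) surfaces the same questions the talk walks through next.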
You can maintain that DFD, and you can enumerate threats when you're making additions to your application or your platform. You can say, okay, there were threats we enumerated that weren't relevant when we first did these exercises; are they relevant now? We did a database migration from one platform to another: are there availability concerns with this new database that we didn't have with the old one? Did we move it from on-prem to cloud? Who's responsible for maintaining the integrity of these databases, or preventing denial-of-service attacks? We used to be responsible; now are we depending on our cloud provider, or some type of middleware sitting there preventing us from getting DoSed? So it helps builders build better from the very beginning, when you're thinking about what you want the application to be, and throughout the entire process. You can check back and enumerate new potential threats as your threat landscape changes, as your application changes, as you get feedback. It helps defenders think intentionally. You can look at the various contexts and ask: where are we most likely to be attacked? Is it a physical attack on our internal database, sitting in a data center protected by an armed guard? Or is it our front end, people trying to abuse our API? So you can prioritize where you're looking for attacks, and you can prioritize where you're implementing defenses. As a red teamer and offensive security person, it helps me prioritize attacks. If I have an idea of what's going on, I can ask: what are the different attacks I can do here, and what's gonna give me the highest rate of success? And then using things like attack trees helps me brain-dump and be methodical and intentional about the paths I'm taking. In the end, it helps everyone make intentional decisions about data responsibility. Once you make your DFD and you identify that, oh yeah, all the data is sitting on our servers, you don't have an excuse. You have to say: we are going to handle this risk. We're going to
transfer it, we're going to mitigate it, or we're going to be intentional and say we just don't care. And then you become responsible for that decision. And as end users, we can think more proactively about where our data is going, how it might be flowing to places we don't necessarily consent to or desire, and question: do we actually want to engage with this, and how do we want to engage with it, to a level that makes us feel comfortable? Great. So I also want to take another soapbox moment. Another talk I'm giving is on red teaming and why red teams shouldn't actually be that special in an organization. There's a whole bunch to that, but I'm looking for stories of security teams, red teams, or third-party pen tests that have caused some level of breach against an organization. I just need a couple of scare stories. Feel free to talk to me after; I can keep you anonymous or attribute the story to you. You can also just DM me on Twitter; my DMs stay open. This is just something I want to take the time to ask folks who are around this every day. I have a few of my own stories, but it's on the fence whether I can talk about them or not. So if you have any good ones, please share them with me. And with that, if I have time, thank you, and I'll take any questions. Great. I'll be around if you want to talk to me in person. Thank you. I'm showing on. I still got... ooh, hello. I just wish I was a rap star or something.
How's everybody doing today? Doing good? Second day of B-Sides? Just two more days to go? I got a big line of people coming in. All right, there's plenty of heckler seats. I usually don't throw too many things. So I have my infamous Swag Hunter shirt: if you got some cool swag or something cool, show me, and you can have one of my Swag Hunter shirts. I do prizes throughout the year, so, I tell people, my Twitter handle is on here. All you have to do is tweet a picture of the shirt, either on you or someplace, anywhere in the world, and put your name in for a drawing for, you know, a Raspberry Pi, something. I don't know what I'm going to give away this quarter, so, something cool. Well, we'll wait till this group comes in and we'll start. How about that? Yeah, I got time. Are we live yet? We're live already? Man, I'm a YouTube star now. All right, hey YouTube. Y'all are so quiet. Why is everybody so quiet? I mean, the people on YouTube are, you know, louder than y'all right now. Awesome, awesome. So who wants a Swag Hunter shirt? Oh, we got a first hand, look at that, a hand already came up out there. Good. All right, let's see if we can make this all the way. That's close enough. It made it to the AV guy, so he might
have to help you get that. All right. So welcome, day two, first session after lunch. Hopefully everybody got something to eat. That was really, really cool. I personally had, what did I have? General Thai chicken, instead of General Tao's chicken. You know, it's okay. Let's see here, turn on my little dealie here. I tend to walk around, so the YouTube video might show my head being chopped off. Oops. Okay, I need to fix this slide, but there we go. So I was informed yesterday that this is indeed not a beanie, that this is a toque. So who wants a toque? Good lord, there we go. Do I have any in the back? Man, hopefully I have some. Okay, there we go. Look at that. Okay, I'll give some more out. We got some more; I'll pass those out. I don't know where my cohort is; they're supposed to be helping me pass them out in the back. Oh, look at that, front row. Look at that. I got to go to the back, because the front always gets them. Are you trying to take my beanies from me? Man, you're not touching my beanies. Come on now. It's a toque. See? I got corrected again! No, you don't get one if you can't say toque. There we go. Okay, so hopefully we've got this settled. It's called a what? Right. Good lord. So, uh, oh. And this is just for demo purposes. This is the demo for my entire presentation,
where I demo the use of the toque, and then this scary guy. I'm not sure where he... he was in the audience earlier, I'm not really sure. He's a B-Sides terrorist or something. We'll go with something like that. So, welcome. I'm Dave Balcar, a security strategist with Carbon Black. There's my Twitter handle. I'm a security researcher, and I also go by the crazy title of cyber profiler. You've heard of FBI profilers and stuff: I profile advanced attacks, mainly specializing in APTs. My background is pen testing, incident response, and forensics. I've lived a colorful life, to say the least. My talk here today is kind of a hodgepodge, so please ask any question you want. They just better be good. All right. So, you've heard the talks about the threat landscape and stuff. Let's take another look at the threat landscape. First of all, I like to celebrate birthdays. Anybody's birthday? Nobody's birthday? Wow, that's pretty rare. That's really rare. No one will admit it. I know, right? I take funny pictures of security cameras around the world, and this guy was having his birthday. Sometimes you put up the awesomest security program out there, and what do people do? They drive right around it. Then you put your important documents in the wrong drawer. This is another use for duct tape; I took this at a Subway near my dad's house. And
it was funny, I need to go take another picture of it, because my dad calls me about two months after I took this picture and he goes, "Dave, you'll never believe it: they replaced the camera, but the cable was hanging out of it." So they didn't even hook up the camera. Sometimes you hire a security company and they don't know how to do security. Sometimes you go get your oil changed at the car place, and yes, that is the real password. I cannot make this stuff up. That is my name on the invoice. It's a little fuzzy, but yeah, I did not give her my credit card; I gave her cash, for sure. And I proceeded to tell her to get rid of the password. If you're ever in Brazil, this is BYOF: Bring Your Own Fire Extinguisher. I'm just saying. Oops. This is the math question for the presentation. Okay, who can dial 999? Anyone? Anyone? I gave a similar talk to PhD candidates at MIT last year, and I showed this same picture, and I got blank stares for like five minutes. I'm like, guys, it's a simple multiplication problem. Let's keep it easy. Again, a phone with poor security: you can only dial 911 on this phone. If you're going to put a firewall up, make sure it covers your entire environment. You never want this sign in your security department. Hence the title of my talk, "Security is a Mission, Not an Intermission": don't take a break from security. Sometimes you're flying down to, I think I was going to Brazil at the time, and I quickly grabbed my phone, and I'm chasing her down the hallway as I'm trying to take this picture. If you can't spell security, you probably should not be in security. The things I find. If you want to know where the best noodles are in town, ask this guy. I'm not sure if they're at the food court downstairs, but this guy knows exactly where the best noodles are. And I will pause for dramatic effect. I hear from the audience that you've been to this building too. This guy's
got the best DR plan in the industry: he's got a private contract with the fire department to make sure he can get his servers out of the building no matter what. Really, really cool. So what do we have out there? You've heard talks the last couple of days; there's lots of scary stuff out there, right? Everything from the human factor to cybercrime as a service and ransomware as a service. I got this one site where, for $175, they'll produce the ransomware, they'll push it out there, they'll set up your bitcoin for you, they'll keep 10% of your bitcoin, and they'll provide 24/7 tech support. For 175 bucks. Really, really cool. My biggest
thing here is attacks on third parties. And targeted attacks, which are my specialty, go undetected for a long, long, long time. I've seen targeted attacks go on for four years in an environment. I know there was one just recently where the attack was in the environment for two years at a pretty large corporation. And we've definitely seen ransomware. How many people have been hit by ransomware? Again, that's like the birthday question; no one wants to answer it. I'm not sure why. But I got one, one guy. So here's a deal that just came out. This is, and I cannot pronounce this, DarkVishnya? Yeah. So not only do these guys do social engineering, this is more taking physical security and breaking into stuff. Now, when I do pen testing, I do lots of physical security stuff, and I'm always going to get in, right? I'm not going to dress like this; I might walk in with a three-piece suit, I might walk in with a brown shirt on carrying a box. You never know how I might get in. But these guys got in and used basically off-the-shelf stuff: a Raspberry Pi, this little device. Yes, I do have it set to steal passwords in 13 seconds or less, even if your machine's locked. So if anybody's got their laptop out... anyone? You don't want to test it? Okay. There's usually at least
one that will test it. So they got in, and they also hooked it in: once they got into the institution, they plugged it in, and then they had access to the network from outside the building. Scary stuff. Anybody know what these are? Has anybody seen these? These are cool. These are CloudPets. CloudPets were cool because you get this pet, you get this app on your phone, and you can type messages or speak into it, and your voice will come out of the pet for your kid. Okay? I take great concern in this. The problem was this firm was hacked. They lost everything: millions of personal messages, 800,000 user accounts. The problem I'm going to have here is with the company. Now, granted, the attackers are bad, and I do not call them hackers: cyber criminals. I'm a hacker, not a criminal; big difference. The problem here is the company was stupid. They kept all these messages, and these messages were going to kids, some of whom were in hospice, some of whom had cancer. The attackers asked for a ransom; people didn't pay the ransom, so they released all these messages. And that's scary stuff. Just think about if you had a child with one of these and your stuff got released. That's personal, right? I definitely would not want it. But this isn't the first time this has happened. In 1999,
there was this thing called a Furby. The U.S. government banned it; I was in the U.S. military, and I saw the signs on the Pentagon that said Furbies are banned from the building. Why? Because they were recording devices. So just imagine: that's the CloudPet, right? It's a recording device. Imagine what you could do with this. It's been happening for a long time. How many Intel people do we have? A few people use Intel. Everyone else uses AMD? Okay, I got two people using AMD. Come on now, show AMD some love. So we've heard about this: basically a hidden processor inside your main processor that can take over the entire machine. And there's Spectre, Meltdown, all this kind of stuff around it. It runs Minix. So if you hear anybody say Windows is number one, call BS and say, nope, the number one operating system on the planet is Minix, because Minix has shipped with every single Intel processor for the last seven years, going on eight now. Here's a USB exploit covering nine years of Intel processors, and I cannot confirm or deny that my little device here has that code on it, but I'd be willing to test it out on any willing participant. Google, as of a year and a half, almost two years ago now, is actually working to remove this from their environment. Why do you think they want to remove it?
The security implications, right? Because it can get full control; it doesn't matter what security controls you put on, they're still going to get it. Crazy stuff. So what else is out there? You gotta love the spirit of these guys. They're in jail. You lock up the cyber criminals, so what do they do? They get inventive, right? They build their own computers and hack the prison network. So this is what they did. They did a few things: not only were they hacking the network, they were looking up tax fraud, credit card fraud, and, you know, the little badge readers for the badges to go through the prison? They were making their own. Now, you always hear the term "stupid criminal." I'm sorry, I'm not a smart guy, but I guarantee you, if I'm cloning those HID cards, I'd be like, beep, okay, I'm out. It would take literally three minutes and I would be out of the building. They stayed in prison. I guess the food was better, I don't know. I can't make this stuff up. We never want to hear this; we don't want to hear people in the space doing bad stuff. This guy was doing ransomware. This one's bad: a piece of ransomware that deletes Veeam backups. How many people run Veeam backup? A few people? You probably don't want to see this in your environment, do you? Because what happens during a ransomware
attack? You want to restore from your backups, right? Well, if you get hit by this guy, you're wiped out. It specifically looks for Veeam backups. This one's funny: the Chicago Police Department got hit with ransomware. And here's a 16-year-old, this is a couple years ago, who hacked Apple. I don't understand why companies can't figure out that 90 terabytes, or 90 gig, is missing from their network. How can you not see that going out the network pipe? Who watches their network for large data transfers? Just a few people. You've got to be able to watch for this stuff; it's not that hard, really. You can look at the packets, because most of the cyber criminals out there are stupid: they'll just dump everything, because they want to get in and get out, right? They just want to dump everything they can. I mean, look at the Echo hacks. They lost everything, right? That was terabytes of stuff, and you would think a big company would be able to see that, but obviously not. This one I threw in there; this is from American Eagle Credit Union. If you drink Anheuser-Busch beer, this is their credit union. The reason I put this up there is because of this statement right here: "It's still not clear how much money total was stolen or how many depositors were affected."
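On the "who watches for large transfers" point: even a crude per-host byte count over flow logs would catch a 90-gig dump leaving the network. A minimal sketch, where the record format, addresses, and the 1 GB threshold are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical sketch: aggregate outbound bytes per internal host from flow
# records and flag anything over a threshold. Real deployments would use
# NetFlow/sFlow exports and per-host baselines instead of a fixed cutoff.

THRESHOLD_BYTES = 1 * 1024**3  # flag more than 1 GB leaving per host

# (source host, destination, bytes out) -- stand-ins for real flow logs.
flows = [
    ("10.0.0.5", "203.0.113.9", 40 * 1024**2),
    ("10.0.0.8", "198.51.100.7", 90 * 1024**3),   # ~90 GB going out
    ("10.0.0.5", "203.0.113.9", 12 * 1024**2),
]

def flag_exfil(records, threshold=THRESHOLD_BYTES):
    """Return hosts whose total outbound volume exceeds the threshold."""
    totals = defaultdict(int)
    for src, _dst, nbytes in records:
        totals[src] += nbytes
    return sorted(host for host, total in totals.items() if total > threshold)

print(flag_exfil(flows))  # only the host that moved ~90 GB is flagged
```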
The bank couldn't even tell you how much money was missing, or how many people it affected. What does that mean? They weren't monitoring their systems. Very bad access controls. Okay. This is another backup one. I'm actually from Texas, so I had to throw this one in there. They lost everything in this ransomware attack, everything from their body cam videos on down. But they had a backup. Did the backup work? Well, they only had one backup: literally, every day it would do one backup. And guess when the ransomware hit? They backed up the ransomware. All their files were encrypted, so they lost everything. You can't make this stuff up. So let's take it to the next level, nation-state stuff, where I like to live, right? Stolen certificates. This is the JMicron certificate that was stolen several years ago. This is happening more and more, because what does the OS believe, right? If you've got a valid certificate, what's it going to do? No problem, just go ahead and install, right? And what does most anti-malware software look for? If it's a valid certificate, do you think they scan it? Ask your vendor, ask the people that are helping your network, and if it's not scanning signed stuff, just don't trust it. Zero trust. Okay, this stuff is up for sale. How much do you think these certificates are being sold
for? Anyone want to take a guess? Come on, you're on YouTube. A thousand bucks? That's pretty close, but not quite. Anyone, for a Swag Hunter shirt? No, he was the closest so far. He's like the volunteer. Wave. Twelve hundred? Who said twelve hundred? Where's the hand? Okay, we'll get that over there. Twelve hundred, on the dot. I've seen them go as high as $2,500 for code signing certificates on the black market. It costs more to buy a code signing certificate than drugs, guns, or passports. That's really scary. Why? Because they can use that to break into your network: if a piece of malware is signed with a code signing certificate, it's just gonna run. So, the most sophisticated malware ever was from the Equation Group, back in 2015, though it had actually been going on for quite some time. This one's really scary because it's still out there. This was malware that infected the firmware of your hard drive. And guess what? It doesn't matter what OS you run: you can format the drive, you can do anything you want to it, and because it's in the firmware, it constantly just re-infects your machine. Okay? Cyber espionage tool. This tweet came out a while back. This is a Cisco switch that was bought off the shelf, and look what was inside of it: a cyber espionage tool that redirected traffic to another nation. How many people
open up their equipment when they get it? There's three of us in the whole room. At my house, because I work from home, I have a 15-year-old son, and he thinks it's awesome when the UPS guy shows up, because it's like Christmas: we get to tear stuff apart and look for stuff like this. Pretty scary stuff, but most companies don't do this. They don't open up their routers, they don't open up their switches, because guess what? There's that little sticker on there that says opening it up voids your warranty. Granted, they changed that law in the US, so they can't void your warranty anymore, which is good. But scary, scary stuff out there. I'm sure everyone's heard about the Supermicro bug, which is right there; it's about the size of a grain of salt or a grain of rice. I love that Apple came out and denied it, denied, denied. But who do you believe: the researchers behind it and the Bloomberg report, or Apple, Amazon, and Supermicro, who don't want to admit the supply chain was hacked? How do you prove the supply chain wasn't hacked? The supply chain is very, very vulnerable. I love this one: you get these pop-ups. On one of my test phones, I saw this pop-up in December; Google Maps was getting spammed. Google still hasn't said what was causing this. It's like, how is this stuff happening? Crazy stuff. I love
this one because I like Tesla. This Tesla was stolen in Minneapolis at the Mall of America. They actually have a rental car business for Teslas there. What the guy did: he'd rented the car about a month before and took down the VIN. Then all he did was call Tesla and say, "Hey, I've got this VIN, can you attach it to my account?" And they did, without verifying who he was. He got on the app, hit unlock, and drove off with the car. He disabled the rental car company's GPS, but he couldn't disable the tracking that Tesla can do. He made it a thousand miles before they got the car back. Yeah, scary stuff. And here in Canada, y'all have some crazy people too. They decided to rob a Bitcoin place, because, you know, bitcoins are actual gold coins. Can't make this stuff up. They did not leave with any money; they tried to transfer Bitcoin out, but that didn't work either. This was in Ottawa. So that didn't go anywhere. How many people use Adobe? A few people. You better patch your Adobe, or turn your speakers off. I can't make this stuff up. Oh my God. 80% of Canadian IT pros said their firms were breached last year, from a survey; this is from March 13th. So, let's take a survey: how many people were breached? Again, it's like the birthday question; nobody ever wants to... Okay, we'll do it this
way. How many people have Sony PlayStations at home? Okay, you've all been breached. You've lost all your stuff. Okay, I have a PlayStation at home too. Sony got hacked 13 times in two years. The PSN was one of them. They stole 77 million accounts: all the credit cards, addresses, all that kind of information. They can have mine. I don't use my real name on it. I don't use a real credit card. I get one of those reloadable cards that has nothing tied to it, so I lost like $3 when Sony got hacked. HP laptops, how many HP laptops we got? Again, we're back to the birthday question. You know, this is going to be a recurring theme with this crowd. From the factory, a key logger was
in the audio driver. How many people reload their computer from scratch when they get it from the factory? Good, good, good. Everyone that does that... you know, just for that, front row right there, come on up, you get a shirt for that. That's awesome. But I'm not going to leave anybody out. Let's pick on Lenovo: a hard-coded password to bypass the fingerprint reader. And no, I'm not going to give you the password. See me after class. Okay, can't leave out Dell. If you're going to reset everyone's passwords, don't use the phrase "potential security breach." Because obviously, if you're resetting everyone's password, you know there's a breach, and it's not potential. So what are we doing to protect our data? Are you pushing it off-site?
Air gapping it? How many people air gap their data? Just a couple? Okay. Which media? Who's using tape?
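Whatever medium you land on, a backup you've never restored is a guess. Here's a minimal restore-check sketch in Python (the file and directory names are invented for illustration, not from the talk): hash the original, copy it to the "backup," restore it to scratch space, and compare digests.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so we can prove the restored copy matches the original."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, backup_dir: Path) -> bool:
    """Back up a file, restore it to a scratch location, and compare digests."""
    backup = backup_dir / original.name
    shutil.copy2(original, backup)                  # the backup step
    with tempfile.TemporaryDirectory() as scratch:  # the restore test
        restored = Path(scratch) / original.name
        shutil.copy2(backup, restored)
        return sha256(restored) == sha256(original)
```

The same idea scales up: schedule a periodic job that pulls a random file off tape, restores it somewhere disposable, and alarms if the checksums ever disagree.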
I'm the only one still... oh, I got one that's still raising his hand? Oh my god. Okay, so throw those back there to him, please. He's in the back row on the right-hand side. Raise your hand. You get a toque for that. That's awesome. How many copies of your data? I believe in the 3-2-1 method: three copies, on two different media, one off-site. Four copies there. Anyone else? Four copies, that's awesome. That's pretty much a trend. Anybody only do one copy, like that police department? I saw someone raise their hand back there. You know who you are. Tape is still really good, because guess what tape can't do? It can't get hit by ransomware, because it's offline, right? And you can air gap your backup strategy that way so
you can restore your data. Because what happens when you use a NAS or something like that to back up nearline? What happens? A lot of people use shares, and it just gets encrypted and dumped, and you're toast. Okay. So, if you've never heard of the ITRC, it's the Identity Theft Resource Center. It's a great place to go check out breaches. In 2018, they reported over 1,200 breaches. As of February 5th, there have been 84. Oops. Just go to idtheftcenter.org. Really, really cool place to go check stuff out. And let me check the time... I still have time, I think. Good. So, I'm going to leave you with Dave's top 10. All right, guys. Always treat your network as hostile. Always assume that you've been breached. How
many active threat hunters do we have in the audience? Just two, three? You've got to be threat hunting your environment. If you're just relying on the software you've got to find stuff, what if it's buried so deep it can't find it? You've got to do some active threat hunting. Nobody's immune. Have a post-breach strategy. How many people have DR plans? How many people have a breach plan? About the same amount. That's good. A few different people, but that's good. It's still only like 20% of the entire crowd. Don't forget about the insider threat. Remember, if Dave can walk into your building and plug this in, or any one of my other devices, it can get really scary. How many people use card readers, badges, to
get in their building? Everybody? In the palm of my hand, real easy and small, the smallest RFID cloner out there. I can clone four RFID cards within six seconds. Really, really cool. I'm fun at parties. Okay? I'm telling you, it's funny. Okay. Always do number two. Okay, we'll come back to that one. Know your network and log everything. Who's logging everything? Just a couple people, like using a SIEM or something like that? Okay, syslog even? Yeah. Because as an IR person, when I come in to do an IR, I ask a few questions, and I need to see the logs. We've got to go back, right? We need to go see where patient zero is. Backup, backup, backup. Test your
backups. You wouldn't believe how many people don't test their backups. I've seen companies back up for one, two, three years, and I always ask them, have you ever tested your backups? And they go, "No. Why do we need to test our backups?" Crazy. Train your security staff. One of my biggest pet peeves is not enough training. Thank God you're here. Thank everybody for coming to B-Sides. Good training moves everything in a positive direction, for sure. Security awareness from the top down, because who loses more laptops at airports? The executives. I had this one executive, a senior director, senior vice president, blah, blah, blah, of a bank, and he lost his laptop twice
in two months at the same airport. Okay, I'm doing good on time. Patch... oh, here's number two. So number two is patch, patch, patch. You've got to stay patched, and test those patches. Okay. Again, refer back to number seven. It's a programming joke. I put you in a loop. Okay. See, I get a much better reaction when I give that one to a DevOps team, for sure. And I'm sorry Ryan Reynolds was not here for this talk today, so I'm doing the best that I can. Any questions, for socks, a toque, or a shirt? Something? Are you all still out there, Twitter? Talk to me. Anybody? Yes. Oh, hold on. We need to get you a microphone. That
way the awesome crowd on Twitter can hear you. "I just want you to comment on the Huawei deal. What's going on with that?" There's always one. Having been an investigator and done what I've done over the years: show me the proof, okay? That's the key, right? You've got to show me the proof. And there is some proof there, unfortunately, for some things. And it's hurting the entire industry, and not just them. There are other security companies that have been dragged through the mud. I don't want to see a world where we get to the point where we're bashing security companies that are trying to do the best they can for people, and then, oh, you're blocked from this country, you can't go into that country.
What's that going to do? That hurts everybody. Because then you get to a point where, okay, I can only buy software made in Canada from a Canadian company, or I can only buy from the US, or only from France, you know. That's scary. That's my biggest point. It's a slippery slope. Now, if there's direct evidence of fraud, back doors, all that kind of stuff, then yeah. I mean, there's a certain manufacturer of routers and switches that has been caught many times with hard-coded backdoors, and you don't see them getting dragged through the mud, which I totally disagree with. They should be dragged through the mud. Anyone else? We'll get you a microphone.
"So far, it seems to me, we've been playing catch-up. In fact, in a lot of cases the hackers know everything we're going to do anyway and already have it in place, and what I've found in the past two days is that most everybody is way behind. So can you address that? Are we doing enough? Are we not doing enough? Your top ten are good rules, but I learned those 15, 20 years ago." Exactly. I'm just preaching to the choir, but I've got to keep preaching, you know? Yeah, I mean, you could call it offense versus defense, right? The problem is that the offensive side has the newest, coolest stuff. They've got the financial backing, because that's their risk-reward. From the
defender's perspective, it goes downhill, because the executives at the top view IT as a cost center until they view it as an integral part of the business. I had this discussion with a CIO and a VP of a company, because they had issues with their email. They yelled at us: "Well, your spam filter blocked one of our contacts, a big contract." And I'm like, the spam filter blocked a contract? And I asked them, how much was the contract? They're like, a million dollars. Why would you send a million-dollar contract through email in the first place and not call and say, "Hey, I didn't get that"? And
this was a company that was getting about 40 million emails a day, and they had like one false positive. So that's not the security team's fault; they're trying to save them. That's the big thing. And then I asked them, okay, what if this IT team walks out today? How are you going to do your business? "Well, we do everything on a computer." That's the key thing, right? It's that double-edged sword. Until we change the hearts and minds of the executives so they really realize how important security and the IT groups in general are, then yeah, it's a losing battle on the defender side, unfortunately. What makes it worse is, when I'm
talking about it up here, like the nation-state stuff, well, guess what's happened? The cyber criminals are getting those nation-state tools. If you look at the Shadow Brokers, Vault 7, Vault 8 leaks, that stuff's out there. I mean, a script kiddie can take this stuff and bash your network. Look at WannaCry. It used not one, not two, but three exploits from a nation-state, which is crazy. You saw how fast that spread. We're just waiting for the other shoe to drop again, because there are still more advanced tools out there, for sure. Next, anyone else? Yes, sir. How much time do we got? Do we got some more time? We're still good. Okay.
We're not going to let the other guy speak. Hi, YouTube. "So if you've got a small shop without a huge amount of resources, and you've basically just got a big mess of AWS, local laptops, desktops, Mac, Windows, and it's all just a big mishmash of everything, where do you start?" Log everything. "Asking for a friend." Yeah, asking for a friend, yes, for sure. You've got to start with the logging piece. There are some good free tools out there to do some log correlation, and know your environment. Those are the two biggest things, right? You've got to know your environment. I don't want to walk into an environment and
say, "Oh, what does your network look like?" and have them look at me with strange eyes, because they don't know what's out there. I can't tell you how many pen tests I've done where I found stuff on their network that they had no idea was there. But if you've got the logs, then you've got something to start with. Because you can see: oh, I'm being attacked; this is the kind of malware that's getting in; this is what's going on; why do I see this crazy traffic from here or there? You've got to be able to see that stuff. And you can do it pretty cheap. There's some good stuff out there that's
low-cost and even free. Search GitHub. Or you can ask me afterward, and I can help you a little bit. Anyone else? Yes, sir, in the back. We're going to run a microphone back to you real quick. And I've still got some toques and some socks. "Thank you for the presentation. Just a question: what are your favorite sources of trusted information security information?" Twitter. Coming from the intelligence community, there are all kinds, right? You have to take it with a grain of salt, because there's not one source. There are some really good articles out there on open source intelligence gathering. You've got to take a little bit here, a little bit there. It depends on
what you're trying to actually get intelligence on. Is it malware? Is it network attacks? Is it web-based attacks? Is it SQL injection? Is it cross-site scripting? Is it Dave walking through your front door with a Bash Bunny and taking your network down? There are all kinds of open source resources you can use to get that. Okay? Look at what's happening out there. Look at some of the places reporting this stuff: Slashdot, Dark Reading, Brian Krebs. Start there, and then you can build up a list, but you need to tailor it for your environment. Because if you're in healthcare, you've got to look at one set of intelligence. Now, I deal in ones and
zeros, because to me it doesn't matter what the malware is, but there is vertical-specific stuff attacking you, and you've got your own set of problems. Medical? You've got insulin pump problems, you've got heart monitors, you've got MRI machines that are still running Windows XP SP1. Okay, I won't name that hospital, but you see what I'm saying. Does that help? You're welcome. Anybody else? Do we have time? We've still got time for a couple more questions, not a problem. For a toque? Nobody wants a toque? I've got two left. Nobody? Well, thanks, everybody. If anybody needs anything, I'll still be sticking around for a little while. Thank y'all very
much. Hey guys, check, check. There we go. Hi, everybody. How's it going? I have the after-lunch slot, also known as nappy time, so hopefully you all stay awake. My name's Josh, Josh Sokol. I came all the way up here from Austin, Texas. So yeah, I'm that American. You guys are missing out on an awesome opportunity to troll President Trump, by the way. If you guys built a wall, that would be amazing. Give it up for the conference staff. This is an awesome conference.
I've thoroughly enjoyed my time here thus far. This is also an incredible city. So I'm here with my wife and my family. We're here on spring break, and we're taking the opportunity to explore the awesome city that you guys have to offer. I did some hiking yesterday. I'm going to go check out some whales. So thank you all for having me here. I really, really appreciate it. As I mentioned, my name's Josh. A little bit about me, I'm a former OWASP board member. I know Tanya talked a little bit about OWASP in her talk this morning. OWASP Foundation is an awesome organization. You guys have a chapter right here in Vancouver. If you want to
learn more about application security, secure development, DevSecOps, whatever, it's a great resource and everything is free. I'm also the creator of a free, open-source risk management tool called SimpleRisk, so go check that out if you need anything to manage your risks. And in my day job, I run the security team at a company called National Instruments, a large company based out of Austin, Texas, and I've been working there for over 12 years now. This talk actually started out as me theorizing about what life would be like if I had a greenfield. NI is a company that's been around for 35, 40 years, and a lot of things
are kind of set in their ways. It's really hard to change things once you get going in a certain direction. And so I started thinking about what would it take to move us to a new space? The cloud is an amazing opportunity for us to do that, for us to start over, to start new, to create new ideas, new concepts, but the whole idea of moving forward also means that you have to recognize the mistakes that you made in the past. It means that we have to figure out what those mistakes were so that we don't repeat those same mistakes again. So to start us off, I want to talk about what is the cloud?
Because everybody has different definitions of it, and I want to make sure we're all on the same page here. And it really depends on whose shoes you're walking in, right? If you're the customer, cloud is just another term for software as a service. It's running in somebody else's data center. It's doing the thing that you're paying them to do, but they're managing the infrastructure. They're managing the security. They're managing all the things around that. If you're the hosting provider, it really just means I'm running in somebody else's data center. And so the challenge, when I started SimpleRisk and started to do software-as-a-service work on the SimpleRisk side, was how
do I take this tool that was originally built as something you installed in your own environment, the free open source tool, and how do I turn it into something that's available in the cloud? How do I make it this big thing? SimpleRisk was released open source, but this hosting business came as a result of all these customers who said, "I don't want to host it myself. I want to basically transfer all the risk to you." And so the cloud in this case was leveraged for agility. It was something where I didn't have to buy my own servers, I didn't have to create my own network, and I could just build all this
stuff in there. But because of the data that we're hosting, this is risk data, data that belongs to the customer, security was a huge concern. We wanted to make sure that data was safe. And so the SimpleRisk architecture is pretty basic: an Apache web server (in some cases we use nginx as a proxy, in some cases you can use IIS), the code written in PHP, and a MySQL database back end. I know what you guys are thinking. This is a security conference. You're like, eh, PHP, right? Tanya was thinking it, I know that. The reality is it really doesn't matter. There was an interesting study back
in 2014 where they basically asked: what's the number of security vulnerabilities we find based on the different programming languages? And what they found was that PHP really wasn't any different from all the other languages: .NET, Java, ASP, it doesn't matter. They all have security vulnerabilities. And really, what it came down to was the person. It came down to the developers who were writing the code. And for me, it was really just the language I knew. WhiteHat also said that, at present, all these aforementioned items, the language, the industry, the organizational size, the development process, whether it's waterfall or agile, don't matter, and if they do, they only
vary slightly under very specific conditions. So they basically determined that it really doesn't matter what language you're programming in. So in my case, why did I use PHP? Well, first of all, the same code worked across many different platforms. I can support my SimpleRisk code in a Windows environment, a Unix environment, a Linux environment, and it doesn't matter. Secondly, installation was super simple: it was apt-get install php, right? So super easy to get on there. Lots of documentation for the things that aren't so simple. Probably worth reminding everybody here: don't just copy and paste code from Stack Overflow, that's bad. Make sure that you know what it's doing, make sure that you understand why
it does what it does. And then, I knew PHP better than any other language. I had originally poked around with some other things like Python, and I was like, dude, why are you doing this? Why don't you just work in the language that you know? So that's what I did. That said, generic coding best practices apply. So when I was writing SimpleRisk, it was things like input validation, output encoding, hashed and salted passwords with hash-only comparisons, industry-standard crypto algorithms (don't write your own crap), authentication, authorization checking. These are the things that you find when you go to an OWASP meeting, when you learn about OWASP, when
you look at the OWASP Top 10. These are the kinds of things they tell you. So I'm not going to tell you guys about all the best practices for coding; OWASP has great resources on that. What I am going to tell you about is the cloud, the environment, the architecture that I used to build this application on top of. And really, it started with the architecture. The architecture was this web application that we said is secure. It's a secure web app, and we know it's written in PHP. So the first question that you guys should be asking is, how do I keep it secure? And so we have to make
sure that we're updating the application. So one of the things I did is keep the application in a private GitHub repo, and I control access. Access control is extremely important. The people who can push changes into that repo are people that I know, people that I trust, people that I pay to do things, right? So access control in this cloud environment is going to be incredibly important, and we'll see that in a few other places as well. The next thing is auditability over who makes changes and what changes are made, because you want to make sure that if something bad happens, you can track it back to the bad actor. You can track it
back to: this was the code that screwed everything up. And the third thing is easy rollback of changes that break things. And I'll raise my hand and say I've broken things. Another developer was working on something, I was working on some other code simultaneously, I checked my commit in over the top of his, and all of a sudden things don't work, and it's Josh's fault. So we want to make sure that we can always roll back, we can always revert the changes if we need to. The next thing is automation. Tanya talked a little bit about the DevOps stuff. Automation is key when you're doing anything DevOps. It's: how do we make this so
that things happen automatically, so that the security checks happen automatically? In our case, we use automation tools to check for updates and to do deploys. The servers know when new code is made available, and the code automatically gets rolled out to those servers. It minimizes deployment mistakes; there's no manually copying and pasting stuff and, oh crap, I forgot that file. And it ensures that the server configuration is consistent. If I need to roll a new server for whatever reason, it's just a matter of running the script to pull the stuff, and it automatically just works. You can also have separate dev, test, and production environments, because we all know it's a bad
idea to run dev code in production and production data in dev. So at this point, our architecture looks something like this. We've got our web application server in the middle. We've got GitHub, which is where our code is coming from. And then we have some sort of orchestration server, the server that's actually going to make sure that things are happening appropriately on that web app server. And we might have multiple web application servers, multiple customers, things like that. So we have to have some way to automate and orchestrate that. This leads us to hardening our servers. We need to make sure that our servers are secure. And that means
ensuring that routine patching of the OS and applications is happening. It means turning off unnecessary services, enabling a host-based firewall with default-deny rules, and allowing administrative access only from a bastion host. Now, some interesting things that I've learned since I originally put this deck together: usually when you're working in a cloud environment, the idea is that those servers are immutable. You can kill a server and stand up a new one, and that new one should stand up in exactly the same way. What this means is you're not really patching, per se. What you're doing is creating an image that's already patched. So when you stand up a new instance,
it should come up using your image that you know is secure, that you've already scanned, vetted for vulnerabilities, pulled out all the issues, and said: this is a good image. So when you spin up that new server, it's automatically running with the latest, greatest patches, applications, whatever, already on there. Default deny is also critically important here. Default deny basically says: I know what's going to connect to my application and what my application is going to connect to, because there's an explicit rule that says, this is the service that I need. And with administrative access only allowed from the bastion host, the whole idea is that we're limiting who can do administrative access on that
system. So if I say this is the only host that's allowed to communicate via SSH to that server, what I'm actually saying is that only the users who have access to that system can access that server. I've created a choke point. And what that means from a security perspective is that you can focus all your security controls, all your attention, on that system. So beef it up: make sure you have multi-factor authentication, make sure you're enforcing the access list, make sure the users on there are being audited and removed automatically. You focus all your efforts on that system, and you're denying any other path into those systems.
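What that choke point looks like in configuration is pretty small. A sketch of the idea (the address 10.0.1.10 and the user "deploy" are placeholders for illustration, not the actual setup):

```text
# /etc/ssh/sshd_config on the application server
PasswordAuthentication no        # keys only
PermitRootLogin no
AllowUsers deploy@10.0.1.10      # SSH accepted only for this user, only from the bastion

# Host-based firewall, default deny (ufw syntax, as one example):
#   ufw default deny incoming
#   ufw allow from 10.0.1.10 to any port 22 proto tcp
```

Everything else, the MFA, the access reviews, the auditing, then concentrates on the one bastion host instead of on every server.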
So our architecture now looks something like this. We still have that GitHub repo, we still have the orchestration piece, but we've now introduced a new bastion host. That bastion host is the only thing that's allowed to communicate over a secure protocol to the web application server and to our orchestration server. Now, the SimpleRisk application uses a MySQL database. Again, it's free, it's open source, it was easy to use, easy to integrate. And so our next step is to integrate that database. Standard database security rules apply here: using long, random passwords for database users, right? That's something we've been preaching in information security for decades. Least privilege, so making sure that when you apply permissions, you're saying that user only needs read-only access
and doesn't need anything else. Why would we give it write access, or the ability to execute stuff on the system? So, least privilege. And then restricting the host for the user to only the application servers. What that means is, when I configure the user, in my case the SimpleRisk user, to be able to communicate with the database, I say this user can only connect from these application servers. So nobody can connect to my database from outside; nobody else can communicate with that server, with that port. I use MySQL's native user-and-host rules to prevent that level of access. Only my application servers can communicate with the database. Now there's another
layer on top of that, and hopefully you guys have learned about defense in depth, right? We want to build layers of security into our system. So the next layer is on the AWS side, or the cloud side, where we can do database security as well. We can use native security groups with inbound rules for the application servers. So what we can say is that when this application server communicates with this database, the only thing it's allowed to use is port 3306, which is MySQL, and traffic is only allowed to go from this application server to this database server. So what we're really doing is narrowing the attack surface. We're trying to minimize the number
of points where an attacker could target us, and by narrowing that attack surface, we're actually narrowing the scope of what we have to do from a security perspective, right? Where are the entry points? Where are the exit points? We're now able to shrink the area that we have to monitor, which means we can focus more time, more money, more energy on those paths. The other thing that's critically important here is making sure that our databases are not publicly accessible. A lot of times, the default configuration when you launch a database in Amazon is going to say, hey, this is a publicly accessible database. You want to make sure that you uncheck that, and then you
say, no, this isn't, and then explicitly enable the specific rules that you want to enable there. So at this point, our architecture looks like this. We've added a new MySQL server. That MySQL server needs to be able to communicate with our web application server. But note that there aren't any other lines. It's not like my GitHub can talk to the database, or the orchestration server can talk to the database. This is because of that default-deny rule that we have in place, and because we're enforcing the fact that the only thing that communicates with my MySQL database is that web application server. Right? So we're building security explicitly into our process, and
we're making sure that our attack surface is super small. I've said "attack surface" a couple of times now, so let me dig in a little to what that actually means. I like to think of attack surface kind of like a castle, right? The attack surface of a software environment is the sum of the different points, the attack vectors, where an unauthorized user, the attacker, the hacker, can try to enter data into or extract data from the environment. And so if this picture represents our castle, I'm sure you guys can think of a thousand different ways to attack this, right? I could
throw bricks at the windows, I could kick in the door, when the person leaves the castle I could, you know, shoot them with an arrow, whatever, right? This is a fairly unprotected castle, and that's a problem. But our goal in this situation should be to move from this to something like this. Because when you look at this, the difference is the attack surface. How many ways am I going to be able to attack this, right? Unless I have like sharks with laser beams or something like that, right? It's really difficult to get at this guy. Even his front door, like I assume that's the front door there. Like, I don't know, do you have to
take a boat to this thing? It looks really, really hard to attack. And even if you were to get to that front door, there's probably archers sitting up there and pots of boiling oil and ducks, apparently. Ducks are scary, man. Yeah, so our goal here is to minimize the attack surface. We want to reduce the amount of stuff that we have to do, and we have to increase the effort that an attacker has to take in order to attack our environment, our system. And so in the case of our application that we've created here, this is what our attack surface looks like. All that stuff that I described before, we're gonna throw that in what's
called a VPC in the Amazon environment, or a virtual private cloud. The idea of a VPC is basically that you're creating a kind of private environment with your own IPs and things like that, and that private environment is just that, it's private. Nobody can access any of the stuff in there, it all kind of runs in its own little container. And since then we've also kind of broken this out so we've created multiple subnets here. So what we've created is what we call a public subnet, a private subnet, and a sensitive subnet. And by doing that, we're actually able to say, well, these public things are systems, they're not really public, right? But they're systems
that somebody could potentially access from the outside. So think load balancer, right? That's something where you have to have some ports open in order for somebody to access the application, like port 80 or port 443. The private subnet is the systems that that load balancer talks to, so think the application server in this case. Our load balancer needs to be able to communicate with those. So we can now explicitly say: this load balancer can talk to this system, the web application server, on this port, narrowing that attack surface. And then we take that a step further, and the sensitive subnet is our database. So our database is actually multiple layers in. So for
somebody to actually reach that database, they'd have to be able to find vulnerabilities across the whole stack. They'd have to be able to find something in the load balancer, they'd have to find something in the web application server, and ultimately they'd have to be able to use the web application server in order to hit the database on the specific port. See how we're narrowing that attack surface? So at this point, if we talk about what's outside of this environment, we'll add some stuff to the VPC for connectivity in a sec, but if we're talking about what is our attack surface right now, right at this very moment, it's two things. It's GitHub, right? If somebody
could stick stuff into my GitHub, then that code would end up on those servers. And it's Amazon. If somebody has access to my Amazon console, then they have the ability to get at this environment. Now, we're gonna make that bastion host more public in a second, and we'll have that load balancer in a second, but right now, this is all there is. So would you guys rather protect that, or would you rather protect all this stuff? If I only have limited money, limited time, limited people, which I know we all do, I'd opt for this, hands down. So the next step here is, as I alluded to, we have to be able to enable access. Customers need to be able to access the environment, and our administrators
need to be able to access the environment. And in order to do that, first thing we need is a load balancer. So our load balancer we put up there, the load balancer has access from the internet. So a remote user can hit that system. That's the first time now that we've allowed something into our environment from outside. And because it's a load balancer, it's a very limited number of ports. It's 80 and 443. It's limited protocols. This is HTTP, HTTPS. It's limited destination. It's only going to that load balancer. So we're narrowing that attack surface. We also have to have some form of SSH access, so we'll say SSH to that bastion host, and you
can actually improve security upon that by saying, eh, forget the bastion host access, we're gonna create a VPN, and the only way into this environment is VPN. So now you have to VPN into the environment, right, use your multi-factor and all that, and then know where the bastion host is and how to connect to it, and then you can hop to one of these servers. So now you're improving on this, you're creating multiple layers of defense so that the ability to access this is really hard. From an administrator it's hard, right? Usability starts to go down, but security goes way the heck up. We also still have our Amazon console and we still have GitHub.
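The tiered access rules just described can be written down concretely as AWS security group ingress rules. Here's a minimal sketch in the shape that boto3's `authorize_security_group_ingress` call accepts; the group IDs, the VPN range, and the ports chosen here are invented placeholders, not the presenter's actual configuration:

```python
# Sketch of the tiered ingress rules described above, as plain data in the
# shape boto3's authorize_security_group_ingress() accepts. All group IDs
# and CIDR ranges are hypothetical placeholders.

INGRESS_RULES = {
    # Public subnet: only the load balancer is reachable from the internet,
    # and only on the web ports.
    "sg-loadbalancer": [
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
    # Private subnet: the app server only accepts traffic from the load
    # balancer's security group, not from arbitrary IPs.
    "sg-appserver": [
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "UserIdGroupPairs": [{"GroupId": "sg-loadbalancer"}]},
    ],
    # Sensitive subnet: the database only accepts MySQL traffic from the
    # app server's security group.
    "sg-database": [
        {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
         "UserIdGroupPairs": [{"GroupId": "sg-appserver"}]},
    ],
    # SSH only reaches the bastion, and only from the VPN's address range.
    "sg-bastion": [
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.8.0.0/24"}]},
    ],
}

def internet_reachable_ports(rules):
    """Return the set of ports open to 0.0.0.0/0: the public attack surface."""
    open_ports = set()
    for perms in rules.values():
        for perm in perms:
            for r in perm.get("IpRanges", []):
                if r["CidrIp"] == "0.0.0.0/0":
                    open_ports.add(perm["FromPort"])
    return open_ports
```

With rules like these, the only thing the internet can reach is the load balancer on 80 and 443; everything else requires already being inside the VPC or on the VPN.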
So there are those attack vectors into here. All right, the load balancer in this situation also acts as a proxy. So you can actually do scaling. You can create N-tier architectures here where that load balancer, it's not one web application server, it's 10 of them, or five, or whatever. That load balancer allows you to scale within your environment. So attack vectors at this point. First one is insider access. I think the gentleman before us was saying don't trust the insider. And so insiders who have access could potentially compromise the confidentiality, the integrity, the availability of our application. Next one is the attacker compromises Amazon. So they get access into our Amazon console, they can change some of the configurations for the environment, delete servers, whatever. Next one is
the attacker compromises GitHub and they can push malicious code into the environment. The attacker finds a vulnerability in the application in order to gain access to the environment, or the attacker finds a way to access the bastion host. These are the attack vectors. These are the ways an attacker could potentially get into the environment. And once we're at the point where we can explicitly say these are the things that we're concerned about, now we can figure out how to mitigate the risk in this environment. We can figure out how do we address each of these. So we move on and we look at that insider threat. And the first thing to do with insider threats
is something we've been doing for a very long time, which is background checks. Make sure that the people who have access to this environment are people that we trust. They're insiders, but are they trusted insiders? Do we know that they have the skills to be able to handle this environment? Do we trust them from a security perspective? Have we read their background? Make sure there aren't any financial difficulties where they would be incentivized to do something nefarious in there. And then we also want to limit the people who have access. I don't just give my sales guy access to the Amazon console. That would be ridiculous. I want to make sure that the people who
are accessing this environment are people who I know, who I trust, who should have access because that's their job. The next one is the Amazon attack vector. So here again, we want to limit access. We want to make sure the people who have access into this environment are people who should have access. The next one is role-based functional authorization. So what we can say in Amazon is we can say this user has access to these functions because that's their job. So to give you an example, I had somebody who was building out part of this environment with Chef and Docker and doing some cool stuff there and he needed access to be able
to create DNS records. Okay, cool. Does he need to be able to launch EC2 instances? No. Does he need to be able to delete DNS records? No. So when I started looking at what permissions do I give him, the answer was I don't give him everything. I give him just the permissions, the least privilege to do his job. Long and random passwords, that's something we've been doing for a long time. And then multi-factor authentication. So how many of you guys, interesting question for you, how many of you guys use a password manager? Majority, I'd say probably 50, 60%, something like that. For those who don't, your takeaway from this conference, I'm giving you homework, is
to go and look up KeePass. And KeePass is free, it's open source, and it's a password management tool. And what KeePass allows you to do is to create really, really long, really, really complex passwords that you don't have to remember. You remember one password to log into your tool, you can have it use a second factor that's like a key, and then all your passwords are in that KeePass database. Now what this allows you to do is when you have an environment like this where it's super secret and you want to make sure nobody can hack that password or whatever, you want to create a 100 character password with uppercase, lowercase, and special characters, go
for it. Super easy, you just click 100 characters, check the boxes for upper, lower, and special, hit generate, and boom, there's your password. So anytime you're doing something like this, remember: longer is better, random is better. Use a password management tool and it'll help you get there. Multi-factor authentication, obviously the other thing. If somebody were to get that password, now they have access, even if it's 100 characters long, they're probably not going to brute force it, but maybe they do a phish or something like that. So multi-factor makes it so that they can't just use that password in order to access the environment. They have to have something, whether it's a phone or
a token or whatever. GitHub. So again, access control is key. We want to limit the people who have access to this environment because this is a critical environment. Access to GitHub means access to the source code, which means that you could potentially dump stuff on this system. So by limiting who has access, we're limiting that to a small number of trusted insiders who we can infer won't do nefarious things. Ensuring long and random password requirements, again, is incredibly important. Requiring multi-factor authentication. And then auditing. So if we have people who are checking in code into our environment, even if they're trusted insiders, even if they're people who we feel like, man, I've known that guy for a decade and he wouldn't do anything bad, we still need to
make sure that we're reviewing the changes. So code reviews are incredibly important, not just from the bug side, right? We always want to be able to find the bugs and fix the bugs and whatever. We also want to make sure that people aren't sticking back doors into our software, or introducing security vulnerabilities. Even simple things: I had a developer who included a package that he thought was a free package, right? Open source, but it wasn't really open source. It had some licensing restrictions for business use and things like that. Even things like that could be disastrous for a small company that's trying to do things like this. So making sure that you're looking
at the code and that you're sorting out things like that, finding security vulnerabilities, finding functional bugs, inclusion of open source with licenses that don't work, things like that. For the web application attack vector, I talked a little bit at the beginning about the OWASP stuff, incredibly important. Make sure that you're securing your code. Moving to HTTPS by default is also extremely important. There's a bit of debate about do we keep HTTP open and redirect to HTTPS or do we just turn off HTTP altogether? And honestly, my preference is to keep HTTP open and redirect. There will always be people out there, the lowest common denominator, who are hitting HTTP, and we want to send them to the better path as opposed to them just, I don't know, just hitting HTTP
and going, well, application's down, right? Moving on. Securing your HTTPS configuration. So if you haven't found it, SSL Labs has a great site. You can enter your website into there. It'll go test the SSL configuration for your web application and it'll come back with here's how you're doing. It'll give you a letter grade. Everything from an F, hey you failed, to an A plus, right, you're doing awesome. And it goes through the different ciphers, it goes through the protocols, it checks perfect forward secrecy. All the things that you would care about with SSL, SSL Labs does an excellent test to show you those things. Intrusion prevention, WAF, web application firewall, important there as well since this is the primary way that both customers and attackers are probably going to reach you. Tanya actually in her presentation talked a little bit about web applications being the number one way that we're being attacked. So making sure that we're focusing in on the web application front and that we're able to detect and prevent those types of attacks is critical. And then as I mentioned, the application security side. And lastly, the remote access piece. So remote access, we want to make sure that when people are accessing this environment that they're the right people, that they're the people that we've trusted to bring into here. So long and random passwords for SSH users is important. Multi-factor authentication is incredibly important. Key-based authentication, so it's not just username
and password, but somebody has to have an actual key to be able to log in there is good. Restricting IP ranges, right? So making sure that only certain people have access to talk to certain systems in the environment. You can do that. And then disabling SSH unless you're VPN-ing to the VPC. So I talked about that a little bit earlier, but you want to make sure that even access to that bastion host, even the ability to ping a server, requires you to VPN into the environment first. So by doing all this stuff, we've kind of narrowed our attack surface, we've made it so that here's the little bit that somebody from the
internet is able to access, and anything else, they just straight up don't have access to. So I'll leave you guys with some recommendations. The first thing is when you're doing this, determine what is or is not in scope. There's going to be a lot of bits and pieces to the applications. Applications are big, complex beasts. And you have to look at that and say, well, this is the stuff I have control over. So this is the stuff that I'm going to focus in on. And then you can hit the vendors up or whoever controls those other things. And there's other ways to push security onto them. Look at what you do have in scope. Once
you have those things that are in scope, I think it was Isaiah who was talking today about data flow diagrams. Look at the way the data is flowing through that application. So it's going to start, it's going to hit a certain place, HTTP, HTTPS, it's going to flow from there to the load balancer, to the application server, the database. So you have a data store. He talked about STRIDE: spoofing, tampering, repudiation, and so on. So look at those things and determine what this application actually looks like. Once you have that data flow diagram, then you can dig into the attack surface and you can say, okay, well here's where I would spoof a
request and okay, what do I need to do about that? Here's where I would tamper with that or whatever. Look at that, figure out how you would attack this application, and then once you've done that, you start looking at the threat vectors. Okay, well there's where my problems are. Here's how somebody's gonna attack that. Now you get to plan the mitigations. Now you figure out how do I reduce the risk of these threats that I face for my application. So with that, I'm open for questions. No questions? Is that good? We got one right here. All right. I don't have hats and tubes to throw out to you guys. Testing. Hello. OK. Hi. How does this approach change when
dealing with more public and open cloud services like the various PaaS and SaaS services, which tend to be open to the public and not able to be locked down the same way? - So I think the approach is exactly the same. I think you need to figure out what you do have control over. What are the things that I can manipulate, and try and rethink that in terms of what needs to be public and what doesn't. And the more stuff that doesn't have to be public, you kind of bring into that VPC or that private environment and try and limit the front facing stuff. So for example, you probably have a ton of servers that are running HTTP and HTTPS. Well, the
very nature of those servers being front facing, the very nature of them having public IP addresses is a risk. And it's additional attack surface, because if I don't have a firewall or whatever and there's certain ports that are enabled on there, now we have additional ways that we could attack that system. So if you pull those systems inside, use a load balancer that front faces that, now you're creating an extra layer of complexity, you're creating defense in depth, you're limiting who can communicate with those other servers on the back end. Anything that you can do to shrink that attack surface is gonna be beneficial in the long run. And so I think the approach
doesn't change. How much you have on the public facing side may change, right? There may be things that you can't pull in. But you're still looking at how do I minimize this? So using things like host-based firewalls, limiting access to those systems, IP restrictions, limiting SSH access to a bastion host. Things like that will go a long ways even if the system has to be public. Yeah, other questions? Got one there, one there.
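The host-based firewall and IP-restriction layering described in that answer boils down to a default-deny allowlist. A toy sketch of the decision logic; the rules themselves are made-up examples, not a real policy:

```python
import ipaddress

# Minimal default-deny sketch of the host-based-firewall idea from the answer
# above: a connection is allowed only if some rule matches both its port and
# its source network. The rules are invented examples.
ALLOW_RULES = [
    {"port": 443, "source": "0.0.0.0/0"},    # public HTTPS via the load balancer
    {"port": 22,  "source": "10.0.1.0/24"},  # SSH only from the bastion subnet
]

def is_allowed(src_ip, dst_port, rules=ALLOW_RULES):
    """Default deny: allow only if a rule matches both the port and the source."""
    ip = ipaddress.ip_address(src_ip)
    return any(
        dst_port == rule["port"]
        and ip in ipaddress.ip_network(rule["source"])
        for rule in rules
    )
```

The useful property is what's absent: anything not explicitly listed, like SSH from the open internet, falls through to a deny, which is the shrinking of the attack surface the answer is describing.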
First, thanks for the talk. Just a question regarding the VPC: say you have an application server and a database server. In order to restrict it so that your application server can only talk with your database server, do you always recommend a host-based firewall, or would you try to deploy different VLANs, so that even in that VPC you have a different VLAN for different servers? Yeah. So host-based firewalls are awesome. And if you use Ubuntu, which is what I do, UFW is a fantastic tool. Absolutely. Use host-based firewalls as much as you can. I'm a huge fan of defense in depth. So just the fact that you enable that and you say, here's the ports that I want running on
the system, it'll take you a long way. In terms of the specific piece about the database access, probably the best tool that you have in your arsenal is the native security groups. So you can apply a security group to that database and say the only way to communicate with this MySQL database is from this system and this system. And you just specify the rules for that application server and that application server can communicate via port 3306. The other thing is MySQL has database access that's built into it, and so in the users table you can actually specify these users have access from this IP range. And so in MySQL itself, you can say from these
IPs I'll allow access. So there are actually multiple different layers at which you can restrict access to that database. I'd probably take it a step further, and as I mentioned, you can create a subnet, put the database in a separate subnet, and the cool thing once you start doing subnetting is you can actually use the VPC flow logs, and you can actually see any time data traverses from one subnet into another, you get that in the log. And so you can see every bit of data, everything that's trying to communicate, and it makes it way easier to separate out the good from the bad. Yeah. There's a gentleman back there.
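The cross-subnet visibility mentioned in that answer is easy to put to work with a small filter over the logs. A sketch that assumes the default AWS VPC flow log field order (version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status) and a hypothetical app-subnet range:

```python
import ipaddress

# Sketch: flag VPC flow-log records for database traffic that does NOT come
# from the app subnet. Assumes the default flow-log field order; the subnet
# range and port are hypothetical examples.
APP_SUBNET = ipaddress.ip_network("10.0.2.0/24")
DB_PORT = 3306

def suspicious_db_flows(log_lines):
    """Yield (srcaddr, dstaddr) for accepted flows to the DB port from outside the app subnet."""
    for line in log_lines:
        fields = line.split()
        if len(fields) < 14:
            continue  # skip malformed records
        srcaddr, dstaddr = fields[3], fields[4]
        dstport, action = fields[6], fields[12]
        if action == "ACCEPT" and dstport == str(DB_PORT) \
                and ipaddress.ip_address(srcaddr) not in APP_SUBNET:
            yield (srcaddr, dstaddr)
```

So a flow from the app server to the database passes silently, while an accepted connection to port 3306 from anywhere else in the VPC gets surfaced for a human to look at.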
So you talked a little bit about this application that you have gone through the effort of securing. Can you talk a little bit about what this application does? Yeah, absolutely. So trying to stay away from the sales pitch, but basically SimpleRisk is a free open source risk management tool. I've done full talks about it in and of itself. But the idea kind of came about when I was working at National Instruments. I was asked to start a formal risk management program, and it kind of came down to either using Excel spreadsheets or paying half a million dollars for a big GRC suite. And I took a quote for half a million dollars to my VP and she
laughed at me, and I was kind of stuck between a rock and a hard place. She said, "Your budget is zero, go figure it out." And I decided I was gonna write something. So I put my CS background to use, I wrote an application, and I decided, hey, if it's valuable to me, maybe other people will get use of it as well. And so SimpleRisk was released free and open source back in March of 2013 at the B-Sides Austin Conference. So if you have risk management needs and you're using spreadsheets now, look it up. Other questions? Gentleman right here. Blue shirt, yeah. - What's best practice for securing databases? So database encryption, disk encryption, application level encryption? - Yeah, so
from an encryption standpoint, Amazon actually limits the types of databases that you can do full disk encryption on. I think you need to have at least a medium database, if I'm right, in order to do that. So you're paying more money for that capability. There's a couple different ways to do it. One, you can do the full disk encryption using the Amazon native capability. Again, paying more money for that. The other is you can build encryption directly into your application. Basically, if you keep a key locally or if you use a key store, you can basically use that, encrypt the data, and then insert it into the database already encrypted. That's actually probably the more secure way to do it because at that layer, anybody who has
access to the database itself still wouldn't have access to the data, whereas the full disk encryption really only prevents somebody from pulling the drive and then using it that way. The black shirt back there. Any suggestions for SSH key management for Amazon? I've got a ton of SSH keys for a bunch of servers that I'd rather secure somewhere better. Yeah, from an open source standpoint, Vault is a pretty good product. They have a free version and a paid for version. I think it's vault.io if I remember right. At National Instruments we use a product called Secret Server which is pretty good. There's other platforms out there that are paid for services as well. - It really just depends. SSH key management really isn't any different than password management in
my opinion. And so it really comes down to if it's just like one user or a couple people who need access to it, something like KeePass is probably fine. If it's something that you're gonna end up sharing with a large team or an enterprise, then you probably need an enterprise password management tool or an enterprise secret management tool. So it really just depends. Anybody else? Cool, thank you guys.
All right folks, can you hear me all right? Nice, thanks. All right, so let me start by giving you a brief overview of the security threat landscape. In January alone, 64 million people in over 232 countries or regions all around the world were infected by 48 million never-before-seen attacks. Brand new, first-seen attacks. What's worse is 60% of these attacks were over within the hour. In the security field, we think a lot about response, cutting down the response time. But we're getting to a place where response doesn't matter. So we serve about half a billion customers. They are counting on us to get this right every single day. To be able to correctly predict those 48 million threats and protect these users at
first sight. If we don't do that on the first encounter, we may never get a chance to catch them again, and we've failed. And there's plenty of attackers out there who want us to fail. They want to defraud these innocent users and make their millions. So, in this world where response doesn't cut it anymore, we've moved to machine learning for its proactive detection. We want to leverage its predictive power to catch these new threats in real time. But just as humans are susceptible to social engineering, machine learning is susceptible to adversarial attacks and to tampering. Hi, I'm Jugal Parikh, and I'm part of the Windows Defender Advanced Threat Protection group. And this is what keeps
us up at night, keeping these people safe. So my team is called the Threat Predict Research Team, and our primary focus is to leverage machine learning to predict threats. So I know we have different levels of machine learning knowledge in the audience. So people who are experts, bear with me for a few short minutes while I walk through a quick primer to get everyone else on the same page. So at a high level, we use machine learning in two different ways: supervised as well as unsupervised. The first one, which is supervised machine learning, in this case you have experts or malware researchers that help us create labels for files. Now this could be a sample, it could be a behavior, it could be a combination of all
of these things. They create these labels and they help us distinguish a malicious activity from a benign one. We then use these labels, create a data set, and feed that into our machine learning pipelines. The machine learning model then learns from these instances and is used to predict on future, unseen data. Unsupervised learning doesn't have labels, so there is no way to tell what is good and what is bad from unsupervised learning. But it's really good to overcome that bias or those blind spots that your experts or automation systems might have. Unsupervised learning would be really good to cluster those similar looking behaviors or those similar looking files and to help you spot
some of the pockets of malware that maybe you haven't seen before. It might show you clusters of similar-looking files that you haven't labeled yet. Unsupervised learning can also help you detect anomalies. So think about looking at spikes in your incoming telemetry or your data, and if there is an abnormal spike, maybe something wrong is happening that you probably need to look at. So for this talk, I'll be talking about adversarial attacks on supervised machine learning models. Now when it comes to machine learning, there are two major areas where machine learning can be deployed. It could be deployed on the client as well as on the cloud. On the client side, along
with researcher expertise and your signatures, you want your machine learning models to be super fast and very light. Now this is really good to detect things that are very obvious. So say if an attacker is using a pattern that is clearly known to be malicious in the past, client-based machine learning models are really good to catch that. They will also be useful to catch commodity malware, where the attackers are kind of getting sloppy and don't use sophisticated techniques or obfuscation. Especially if these threats are high in volume, client-based detections would be the best. For more advanced sophisticated malware, we tend to move to cloud-based models since these are more difficult
to evade, and we can see exactly when the attackers are testing us, probing our detection systems for verdicts in order to constantly evade our detections. So there are multiple layers of protection in our cloud. The first one being the metadata-based models. So when the client examines behaviors and patterns and if it's, say, not quite sure if something is clean or malicious, it will send those signals to the cloud. When I say signals, I mean hundreds and thousands of attributes that are collected which are related to that activity. Our cloud-based models then look at these attributes, provide a verdict within a few milliseconds, and direct the client to either block
the file or don't do any action. So, if our cloud-based models don't have an answer, or if they are not very sure whether something is malicious, they will ask for a sample of that activity to be uploaded to our cloud-based systems. Now, the way this works is metadata models will look at this file and say, "This activity looks a little suspicious, but I'm not very confident if it is malware or not. Hey, client, put a hold on that file, send me the file, and let me do a little more analysis." So when we upload that file, additional properties are extracted. Things that would be a little too heavy for the client to handle. And then we run a deep neural net model, which is a
lot more expensive, and we send our verdicts back to the client. So the client says, "Okay, I'll hold the execution for a couple of seconds. You'd better get the answer to me right away." Once that is done, the sample-based models will give you the verdict, and then the client can either block it or let it execute. So in terms of real-time protection, these are our production layers. We also have some other layers in the cloud-based system. We have detonation-based ML models where the samples are run in a sandbox environment. But as you know, attackers have these anti-emulation and these loop-of-doom techniques that they like to play around with. So it's not always guaranteed that you
would definitely see that malicious behavior on the VM. And then lastly, we also have big data analysis, where we correlate many different signals from, say, something like Office 365 or other data streams that we incorporate into the cloud system. With big data analysis, it sometimes takes hours for the label to get propagated. So it's not a good candidate for real-time protection, but it definitely helps us pick up the slack. So, for the purpose of this talk, we'll be focusing on the top two layers, which is the client-based and specifically the metadata-based models. So, there's a couple of pros and cons when it comes to using ML at the client or the cloud. Now, this is really important when you're thinking about adversarial attacks here, because one thing that
we have seen in the past when it comes to client-based protection is when you put all your protection capabilities on the client, well, an attacker can just take your client, test it forever, and then they can learn all the answers to the test. They know what you detect as clean, they know what you detect as malware, and then they can use this knowledge to go attack people in the real world without you ever having seen that they've done some testing like this. So that's a real disadvantage when it comes to client-based models. Now, if your environment is truly disconnected, or say your client cannot talk to the cloud for some
reason, having some protection level at the client definitely does make sense. But if you're talking about adversarial examples, you're at a true disadvantage. On the cloud side, if you require the attacker to talk to the cloud, they can't do this private brute forcing. To test your protection, they have to talk to you. That means you can see them, and you can figure out what to do in response to that. You can maybe give them false verdicts: maybe not detect some of the samples that they are testing, but when those samples go live in the real world, you can start detecting right away, basically confusing the attacker about what's happening. You can also add other controls that prevent the attacker from being able to thwart your protection capabilities. The other
benefit here is that there is minimal impact on the client. So you can have a lot of heavy duty models on the cloud, because on the cloud you can have a lot of compute power, and that means your client can be super lightweight. So no interruptions to the end user as well. Put that power on the cloud and put all your deep learning models onto the cloud. So when we talk about adversarial attacks, these can be categorized into several subcategories. We can talk about model inversion, membership inference, data poisoning, or targeted misclassification. For this talk, we'll focus on the data poisoning as well as targeted misclassification aspects of adversarial ML. So now I want to walk you through some theoretical attack vectors
through the ML pipeline. And most of these attack vectors require some sort of insider knowledge. So keep that in mind while I talk about them. So the first layer in your ML pipeline, of course, would be your sample set. Your clean files, your malicious files, and other data sets that you use for training. Now, if an attacker can get access to that, or even a subset of that, well, they have complete knowledge of the ratio of clean to malware files that you're using to train the model. They can flip the labels to create a lot of noise, and that can cause catastrophic problems for your model. Now, I'm sure
for folks who have done some sort of modeling, we tend to focus on what sort of learner should I use? There are 50 different learners, so many toolkits. What gives me the best performance? There are so many parameters that I can tweak. What sort of parameters should I use to get that extra performance? But when it comes to the data set, often it's ignored. But trust me, using the right dataset has a much more powerful impact on the effectiveness of your model as compared to any particular learner or any parameters that you might get. So, as I said, the sample set is quite critical. If an attacker can get access to that, that can have
catastrophic results on your model. What I also mean by this is you should make sure the other telemetry pipelines that you consume for training your model have certain inspections and some gates, so you know exactly what you're training on. There's a very common saying in machine learning: if you feed in garbage to a model, you'll only get garbage out of the model. So, the other thing that the attackers can do is they can mess around with your features. If they figure out what sort of features are most important to the model, they can submit samples to trick your model into learning the wrong features. Now, most of the papers that you read in machine
learning, adversarial machine learning, talk about creating these adversarial examples by adding a targeted perturbation to your input features in order to flip the detection of your model. So you can add some perturbation and have the model classify something as clean, even though the model initially said it was malware. Then, when it comes to your model parameters and your model itself, well, the attacker can basically just take that, replicate it on the back end, and try to find vulnerabilities in your model. They can try to find different ways to evade your model. Lastly, say the attacker doesn't really have access to any of these things, but all they have access to is your verdicts: whether you say something is clean or something is malware. That can
also be significantly harmful. What the attacker can do is go to VirusTotal, mine a bunch of clean and malware files, and then train a separate model that learns from the predictions of your machine learning system. Once that model is a close replica of what your machine learning system produces, the attacker can just develop an adversarial example on that model, and because of the transferability property of adversarial machine learning, whatever evades that model would now evade your machine learning model as well. Okay, so that was a bunch of theory. Let me talk about a couple of real-world attacks that we've seen in the past. So, the first attack here
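is coming up in a second. First, to make the verdict-stealing idea concrete, here's a minimal sketch: two toy linear classifiers stand in for the victim and the surrogate, and the attacker never sees the victim's weights, only its verdicts. All the numbers and models here are invented for illustration, not a real detection system.

```python
import numpy as np

# Toy model-stealing sketch: the attacker only sees the victim's
# clean/malware verdicts, trains a surrogate on them, then crafts an
# adversarial example against the surrogate alone and checks that it
# transfers to the victim. Everything here is synthetic.
rng = np.random.default_rng(1)

w_victim = rng.normal(size=5)                  # hidden from the attacker

def victim_verdict(X):
    """1 = malware, 0 = clean."""
    return (X @ w_victim > 0).astype(int)

# Step 1: mine a pile of samples and label them with the public verdicts.
X = rng.normal(size=(5000, 5))
y = victim_verdict(X)

# Step 2: fit a surrogate (logistic regression via plain gradient descent).
w_sur = np.zeros(5)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w_sur)))
    w_sur -= 0.1 * X.T @ (p - y) / len(X)

# Step 3: craft an adversarial example using only the surrogate's weights.
x = w_victim.copy()                            # a sample the victim flags (demo choice)
x_adv = x - 10.0 * np.sign(w_sur)              # FGSM-style sign perturbation

# Step 4: the perturbation transfers -- the victim's verdict flips too.
print(victim_verdict(x[None])[0], "->", victim_verdict(x_adv[None])[0])
```

The point is the last line: the example was crafted purely against the surrogate, yet it flips the victim's verdict as well. Okay, so the first attack here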
is an example on our automation system that happened a couple of years ago, but essentially the same method could have affected our ML systems. So, what these attackers did is they took our client and tested it offline. In fact, it was not just our client; this happened to the whole AV industry, and many other vendors were victims of the attack. What the attackers did is they found out the specific features that our signatures or our models were latching onto. So say, imagine you've got this binary that connects to a malicious IP, and that malicious IP is used by your threat experts to determine if a sample is clean or
malware. Now, if you want to write a signature for that, you would not just write a signature for the malicious IP; you would probably also add some signatures on the header of the file, because you want to detect the file before it gets a chance to connect to the IP. So what the attackers did here is twofold. One, they figured out exactly what sort of malicious strings, what sort of malicious components, the threat experts were latching onto. And two, they figured out how our automation systems were automatically writing signatures for some of these files. So they took this knowledge, took a clean file, and injected these malicious fragments into the clean file, leaving the header of the clean file intact.
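Schematically, the poisoned sample looked something like this; the header, bytes, and fragment strings below are entirely made up, not a real file format:

```python
# Schematic of the poisoned sample: keep the clean file's header,
# splice known-malicious fragments into the body. All bytes invented.
clean_file = b"CLEANHDR" + b"\x00" * 64           # header + benign body
malicious_fragments = [b"evil.example.ip", b"BadString123"]

header, body = clean_file[:8], clean_file[8:]
poisoned = header + b"".join(malicious_fragments) + body

assert poisoned.startswith(b"CLEANHDR")           # header-based signature still matches
assert b"BadString123" in poisoned                # but the fragments are in the body
```

To a header-based signature this still looks like the original clean file, but the malicious fragments are sitting in the body.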
And then they anonymously uploaded this file to VirusTotal. Like many other vendors, we use VirusTotal to collect our samples. When this sample was processed in our automation systems, because it had that specific malware fragment, our automation system started triggering on it. And when the actual signature kicked in, it started matching on the header of the files, which was actually clean. This resulted in a massive false positive issue for actual customers. The second attack was on our cloud-based systems. This was an attack that tried to trick our certificate reputation system into trusting signed files that were actually malicious. So the way this worked is that certificate reputation systems work on some very easy features. They look at things like, you know,
your previous traffic, your age, prevalence, how long the certificate has been in existence, and so on. And the attackers figured out exactly what features we were using. What they did next is they started sending a bunch of telemetry for some of their malicious code to inflate its prevalence, to basically make us believe that it was being used a lot in the wild and have us trust it. But the attackers were slightly sloppy. The incoming telemetry that we saw was so obvious that our telemetry sensors picked it up right away, and we could see that this huge amount of telemetry was tricking our systems into believing that, hey, something is clean rather than suspicious. So, the way we overcame this is we
created another model to try to classify these attackers. Once the model classified this telemetry as coming from the attackers, we separated that telemetry from our trusted telemetry base, and we only used telemetry from the trusted base to train further models. So, this is a perfect example of how a combination of two models can help you prevent data poisoning. A lot of early research has pointed to combining models through an ensemble as a way to combat these types of attacks that singular models are susceptible to. Let's take a look at how we can generate these ensemble models. But before that, let's take a quick AI break. One of my favorites here: why don't neural nets mind the cold weather? Well,
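hold that thought for one second. Before the punchline, here's a minimal sketch of that two-model defense: a crude first model flags the bursty, near-identical attacker telemetry, and only the trusted remainder would be used to train the reputation model. The features, distributions, and threshold here are invented for illustration.

```python
import numpy as np

# Two-model defense sketch: model 1 flags suspicious telemetry,
# model 2 (the reputation model) trains only on what model 1 trusts.
# The telemetry is synthetic; real features might be prevalence,
# submission rate, or source machine diversity.
rng = np.random.default_rng(2)

organic = rng.normal(0.0, 1.0, size=(1000, 3))      # diverse, real users
attacker = rng.normal(5.0, 0.05, size=(200, 3))     # bursty, near-identical
telemetry = np.vstack([organic, attacker])

# Model 1: a crude anomaly detector -- distrust points far from the
# robust center of the telemetry cloud.
center = np.median(telemetry, axis=0)
dist = np.linalg.norm(telemetry - center, axis=1)
trusted = dist < 3.0 * np.median(dist)

print(f"kept {trusted.sum()} of {len(telemetry)} records")

# Model 2 would now be trained on telemetry[trusted] only, so the
# poisoned burst never reaches it.
clean_training_set = telemetry[trusted]
```

In the real pipeline the first model was a proper classifier trained on the attacker's patterns, not a median-distance rule, but the flow is the same: classify, quarantine, train only on trusted telemetry. Okay, back to the joke. Why don't neural nets mind the cold weather? Well,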
because they can just add more layers. One person who got the joke? Two, three? Okay, that's good. I'll do better; I have one more joke in store. All right, so a quick show of hands here. How many of you folks know about stacked ensembling, or model stacking? Good, there's a couple of folks there. And how many of you here have actually used that technique to catch emerging threats in the real world? Nice, I don't see anyone here. So by the time I'm done, I hope everyone here understands this technique in detail, learns about its benefits, and then perhaps tries to reproduce this infrastructure in your own environment and sees what sort of results you get. I would love to learn
more about that. So stacking is an ensemble learning technique that combines the strengths of several different base learners. The individual classification models are trained on the entire training data, and then the outputs, or predictions, from these models are used as features to train the stacked ensemble. The output of the stacked ensemble is then taken as the final verdict. Now, for people who've looked at Kaggle, this is a very well-known technique to squeeze out that extra performance in ML competitions. But how does it scale when it's used to catch emerging threats in the real world?
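Before getting to that, here's roughly what the technique looks like in code. This is a minimal sketch using scikit-learn's StackingClassifier on a synthetic dataset; a real threat-detection pipeline would of course use malware features and production-scale data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Minimal model-stacking sketch on synthetic data: the base learners
# are trained on the full training set, and their cross-validated
# predictions become the features for the final (meta) learner.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(),   # the stacked ensemble
    cv=5,                                   # out-of-fold predictions as meta-features
)
stack.fit(X_tr, y_tr)
print(f"stacked accuracy: {stack.score(X_te, y_te):.2f}")
```

The `cv` parameter matters: the meta-learner is fit on out-of-fold predictions so it doesn't just learn to trust whichever base learner memorized the training data.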