
BSIDESLV 2018 - Ground Floor- Day Two

BSides Las Vegas · 4:59:08 · 440 views · Published 2018-08

Las Vegas ground floor track. This talk is Applied Quantitative Cyber Risk Analysis and your speaker is Michael Rich. A few announcements before we begin. We'd like to thank our sponsors, especially our inner circle sponsor Rapid7 and our stellar sponsors Amazon, Oath and Simil. It's their support, along with our other sponsors, donors and volunteers, that makes this event possible. These talks are being streamed live, and as a courtesy to our speakers and audience, we ask that you check to make sure your cell phones are set to silent. If you have any questions throughout the talk or at the end, please raise your hand and we'll bring the mic around to you. That allows people on YouTube to hear

your questions. With that said, I'm going to hand it off to Michael Rich. Great. Thank you. All right. I don't typically put my certs on my slides, but in this case I want to put OSCP up there for a particular reason. And that reason, basically, is that when you go to events like this, I think it's really important to reach beyond what you typically go to. As an OSCP, I mean, I love popping boxes. I love red teaming. It's super exciting. But then I went to this talk at B-Sides Los Angeles and got a feel for this book, and it's all about what we're going to talk about today. And it

was interesting to me. You know, okay, if popping boxes is like up here, quantitative risk is, okay, it's not there, right? It's not popping boxes. This microphone is really going to kill me, isn't it? It's not popping boxes. I don't do a jubilation dance when I quantify a risk, but it's very interesting to me. So I just want to encourage everyone to reach beyond their normal knowledge when they go to these conferences; don't go to the same things all the time. And that's basically that. So this is the book. It's all about Monte Carlo and quantitative risk analysis. Has anyone here read this book besides me? I got a couple, a few, all

right. Does anyone here know anything about Monte Carlo analysis? The same people that read the book, and a few more. Okay. I just want to judge how much of the basics I should go through, and I'm going to go through them pretty closely then. Okay. So we're going to go through that. Then I'm going to go through what I learned as I started applying this at my actual job, where I work right now, and the issues I ran across. So Hubbard's book is great. It's fantastic. It comes with a free set of tools you can download. It's got Excel spreadsheets you can work on. I found I couldn't ask questions of that

spreadsheet fast enough. So I wrote the whole thing in Python, and that's all available online on my repo, which you'll see at the end of the talk. All my tools are available, including the risk decomposition spreadsheet you're going to see and the sample risk that I'm going to show you, the whole model that I used. So if you want to get into this stuff, the book is a great place to start, and you need to start there, and then use his tools, and then I think you want to move on to another set of tools very quickly. And I'll talk about my application of what I really did in

my current job and the results of the first risk I modeled, just two weeks ago. And then I'll go into what I'm doing next. Okay, so the idea. We all know what risks are. A risk is an event that has a possibility of happening and an outcome we don't want. In a qualitative risk analysis, we use this heat map. We say, hey, we're going to subjectively, qualitatively assess the risk, how often it's going to happen and how high the impact is, and we plot it on a chart. Quantitative risk is the same. It's a subjective assessment of the risk, both the probability and the impact, and you model that and you plot it on a chart. There's really no mental

difference between how you do it. I'm not going to go through this chart you're seeing right here in detail, but it's the same thing. The biggest complaint I hear when I start talking to people about quantitative risk is: the numbers aren't there. It needs to be exact. It has to be based on data. We don't know the data for breaches. None of those things are true when it comes to modeling risk with Monte Carlo and quantitative risk analysis. It is subjective. The difference is, instead of just picking a category like high or very low, we stake a claim. We assess the risk based on a measurable, observable scale. And we're going to go over how you do that. And it's absolutely possible and absolutely

can be done. And the best part is, when the data changes, you can remodel very quickly. And you can easily compare your risks because they're on the same scale. So if you get risks from source A and you get risks from source B (this has happened to me, and I'm sure it's happened to all of you), if they're on the same basis, you can combine them. When they're on two separate heat maps with these linguistic terms, you can't combine them. You really don't know. You don't know how many of those risks in the green add up to a risk in the yellow. There's no way to know that. But with quantitative risk done properly and

on a proper scale, you can. And that's why I really like quantitative risk. As an engineer, it super appeals to me. Okay, a quantitative risk. First, you start with the probability of occurrence. Instead of saying it's likely, or only a little likely, or never going to happen, you have to pick a number. You have to assign a probability. And this is tricky and difficult, and we're going to talk about the impact it has on your models later, but you do have to pick it. And you can express a range for uncertainty. You can say there's a 12% chance, or there's a 9% to 14% chance, or any other kind of distribution for

probability you want to use, you can use it. All right. And then there's the loss. Hubbard talks about loss almost solely in terms of the lognormal. This is a standard distribution for real-life activities and processes that exist in the world, and I'll show you a graph of that in a second. Basically, you have to pick two numbers: the lower bound and the upper bound. With the upper bound you're saying, hey, 95% of all events will have this impact or less. I'm going to show an example of this in a second. And with the lower bound you're saying, hey, 95% of all occurrences of this event have this impact or more. You pick those two numbers, you use the lognormal distribution, and you

get a very close to real-life distribution of what kind of impacts you can expect. And the interesting thing about the lognormal is this great tail out here. Oh, look, I can use my finger. This great tail out here shows your low-probability but high-impact events. We call those your black swans, right? The lognormal captures that for you, and it's super useful. Now, do lognormals exist in real life? This is a graph from BlackLine. BlackLine is a financial services company that exists in the valley where I live. I don't work for them, but this is on their website. It's showing average days for assignment completion, and you can see it matches the lognormal distribution. Most of them are clustered

around this four-day mark. You've got some that happen very quickly, and you have other ones, like when your paycheck gets released, that happen 20 days later, right? So it kind of shows the black swan effect and it shows the lognormal. That's just to show you that the lognormal is a real thing and there are lots of real-life processes that mimic this distribution. And that's why we use it for our quantitative risk assessment. Okay, picking the numbers. How do you pick your numbers for that lower bound and upper bound for things you don't understand and don't really know? Like the impact of a breach, right? Very difficult to pick

those numbers. One thing I didn't cover: I'm going to do all of my breach effects in cost. I use almost nothing but dollars, because that's what businesses work on. However, in a previous life I worked for a defense contractor on an Air Force weapon system, and I attempted to start tying cyber breaches to the root mean squared error that the weapon system might expect from a cyber breach. The point is that it's a measurable, observable effect that you can estimate however you want to estimate it, and you can use it. So it doesn't need to be dollars. If you work in a world that doesn't require dollars,

then you don't need to, but it needs to be measurable and observable on a proper ratio scale, a real-number scale. You can't just make up a number scale to make it work. Okay, so back to this. How do you estimate your numbers? Hubbard goes into this in detail, and it's called calibration of the experts. It turns out there's plenty of research showing you can take a person, expose them to lots of things they don't know anything about, and they can learn how to estimate the lower bounds and upper bounds in a way that, 90% of the time,

the real answer is in between. It's called calibration. His website has four calibration tests you can take. I've taken one and I came out 70% confident, which means I'm actually overconfident: the real answer

fell outside the bounds I picked too many times, all right? You want to be 90% confident, which means I need bigger ranges, bigger lower and upper bounds. And it works like this. You get a question on something you have no experience with: what is the seating capacity of Wembley Stadium in London? And you start thinking about it. Okay, we can go absurd. The lower bound is at least one, right? At least one person can fit in Wembley Stadium. The upper bound, a million; there's no way more than a million people fit in Wembley Stadium and live, right? Okay, so we have this giant range, but now we've got to think about bringing that range together, because that's too

much uncertainty, right? It's not useful information. So then you say, okay, I don't know, the minimum is 10,000; at least 10,000 people fit in Wembley Stadium. And maybe the upper bound is 500,000. Okay, you start getting to a smaller range, and you hope that the answer is in there somewhere. And then you do what he calls an equivalent bet. It doesn't even matter if it's real money or not. You say, okay, for 1,000 imperial credits, would you rather spin that dial and land on the green, or bet that your answer is right, that your range includes the actual value? And the point is, you keep reducing

the size of your range until you don't know whether you want to spin the dial or find out if you were right. So, I mean, we can play with that. We said 10,000 is the lowest and 500,000 the highest. I know the answer is in there, right? I don't think any stadium holds 500,000 people. All right, so I can bring it in: 50,000 to 250,000. What do I want to do? Do you want to spin the dial, or see if your answer is right? What does the crowd think? Crowd's dead. Nobody cares. What's

that? 200,000, okay. So he's playing the game with me. He's saying, okay, we're at 200,000 and we're at 50,000. Now, do we take the dial, or do we think our range includes the answer? We keep doing that in our heads over and over again until we get to a point where we can't choose between the dial and the range. In this case, the capacity is 90,000, as stated on the website. And you do that for all these things. So if you take Hubbard's calibration tests, there are a ton of questions there that you have absolutely no possible way of knowing. And you have to get used to the fact that when you're faced with something like that, you put very big ranges in place. And

that's the point of risk measurement, and measurement as a whole: the reduction of uncertainty. So if you don't like the ranges you come up with, you have to come up with a test or a measurement that will help you reduce them. It's as simple as that. Okay? So you can do this, and you can start getting some pretty accurate ranges. All right, so here's a quantitative risk example. I live in Los Angeles. If a traffic incident occurs on my morning commute, I will be delayed. Probability: one out of three days I see a wreck, simple as that. The impact: between five and 60 minutes. I'm saying 95% of wrecks are going to

delay me by an hour or less, or five minutes or more, right? That's what I'm saying with that range right there. Some will be much longer, some will be shorter, but 90% of them are going to end up right there. Monte Carlo is: you take that probability of occurrence and those ranges in a lognormal distribution, and you start plotting the occurrence values you generate. So this is a snapshot from Hubbard's spreadsheet. You put in the probability of occurrence, your lower bound and upper bound, and the number of trials you want to do, and you go through it. So trial one, what you do is you take a

random number from zero to one, or zero to 100, whatever you say. If it's less than your probability, then it occurred. Trial one, it didn't happen. Trial two, I got below 30 and my delay was 14 minutes. So on and so forth; you see a 50-minute one there. You get all those. So the question I came up with: how many iterations are enough? Because Hubbard's spreadsheet is actually limited to about 10,000 because of the way he does it in Excel. And I was like, is that enough? I don't know. So of course I wrote my Python code and did a test. You can see with 100 iterations,

you can get a general idea of the shape. With 1,000 iterations, you get a better shape. And then between 10,000 and a million iterations, there's not a lot of change in that curve, right? Those are those curves. I feel like I'm attached to a cord, but I'm not. I got used to that. You see these curves here? This is the 10,000, and then the 100,000 and the million are on top of each other. Now, this isn't a lot of computing power. I do this on this laptop, and it takes, I don't know, 20 seconds to run. So it doesn't take that long to get a million iterations. And then when you have multiple risks, it

takes a little bit longer. When we get into the detailed risk decomposition stuff that I'm going to show you in a little bit, it does take a little bit longer, because I'm running a lot of lognormals. But the bottom line is you can run these on a laptop easily. You don't need big computing power. Also, the probability of the event greatly impacts how many runs you need to do. So if you're talking about an event that has a tenth of a percent chance of occurring, you need to run a million events or more in order to get a smooth curve. But it's obvious in the data. So when you print it out,

you get these stair steps, and you'll know you need to run more rounds, as simple as that. So you turn it up a little bit. But in general, 100,000 is good enough to give you a good feel for where you're going. So it doesn't take that long. That's Monte Carlo simulation. Any questions on that first? Only a few people here had studied it before. Okay. All right. It generates a loss exceedance curve. A loss exceedance curve demonstrates how much impact you can expect. What you do is you take your min value and your max value. So the min value is one, and the max value is somewhere out here. This is a log chart, so

it's somewhere like 250 or so. No, that's over 300, isn't it? And you do an equal number of intervals, and you calculate the number of events that equal that impact or greater. It's always "or greater." So at the one here, one minute or greater: 30% chance. And you keep going down the path and you'll see where you get to. So we're saying, on my traffic one, I have a 30% chance on any given day of being a minute late or worse for work. Let's say I'm not doing so good with my boss, right? I need to make sure I get there on time. I need to lower that to 5%. You can use the loss exceedance curve to find

that out. So at 5%, go over and down. You're talking about needing to leave somewhere around 35 or 36 minutes early in order to make sure that you're on time at work all but 5% of the time, all right? That's how you use the loss exceedance curve. And this is what the risk generates every time. All right.
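Since the speaker published his tooling in Python, here's a minimal sketch of generating and reading a loss exceedance curve for the traffic example (1-in-3 daily probability, 5 to 60 minutes as a 90% confidence interval). The trial count and seed are arbitrary, and treating the bounds as the 5th and 95th percentiles of a lognormal is one common convention, not necessarily the exact parameterization of his repo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Traffic example: a 1-in-3 daily chance of a wreck, with a 90% confidence
# interval of 5 to 60 minutes of delay when one occurs.
n = 100_000
mu = (np.log(5) + np.log(60)) / 2
sigma = (np.log(60) - np.log(5)) / (2 * 1.645)   # 1.645 = z-score of the 95th percentile
delays = np.where(rng.random(n) < 1 / 3, rng.lognormal(mu, sigma, n), 0.0)

# Loss exceedance: P(delay >= t) at each threshold t. At t = 1 minute this
# recovers roughly the 30% occurrence probability described in the talk.
thresholds = np.linspace(1, delays.max(), 200)
exceedance = np.array([(delays >= t).mean() for t in thresholds])

# Reading the curve at a 5% tolerance: the delay to plan for, to be on time
# all but 5% of the time, is the 95th percentile of the simulated losses.
print(round(float(np.quantile(delays, 0.95)), 1))
```

With these assumptions the 95th percentile lands in the high 30s of minutes, the same ballpark as the 35-odd minutes read off the slide's curve.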

Okay, now let's go over practicalities. That's kind of the theory, and when you start thinking about how you want to use it, you run into some problems very quickly. The first one was: curves are pretty, but I need to rank them, right? If I have curves all over the place and they all intersect each other, which one's my greatest risk? I don't know, but certainly my boss wants to. So you reduce the curve to a single number, and that can be done by calculating the area under the curve. This is exactly how the catastrophe insurance industry does it. When they have things like earthquakes and hurricanes, things that they

don't really have actual knowledge of, or good actuarial tables for, they draw these curves up and calculate the area under the curve, and that's your premium. That's what they charge you, in general, right? I'm simplifying. For all intents and purposes, the area under the curve is the mean of all of your results. I ran a check with five risks and compared the difference between the means and the areas under the curves, and it was off by about three thousandths of a percent. So really, for all intents and purposes, just calculate the mean of your event. That's all you need to do. Besides, the area under the curve is approximated anyway, because you can't

integrate it exactly; you have to calculate it with an approximation. So for my commute, on average I'm going to be seven minutes late according to this math, right? Which is exactly why I ride a motorcycle every day, so I'm not seven minutes late.

But you have to watch out for the black swans: a 241-minute max impact. So how do you want to rate your risks? Do you want to rate on the max impact? This was either a 100,000- or a 10,000-scenario run, so that 241-minute max impact is a one-in-10,000 or one-in-100,000 chance of happening, right? Having lived in LA, when the Sepulveda Pass caught on fire, it happens. It took me four hours to get to work. In fact, I didn't even go to work that day. I turned around and went home. I'm like, no, the hell with it, I'm not doing it. So you have to watch

out for that. So you need to choose how you want to rank your risks. It's probably not reasonable to use the max impact, because it has such a small chance of occurring. But maybe that's what really concerns you. So there's that. And then I started having a lot of questions about Monte Carlo and how to use it. There are three independent variables that I'm using: probability of occurrence, upper bound, and lower bound, right? Which ones affect my outcome the most? I got concerned about this, mainly because it's really hard to pick a probability for your events, right? We're going to go over how I did it, but it's very, very difficult to pick a probability.
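You can probe that sensitivity question yourself by perturbing each input independently and comparing the resulting mean loss (the area-under-the-curve ranking number). A rough sketch; the baseline probability and dollar bounds here are made-up illustration values, not figures from the talk:

```python
import numpy as np

def mean_loss(p, lower, upper, n=500_000, seed=0):
    """Mean simulated loss for an event with occurrence probability p and a
    lognormal impact whose 90% confidence interval is [lower, upper]."""
    rng = np.random.default_rng(seed)
    mu = (np.log(lower) + np.log(upper)) / 2
    sigma = (np.log(upper) - np.log(lower)) / (2 * 1.645)
    occurred = rng.random(n) < p
    return float((occurred * rng.lognormal(mu, sigma, n)).mean())

base = mean_loss(0.05, 10_000, 500_000)   # hypothetical baseline risk

# Double each input in turn and look at the ratio to the baseline.
print("2x probability :", mean_loss(0.10, 10_000, 500_000) / base)
print("2x lower bound :", mean_loss(0.05, 20_000, 500_000) / base)
print("2x upper bound :", mean_loss(0.05, 10_000, 1_000_000) / base)
```

Under these assumptions, doubling the probability roughly doubles the expected loss, moving the lower bound barely registers, and the upper bound has a noticeable but smaller-than-one-to-one effect, the same ordering the talk goes on to describe.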

And so I was hoping that maybe probability wasn't that sensitive for Monte Carlo and I could get away with it. So I went and built models for probability, lower bound, and upper bound. And it turns out Monte Carlo is a precious snowflake. (I'm losing my audio, aren't I? Can you hear me? I'm good. Okay.) For probability, you can see that if I modify the probability by 30,000%, the result changes by 30,000%. It's almost one to one. So it's really important not to get that wrong, or else to take that error into account. The lower bound: almost no impact on the actual results that come out of the model. And the upper bound has some impact. But probability, man,

okay. And then it was even worse than I thought. These are order-of-magnitude bars. So between 1 and 3 percent, you can see the input changes 200 percent and the result changes 200 percent. It's almost one to one. So not only is order of magnitude a problem, but within an order of magnitude, I can make this risk live or die as an important risk just by picking the probability incorrectly. And that was a little interesting. And you can see it right here in the graph, like I told you: the average expected loss is the area under the curve. So when you change the probabilities, you get these

big differences in the curves, and so, of course, a big change in the area underneath them. So that got me really concerned. I was like, okay, I'll handle that. I'm going to take care of this problem, this snowflake. I'm going to add error bars to my probability, right? So instead of a 1% probability, let's make it a half to 1.5%. That's a 50% error bar. Great, that should do it. Turns out it has almost no impact, because over the course of 100,000 events everything reduces to the mean probability. So I stated it at 1% and I moved it up and down. Honey Badger don't care. It's going to do the math, right? So I got to this point, I was like, oh, okay. So I kept playing with

it. Here are some bigger ones. Now I have a huge error bar: 5% with a plus or minus 4%. I mean, that's an enormous error bar. And then I put these other probabilities in there. We're going to talk about beta in a second. And "5% fixed" is just a flat 5%. And you can see the differences are not that big. So once again, because I said 5%, I'm picking my results. And I did some more. Here are overlapping error bars, 4% and 5%. And you can clearly see that when I pick 4% or 5%, the math goes right to it. The LEC don't lie. So that got me really frustrated. I'm like, what am I going to do? And then I decided I am probably lost in the forest

here. I'm not seeing the forest for the trees. And so I came to the conclusion that, yeah, this is a big deal and I need to be really careful about it. But the point of this is not to compare small variations of one risk to each other. It's to compare multiple risks across my company and identify the ones that have the most impact and the least impact. If I get to the point where an order-of-magnitude change in my probability is what's switching my risks around, I think it's actually a win. Now I know which ones I need to focus on. And I also need to say, hey, okay, yeah,

they're flipping back and forth, but maybe the losses associated with them are not even fiscally relevant. Maybe they don't matter to my company's bottom line. Maybe they're just accepted risk. Maybe our insurance covers it. And at this point we can also get at the qualitative differences. I've now identified the things I need to be interested in: the quality of the loss. And what I mean by that, and you're going to see a lot of this in a second, is how do I decompose my risk? Some losses are opportunity costs. Your security engineers are there, they're working hard, and we have a breach. Whatever they were doing before the breach, they are no longer doing.

They're only doing the breach, right? That's a lost opportunity. But then there are actual costs. There are HIPAA fines. There are things like that that actually cost the company money. Which one does the CEO care about more? That's up to your CEO. But that helps you figure it out. So I still consider it a win, even though we get to this really delicate position with probability. All right.
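The "error bars reduce to the mean probability" behavior is easy to reproduce. A small sketch, where the loss distribution and dollar figures are arbitrary choices for illustration, not numbers from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# An arbitrary lognormal loss with a 90% CI of $10k to $200k per event.
mu = (np.log(10_000) + np.log(200_000)) / 2
sigma = (np.log(200_000) - np.log(10_000)) / (2 * 1.645)
loss = rng.lognormal(mu, sigma, n)

# A flat 5% occurrence probability versus 5% plus or minus 4%,
# with a fresh probability drawn uniformly from the band each trial.
flat = (rng.random(n) < 0.05) * loss
banded = (rng.random(n) < rng.uniform(0.01, 0.09, n)) * loss

# Over enough trials, both reduce to the same expected loss.
print(round(float(flat.mean())), round(float(banded.mean())))
```

Because the uniform band is centered on 5%, the two expected losses agree to within sampling noise; spreading the probability out only changes the answer if the band is centered somewhere else.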

Since probability is so important, I found myself considering, how do I think about probability? How do I cage my thoughts about it? So I started looking up life events as statistically equivalent probabilities. At a 50% chance of an event happening, I don't call that a risk. That's something you need to plan for. That's a cost of doing business. Because if you do six-month windows, and you aggregate risk across multiple six-month windows, at a 50% probability there's almost no chance you're not going to see it in the next year, right? And you can check the math on that. So we're not going to talk about those. But let's talk about 50% down to 10% probabilities. I call these my weather probabilities.

And they're still really big, but they make a difference in your thought processes. For instance, since I ride a motorcycle most days, I check the weather every day in the winter, not in the summertime when it's 110 and there are no clouds. But in the wintertime, I check it. If the probability of rain goes over 20%, I won't ride. That's my risk level. If it's 10%, I will. So the point is there's a distinct, intuitive difference between a 20% risk and a 10% risk, and you need to bring that in. And then you need to think, hey, someone says this cyber risk has a 15% chance of happening. Now you're talking like a 50% chance of rain,

you know? Like, rain happens, right? So you basically need to be prepared for that risk to occur at that level. There's a 10% chance of the next person you meet being left-handed. We all have left hands, that's almost a 100% chance. But being left-handed. How many lefties do I have in here? We're well under 10 percent, but anyhow, that's a statistical life-equivalent probability. So is this probability of a cyber event happening more or less likely than a person being left-handed? All right, there's a 3 percent chance that the next guy you meet does yoga regularly. Do I have any men yogis in here? There's one, two. Yeah, that's about right for

this room. All right. So you need to think: someone said we're going to get breached on this website. Is that more or less likely than the next guy I meet doing yoga regularly? All right? That's a way to think about it. There's a 1.5% chance of having twins. I don't know about you, but twins are fairly likely. I mean, you know about them. But let me reset, I went too fast. I don't know anybody who has twins personally, but I know a lot of companies that have been breached. So if someone tells you that the chance of a breach is under 1.5%, you might want to be skeptical, because there are so many companies that are breached. 1%

chance of getting six winning hands of blackjack in a row. If we all played blackjack last night, hopefully 1% of us faced that. Did it happen to you? Six hands? No? Oh, your boyfriend plays blackjack, okay, yeah, you know these odds. All right. So, a 1% chance of that happening. A 0.8% chance of being audited. You know, I don't personally know anyone who's been audited by the IRS, but I know lots of companies that have been breached. So if you start seeing numbers like this, you've got to be skeptical. And finally, a 0.02% chance of getting a perfect score on the SAT. Life-equivalent probabilities. Just a way to think about things when you're faced with picking a probability for your events, or when you look at

data that gives you a number, which I'll show you how I do in a minute. There's that. Okay. So instead of just guessing and making up numbers, you can also use a thing called a beta distribution to come up with your probability distributions. And this is actually how I'm doing it. If you have a set of cases that are relevant to the risk at hand, you can come up with a probability distribution by counting the number of cases that count against you and the number that don't. I do this on a date basis, and I'll show you how. But basically it looks like

this. A beta distribution like this one says I have no information, none at all, about what's going to happen. So any probability between zero and 100% is equally likely for this event. But then maybe I know 20 companies and one of them got breached. So now I have one hit out of 20, and I get a little bump at the 5% mark, right? But when the beta distribution delivers a probability, it'll still pick along this whole line; you'll just have a much higher chance around the 5% mark. And you can keep adding data. So as you meet more companies, more people, more relevant cases, you can keep adding to it until you get

a very high probability spike at the 5% mark, assuming that's the way it goes, right? So this is a great way to keep your data fresh and your models fresh, and it's discussed in detail in Hubbard's book. All right, that's kind of the theory of all this stuff. Now I'll tell you how I did it, after I drink my water. Okay, first I need to introduce you to my company. This is Motion Picture Industries. Anyone else work at a Taft-Hartley multi-employer plan here? I didn't think so. Consider them continuity benefit plans. In the entertainment industry, there are all these people who work and go from Sony to Paramount to Disney. When they move, their

contributions come to us and they maintain their benefits as they move across jobs. It was put in place for craftsmen. There are many of these plans in LA. We don't handle stars; that's the Screen Actors Guild, basically. There's a Directors Guild, a Producers Guild. We handle everyone behind the camera. All right? So we're one of the largest Taft-Hartley multi-employer plans in the country, and we get a lot of attention because of that. You can see what we have here: 48,000 participants, 18,000 retirees, and counting dependents, 130,000 people subject to our healthcare plan. And we process, you know, almost 2 million healthcare

claims a year. That's what we do. So we're a healthcare company, right? And that's the basis of my thought process. So what are my risks? Obviously I'm a HIPAA shop, so I have to deal with that. I've been subject to audits and compliance checks, and consultants come and give me a list of stuff. They call them risks, but I go through them and they don't meet my definition of a risk. They are factors that contribute to risk. I'll give you an example. We got dinged because someone saw someone moving PII to a cloud-based application, right? So it was

authorized, it was fine. But that was something they saw: hey, you're doing that. That's not a risk. One, because it happened, which makes it an event. And two, it really just contributes to the magnitude of loss I can expect. So work a thought experiment with me. Let's design a fake computer system that is fully air-gapped and can only transfer data through three-and-a-half-inch disks, right? That's it: 1.44-megabyte disks. There are probably 10 of those disks in the entire world, and probably eight of them are at DEF CON right now. With that computer system versus my real computer system, which unfortunately has an internet connection to PII, my exposure is much higher. So it's not a risk. The PII, the fact that

I use the cloud thing, isn't a risk, but it contributes to the magnitude of my exposure. So I include it in my risk calculations, but it's not a risk on its own. So then I thought, hey, let me look at all the applications, all the in-house applications we use and the external applications we use. In fact, this is what Hubbard recommends as your best practice. But I have done full audits on all my applications, and I know which ones comply with the HIPAA Security Rule. And I've already stated that audits don't result in risks; they result in factors that contribute to risks. That includes my own data, my own audits. And they all basically touch the same data on the back end anyway. So I decided that it

wasn't doing an application basis that mattered. Instead I decided to go with the CIA triad against my data itself. the data pools and this is my one, the exact one I'm gonna model for you right now. So I said in the next six months, you gotta pick your time band, risks are time banded. And I picked six months, you can pick a year, you can pick a day, whatever you want, just make sure your numbers work. There's an X percent chance of medical claims data confidentiality breach of 500 records or more. 500 records is the trigger point for health and human services. You go over 500 records, that's when the pain comes. So that's when

I want to activate most of my cost centers. So that's why I picked 500 records or more. And the impact of that breach will be Y dollars. All right, so that's a nice statement of a risk, but now I've got to get down to business. How do I calculate these risks? Okay, starting with the Y, the risk decomposition, because the costs are easier than the probability. So let's just start with that one. Basically, I started with my cost centers. Watching the security process that I run, I know that when something happens, my engineers do this and it takes them this long. And then I started thinking: if I had a

big loss of data, multiple business units across my company would probably get involved. So I wanted to estimate that. So first I did that. I broke it down into cost centers based off of the actual departments, and for each one I estimated a lower bound and upper bound on the hours of time I think it will take to handle the risk, and then a lower bound and upper bound on cost per hour. This is just how I went about doing it. The costs I use are rough; they don't match our salaries. HR would kill me if I put our real salaries up here, so I'm not doing that.

I just estimate the hours. For security, I'm saying, hey, a small breach, small problem, my engineers will fix it in about five hours. A big one, two weeks. I can't see anything longer than four weeks of full-on time fixing a breach, in my opinion. I might be wrong, but that's what I picked. And for the salaries, I know how many people are in those particular divisions, so I can cap the salary cost per hour. I did that for every department in my company. Then I moved on to what I call my real costs. These are things that we'd actually have to end up doing, right? So

if our code is shown to be bad, we're gonna recode it, and we tend to use contractors for that, so that's gonna cost us real money. We have an incident response retainer that I'll have to burn. Our legal consultants are ridiculously expensive, so I've got to account for them and their retainer. And then there's getting a new tech control, right? I mean, everyone who's ever been through a breach knows you end up buying a new tech control. So we put that in there. I call these my real cost centers. So I kind of went through this and decided, okay, this is how I'm breaking down my risks and my costs. Each one of

these ends up being a lognormal distribution. You can see that's why I use upper bounds and lower bounds and a cap, and it does the math. And I decided, hey, I need to compare this to people who have really looked at the cost of breaches. You know, Ponemon, SANS, lots of people have done this. And I discovered that the cost centers I picked are actually really close to the cost centers that they bring up in their reports. So although the costs are unique to each company, I don't think the cost centers are that unique. And is anyone else here subject to Technology Business Management from the CIO? Anybody? Nobody really? Okay.
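Each bound pair described above maps onto a lognormal in a standard way; Hubbard's convention is that a 90% confidence interval spans 3.29 standard deviations of the underlying normal. A minimal Python sketch of the idea follows. The function names and the hours/dollar figures are illustrative, not the speaker's actual repo code:

```python
import math
import random

def lognormal_params(lower, upper):
    """Turn a 90% confidence interval into lognormal mu/sigma.
    Hubbard's convention: a 90% CI spans 3.29 standard deviations
    of the underlying normal distribution."""
    mu = (math.log(lower) + math.log(upper)) / 2.0
    sigma = (math.log(upper) - math.log(lower)) / 3.29
    return mu, sigma

def sample_cost_center(hours_ci, rate_ci, cap=None, rng=random):
    """One Monte Carlo draw for a cost center:
    (lognormal hours) x (lognormal hourly rate), optionally capped
    at a maximum believable loss."""
    hours = rng.lognormvariate(*lognormal_params(*hours_ci))
    rate = rng.lognormvariate(*lognormal_params(*rate_ci))
    cost = hours * rate
    return min(cost, cap) if cap is not None else cost

# Illustrative numbers: engineers take 5 hours (small incident) to
# 80 hours (two weeks) at $50-$150/hour, capped at four weeks of
# full-time work at the top rate.
draw = sample_cost_center((5, 80), (50, 150), cap=160 * 150)
```

Summing one such draw per cost center, thousands of times over, is what produces the simulated loss distribution.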

I'm not going to talk about that, but they have defined cost centers in that project as well, and they match up to the Ponemon cost centers. The point is, picking the costs for an impact of an event that affects your IT department or company doesn't need to be black magic. It doesn't. It's kind of defined for you. You just need to dig in and assign the numbers that you think make sense for your company, so you can get to a good risk decomposition that way. And Hubbard goes through all of this in his book as well, how to decompose the risks. And I don't have to do this, right? I could just say, hey,

I think it's 10K to 5 million. Claims breach of 500 records or more: 10K to 5 million. I could just make that assessment, but I don't know if I have a real basis for that assessment. So I wanted to break it down. And I used lognormal distributions for everything except the HIPAA fine data. This is the statement of what a HIPAA fine can be, right? It's pretty big, 150K per year, max penalty of 1.5 million. Okay, but you can find HIPAA fines that exceed 5 million. You can find HIPAA fines that are a lot higher. So what's this max penalty thing? I was like, you know what, instead of guessing,

I'm going to try and model it on real data. So we're looking for real data. The Health and Human Services website has lots of information on current investigations, but almost no information on charged fines. But this website here does, this compliancy group: they have about two years of data, 26 samples. I was like, all right, I'm taking this as my model. I put it together and was going to call it a normal distribution, you know, like the bell curve, but the numbers didn't make sense. The mean and the standard deviation didn't work, so I plotted the data, and that is not

a normal distribution. It just isn't, all right? So, okay, now what do I do? I might be able to guess what that distribution is, I might be able to make something up, but I did some Googling and ended up on this Stack Overflow page. This guy, his name may or may not be Timothy Davenport, you know, it's hard to know, wrote this code that'll take your data and run every distribution in SciPy against it, come up with the one that fits best, and then give you the parameters so you can use that distribution in your code. It was amazing. I loved it. I've generalized it a little

bit, and I made it so you can use other things besides what he did. It's called modelfit.py, it's in my repo, you can use it, and it'll generate the fit for you. This is what it did for the HIPAA data. I don't even know what I'm looking at; that's basically a blank graph, thank you. But it gave me those numbers up top, the settings that make the power law work, and it says, hey, this is power-law data. That's interesting. It's a little dangerous, because a power law can get really, really big every now and then. But I was like, okay, well, here's what I'll do. I will take 26 samples

out of this new random distribution and plot it next to my actual data, and we'll see what it looks like. So here's the original data, that's what it looks like when you plot it, and here's my model. I'm like, all right. Okay, I mean, you know, is it Gordon? I got his name wrong. George Box, I'm pretty sure, says that all models are wrong but some are useful. I think that's a useful model. So I use this to model my HIPAA fines for all my events. Now, all right, so I got my costs. Now let's talk probability. We talked about this already, that I had to be careful here.

My risk lives or dies based off of this number I pick for this event right here. And that frightens me. That's a lot of pressure to get right. So I started thinking about it, and I realized the absolute value of my risk probabilities probably matters less than the consistency in the way I pick them. If I'm consistent across all my risks, then my risks can be compared against each other. If I'm just yanking stuff out of the air every time, then I'm probably not gonna be consistent, and I can't compare my risks equally. So I decided to go with a beta distribution. Basically, you pick a time window, I said six months, and you have your data associated with dates. You can see these red

marks here, this is a sample of fake data. You can say, okay, I had a breach in January, a breach in March, and a breach in November. And then I move that six-month window. Okay, this is a hit. This is a miss; no, this is a hit, because it got that one in March in there. This is also a hit, so I'm up to three hits. And now I get to April, and I have a window where I didn't have a breach, so now I've got three hits and one miss. Same here with May. And eventually you do this through all your data. I did it through my breach data, and I had one

hit and 20 misses. I use that in my beta distribution, as I show right there. And that's the way I pick my probability, based off of MPI's actual history. Now, we actually hadn't had a breach of that magnitude at that point, but when you start from a zero-knowledge baseline, like I said, that means you start with one hit and one miss, and then you add your new data on top of it. So you always have at least one hit. That's the way it works. So I wrote a piece of code for that. You can give it the sliding time window you want, you give it your dates of events that meet your criteria, and it'll calculate your

beta parameters for you. It's right there in my repo. It's not exactly perfect, I'll be honest; there are some weird edge cases it doesn't handle right, but it's close. Okay, so I got to the point where I'm ready to code. So I coded it up. First I do my cost-of-opportunity items, these are my manpower costs, and then I do my real costs. You can see what I did there. I put it all in a simple model and then I ran it. And all this code is available. I put the whole claims risk thing in there, I think it's claimsrisk.py. It's in the repo, so you can see exactly how I use

my own models and how to incorporate them. And this is the result. Okay, any questions? All right, no. So basically I'm saying I have a 5% chance of a $10,000 loss or more in the next six months, okay? And then you can get down there to, like, the million-dollar mark, where I'm at about a 3% chance. At the 1% chance I get down to some pretty big numbers, especially for my company, like 2.5 million. Now, is this good or bad? That's for the CEO to decide. You have to go to them and have a conversation about risk appetite. You go, hey, CEO, 2% chance of a $3 million loss, are you happy with that or not? And they have

to decide. That's their job. They have to decide whether the business can absorb that kind of loss or not. And you can plot their risk appetite on top of this and see where you need to impose mitigations. I haven't gotten to that point yet; the CIO is happy with this, but I haven't gone to the CEO yet. And one thing I didn't cover, sorry, on the probability. The beta distribution is giving me my baseline probability, but what about new factors? What if I say, hey, I'm exposing a new public API in the next six months, or I'm bringing in a new code contractor with an uncertain history? All these things

affect the probability of my breach occurring. And there's a whole section of Hubbard's book about a mathematically consistent way, using Bayes' theorem, to include that data in your math. I didn't get there yet, but I am heading in that direction, so I encourage you to read that section. He has a spreadsheet you can download to incorporate different pieces of information. You'll notice it doesn't look like this. This is what my original risks I showed you look like, and they have that nice little fancy tail. That's because I capped all of my losses. I chose to do that. If I turn those caps off, I will get the S tail and you'll get the big black swan. You need to decide whether you want to do that or not.
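The sliding-window hit/miss counting described earlier, together with the zero-knowledge baseline of one hit and one miss, reduces to a few lines of Python. This is a sketch of the idea, not the actual code from the speaker's repo, and the dates and window step are made up for illustration:

```python
from datetime import date, timedelta

def beta_params(event_dates, start, end, window_days=182, step_days=30):
    """Slide a window_days-wide window across [start, end] in
    step_days increments. A window containing at least one event
    counts as a hit, an empty window as a miss. Per the talk, a
    zero-knowledge baseline of one hit and one miss is added, so
    the result is Beta(hits + 1, misses + 1)."""
    hits = misses = 0
    pos = start
    while pos + timedelta(days=window_days) <= end:
        window_end = pos + timedelta(days=window_days)
        if any(pos <= d < window_end for d in event_dates):
            hits += 1
        else:
            misses += 1
        pos += timedelta(days=step_days)
    return hits + 1, misses + 1

# Made-up history: one qualifying breach in two years of data.
a, b = beta_params([date(2017, 3, 10)], date(2016, 8, 1), date(2018, 8, 1))
p_estimate = a / (a + b)  # mean of the Beta(a, b) distribution
```

The mean of the resulting beta distribution is one consistent, history-driven way to set the per-window event probability that the rest of the simulation consumes.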

But bottom line, this is what we ended up with, and it's kind of the setup we're going to move forward with. So where am I going from here? And I'm short on time. I'm not short on time, I'm moving too quick. For me, I need to do the integrity and availability of the same data. And this requires a fairly in-depth interview with the business units. Like, hey, if the integrity of your data is messed up, what is the actual impact to the business? If the availability is messed up, what's the impact? I've discovered that if my claims data is not available for two weeks, then we end up paying 100% of claims, not 90%. That's

a substantial monetary impact, so I have to look into that a little more and model it. So I need to do that for my claims data. I need to get into considering other factors in the probability; that's important. I need to model my other data stores. I've got eligibility data, I have retirement data, I have lots of other information I need to look through. And then the big thing: I need to carry my manpower versus actual costs through. This slide here, I don't know the makeup of it. When I talked about the qualitative review, is this mostly real cost or is it mostly cost of opportunity? I don't know,

because my model doesn't carry that through. I can't tell. So I need to add some hooks in there so I can see that data. I have more risks to do, and then I can talk about mitigating them. Then I need to formalize the process. If I'm going to get this through a HIPAA compliance review, this has to be formalized and documented and all the fantastic things that make life so fun. So I've got to do that. For you guys, I recommend going and getting Hubbard's book; it's on Amazon. Go get my code, there's the repo right there, I'll leave it up for a second. And then think about

how you want to decompose your risks, how you want to calibrate your team, and then do the models and simulations. I think it's really worthwhile. I like this method way more than the heat maps, the standard NIST-style ways of doing risk. I can add new data, I can compare new risks, I can do everything I need to make this work for me and my company, stuff I was never able to do with basic heat-map risk work. And that's kind of where I'm at. So I'm a little early, but I'm open for questions. Yes, sorry, wait for the mic. I'm so sorry, I got here late, but did you cover

reputational risk, or attempting to quantify the softer costs? You can, if you think that's relevant to your company. Hubbard has a whole section on that. Okay, is there any company that's not relevant for? He shows that the math says reputation isn't affecting the bottom line of companies in the long term. I've heard that beyond six months, even stock prices recover. Even in the book he talks about Target, and the dip in Target's stock after the breach was indistinguishable from noise within months. Thank you. So there's that. We have one up here.

I had a question on residual risk: being able to track, after the mitigation or whatever action has taken place to reduce it, how you're following a risk past the initial point of identification. Right, so you're saying you identify a risk, we get the CEO's curve, we're too high, so we need to reduce some risk on that. You have a series of risks that you put together that help identify things. You were kind of mentioning this whole idea that for the probability of this one thing, you can get into the minutiae of how you rate it, but it's really valuable on the meta scale of all these risks put together. However, after you go through that, and maybe you change that environment's

situation, that attack surface, and now you reduce a specific risk within it, how are you tracking the residual risk in terms of applying this? Because risk is a point in time. Yes. So how do you continue to model that through? So if you're going to do a mitigation, it needs to either reduce the impact or the probability of the event, by definition, right? And so you have to make an assessment. If you're going to buy a new blinky box, you say, my blinky box is going to reduce my risk because I'm going to

cut out 17 more attack vectors, so probability goes down. Something I haven't gone into yet, that I'm planning on, is I'm also a MITRE ATT&CK matrix kind of guy. So I want to show, hey, currently I'm not detecting these things, but if I buy this blinky box, these are the additional things I'm going to be able to see and stop. And I'm planning on attempting to use a beta distribution to show how that reduces the probability. But I'm not there yet. Bottom line is, we often get offered solutions and services that don't reduce probability or impact, and they're really quite worthless from a risk perspective. So you have to take that into account and make an assessment.

You need to estimate what you think the risk reduction will be. If you're not convinced there's a risk reduction, then don't buy it.
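One way to make that estimate concrete is to re-run the kind of Monte Carlo the talk is built on with the mitigated probability and compare points on the loss-exceedance curve. A toy Python sketch with made-up numbers, not the talk's actual claimsrisk.py model:

```python
import math
import random

def simulate_losses(p_event, impact_ci, trials=100_000, seed=7):
    """Each trial: the event occurs with probability p_event; if it
    does, draw its impact from a lognormal whose 90% confidence
    interval is impact_ci (3.29 sigmas span a 90% CI)."""
    rng = random.Random(seed)
    lo, hi = impact_ci
    mu = (math.log(lo) + math.log(hi)) / 2.0
    sigma = (math.log(hi) - math.log(lo)) / 3.29
    return [rng.lognormvariate(mu, sigma) if rng.random() < p_event else 0.0
            for _ in range(trials)]

def chance_of_at_least(losses, x):
    """Loss-exceedance point: simulated chance of losing at least x."""
    return sum(1 for loss in losses if loss >= x) / len(losses)

# Baseline: roughly a 5% chance per period of a $10k-$2.5M event.
before = simulate_losses(0.05, (10_000, 2_500_000))
# Candidate control claims to halve the event probability.
after = simulate_losses(0.025, (10_000, 2_500_000))

reduction = (chance_of_at_least(before, 1_000_000)
             - chance_of_at_least(after, 1_000_000))
```

If the claimed control doesn't move the exceedance curve at the loss levels leadership cares about, that's the quantitative version of "don't buy it."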

Yes. So from your modeling, it looked like the probability was one of the biggest drivers of this whole calculation. Is there any movement to standardize the probabilities of incidents across industries? I can basically say no, right? Nobody wants to share their data. So you have to base it off of your particular industry. I can look at healthcare companies and say, all these companies got breached, but are they relevant to me? Do I focus on Taft-Hartley multi-employer plans? Do I focus on entertainment-industry Taft-Hartley multi-employers? There's only like six of them; that's not gonna give me very good data. So the answer is kind of no. You kind of have to grade the data how you want. And

even though there are ISACs and there's Cyberhood Watch with the FBI, the sharing of data on actual breaches and how things are breached is really, really sparse. That would be super useful. And people talk about, hey, the cyber insurers have good actuarial data, but I don't think that's been proven to be true either. So there's a lot to think about there from an industry perspective. Yes. So there's a lot of talk about velocity associated with risk assessment: how fast are you going to realize that probability? Is that factored into your model at all, or talked about in the book? No, I haven't gone there with that. I think you only have to talk velocity if you don't talk

about time windows. You're like, hey, I'm setting this to my six-month time window, and this is my risk for that window. If the factors change, then the risk changes. But he doesn't discuss velocity in his book.

Any other questions? Yep. Oh, God. Oh, God.

So you modeled a lot of this stuff, at least in your examples, on your personal life. How much earlier do you leave for work now? Because I ride a motorcycle? I don't. The way I think about LA traffic is, I do the white-line exploit through the LA firewalls every day. So it basically takes me 35 minutes no matter what's going on, except for a complete closure of the freeway. Anyone else? All right, thank you guys very much.

Welcome to B-Sides. I mean, it's probably not the first welcome, but here's one more. Thank you for coming. Welcome to the ground floor. This talk is gonna be Invoke-NoShell, and Gal Bitensky is gonna present it. He's got this fun title, I like it: Senior Malware Psychologist. So I don't know if he does the psychology of the malware or of its creator, or if he just messes with the malware's brains. Sounds good. Just a few proper announcements. We'd like to thank all our sponsors, especially the Inner Circle sponsor Rapid7, and our stellar sponsors Amazon, Oath, Semo, and VirusTotal. It's their support, along with all of our sponsors, donors and

volunteers, that makes this event possible. And that's really important. This is being streamed live, so if you could just make sure your cell phones are on silent, I would really appreciate that, as would everyone on the stream. And that's also why, when we do the questions, please wait till you get the mic to ask, so that people can actually hear the question on the stream. So thank you. Thank you. Well.

Hi guys. It's a pleasure seeing a full house today at B-Sides Las Vegas. We are here for Invoke-NoShell: all the power with no shell, as in PowerShell. By the time we leave this place to eat our lunch in 25 minutes, we'll be able to take payloads easily, put them in a malicious document, and execute them without powershell.exe, which is nice for us, since we're kind of good guys trying to act as bad guys many times. Without further ado, let's move to me. That's me. I'm working for a company called Minerva Labs, but this is not a sales pitch, so we don't care about it. I have a background in threat intel, as a threat intelligence analyst. I did some work as a dev and a

full-stack researcher, meaning I tried anything from SCADA and Modbus to this crappy PowerShell stuff. I hate PowerShell. I'm sorry, I'm about to swear about it like tons of times. I have many open source projects; this is my repository on GitHub. I have stuff there for bypassing sandboxes, like Cuckoo, not the sandboxes in browsers, a project for copy-paste malware built from scratch, and all kinds of cool stuff. Just browse there and you'll be good. So yeah, my name is Gal, by the way, which is just like Gal Gadot. The best thing she ever did was being famous, because now I can actually explain my name really easily. It's a unisex name, so don't worry about it.

It actually means wave in Hebrew, which is kind of a cool name, thinking about it. Yeah, no worries. Feel free to follow me on Twitter, on GitHub, even on Instagram if you really want to. There will be tons of cheap effects in this presentation, so stay tuned. OK, let's move to the outline of the presentation. We'll start with a bit of background about PowerShell, what it is good for (absolutely nothing), but for blue teamers and red teamers, from their different points of view, why PowerShell is so good or bad. We'll then move to the actual tool, Invoke-NoShell, how it works, and why it is better than what we have right

now. And then we'll show how it actually works and performs against real-life AVs, which is kind of what we do in our daily life. OK. So let's begin with the blue team point of view. What is PowerShell for blue teamers? Well, it is kind of a programming language, mostly for Windows, now for Linux and Mac too, I guess. I don't know who uses this kind of stuff on Linux, but good for you. It is really powerful, and it is actually really easy to learn and use, which is good since we lack trained IT personnel. This is one of the reasons it is so popular; even I can code in PowerShell. It has all of the power of .NET. .NET can do anything in Windows, really anything,

even more than the people designing .NET intended, I guess. So PowerShell packs all the .NET power inside, which is good for blue teamers because they can do anything. It is compatible with WMI and COM objects, which, again, is really good because it enables us to do tons of stuff, maintain a list of whatever in the cloud, for example. And it has better security nowadays: it has logging and AMSI, with which you can send a buffer to your AV provider, which is really nice. However, it is not perfect. Remember that, for example, yeah, this is a recent thing from a blog of AndySec. You can see that they do have better security now. If you try to execute the System

.Management.Automation.AmsiUtils GetField thing, et cetera, and try to disable it, they will detect it. However, if you break it up with a plus and concatenate strings, or you just use double quotes instead of single quotes, you bypass it. So it's not perfect. We still have some improvements to make, but it's a better situation than we used to have. OK. And it is so common that I did a survey among 80 different friends of mine on how many people actually use PowerShell in their daily life. And 97% of the people use PowerShell. Some, like me, regret it, really regret it, but it is a very common language for a good reason. And let's continue to the red team's point of view,

which is, I don't know, how many of you are red teamers? How many of you are blue teamers? Okay, so it's about 50-50, or maybe 20-20 and 60% who are kind of anonymous, but okay, no worries. So for red teamers, surprisingly, the advantages for blue teamers are the same for red teamers alike, because we also lack well-trained red teamers. And it is easy to write bad stuff in PowerShell, really easy, almost too easy. It has all the power of .NET, which means we can do almost anything in malicious PowerShell, because .NET is so powerful. And it can communicate with WMI and COM objects. You can use WMI, for example, to launch processes without having the parent process be the original one, which is

kind of good if you want to fight EDRs, for example. Again, really nice. A bit of a different advantage of PowerShell for red teamers is the fact that it is fileless. Well, this buzzword, I hate it, sorry, but it is an advantage: you can actually execute PowerShell scripts without writing to the disk. Again, nice. It helps you evade some detection tools, or make blue teamers' lives way more miserable. The blue teamers in the crowd, I'm sure, will agree. OK. Also, a key benefit of PowerShell, at least for red teamers, is the fact that tons of frameworks were already written for PowerShell. For example, PowerSploit, Invoke-Obfuscation, Veil; all of it, I think, is compatible with

Metasploit, which is good for me as a kid. I just want to generate my payload, click on the exploit button, and set the C2 server, and I have a working backdoor out of the box really easily. And well, we are all lazy. Well, you might say, not lazy, maybe trying to be efficient is the word. But PowerShell is a good way to achieve powerful backdoors in no time, in a reliable way, to bypass AVs

partially because of those frameworks. I did a quick survey about this as well. Only 50% of the responders were actually using PowerShell for malicious purposes, so I treated the results with respect only to those using PowerShell maliciously, and 83% of those actually successfully bypassed AV with it. The rest, the 17%, I don't know, maybe you can take a private class with me afterwards.

60% of those using PowerShell for malicious purposes use frameworks. The remaining 40%, I guess, well, maybe they're the kind of guys that invent their own cryptography or something. Use frameworks. They're really easy, and they're really good. But, well, the lives of red teamers with PowerShell are not all good either. We do have some issues with PowerShell, like restricting PowerShell: you get 90,000

results on Google. There are some ways to restrict PowerShell execution. They don't make it impossible, but they make you struggle for a bit. And you have that annoying Office packager activation thing, which means that if you try to, let's say, put an OLE object in a document today, and somebody clicks it in a fully up-to-date Office, that's an important point, you'll get this screen, which blocks any kind of batch or PowerShell execution if it is an embedded object. Again, not perfect; it can be bypassed from the registry, for example, but it makes our life a bit tougher. And this is the most annoying thing, at least from my point of view, when I'm using PowerShell for offensive purposes. I

use it often from a document. So I have a blank document and I start thinking, what will I do now? Will I launch it on open, on close, or on user click? Let's say I go for an on-click. And then I need to write all of the ugly VBA code. And then I need to decide whether or not I want to use the -EP Bypass flag. Again, I think about it: well, let's do the EP bypass. And then I generate my final combination of traits, like the first trait is the on-click, and the second one is the minus EP bypass. But I do all of it manually, which is really time consuming and really annoying, since VBA is even worse than PowerShell. Yeah, you're all

agreeing on this. It's an awful, awful, awful, awful thing, and I hate it. There is a framework called LuckyStrike, which is nice. It does solve some of the issues I'm struggling with, but it wasn't good enough for my kind of need. You'll see in a sec how I solved it. So to sum it up, what we are missing is basically a method to overcome restrictions effectively, and to work at scale, to generate all of those traits and combinations effectively and easily, at least from my point of view. I'm kind of a red teamer. So at this stage, I want to unveil Invoke-NoShell, which is a, yeah, I promised the cheap transitions between

slides. Yeah. I wish to unveil Invoke-NoShell. Sorry for standing in the way. Well, I don't want to just unveil it directly; I want to unveil it via a story I had, a case I had. My boss comes to me one morning, and he says, well, we have a potential client, but he says he has 100% protection against any PowerShell attack, nothing can be done. Yeah. Exactly my response. Exactly my response. I totally get it. And well, I Googled for a while, and I found this thing. This is the core of Invoke-NoShell. You might laugh, but this is PowerShell ISE, for those of you who don't know. And after Googling for a while, it turns out that if you place your PowerShell script in this place, which

resolves to this path, you can actually get your script executed by PowerShell ISE instead of powershell.exe. It is nice. It has a UI, which pops up, but you can shut it down, you can hide it. It is a really nice way to execute it, and powershell.exe is never invoked. It overcomes many AVs which restrict PowerShell but not PowerShell ISE, because, why restrict PowerShell ISE? It could never execute PowerShell scripts, right? It can. You just need to read the manual. And, well, at this stage I had my first trait and my kind of path towards the final version of my infected document. But then I realized I needed to find a way to bypass execution

policy, because, well, I guess that many of you have already seen malicious documents and how they used to have this minus EP bypass all the time. But I can't use the same flag on PowerShell ISE, because, well, it doesn't take these kinds of arguments. It's not PowerShell, it's PowerShell ISE, and I can't use this minus EP bypass. Fortunately enough, although 30% of the users always see this annoying "execution policy is restricted" message, it turns out that execution policy is broken. It is not a security measure and should never be treated as one. You can just toss it into the trash bin. Even reading Microsoft's documentation,

it was never meant to be used as a security measure. It is just there to idiot-proof PowerShell for the user who doesn't need to execute PowerShell. But if a user intentionally wants to execute PowerShell, you can't stop him from doing so with execution policy. And indeed, I found this nice little blog post about this registry value. All you need to do is set this registry value, which resides in the HKCU hive, meaning you don't even need to be an admin to set it. And you can set it to Unrestricted, and then you don't need this -EP Bypass thing, which is, again, kind of amazing. And this is the second trait, which I added to my malicious document, all again

by hand, which is kind of frustrating, and you need to do it in VBA. And it's a really annoying thing to do. But I have my second trait. And now, well, it is not a single vendor. It is never a single vendor you need to check it against. It is the first vendor, and then the second one, and then the third one, and everything by hand. And you have eight different combinations of traits to check. And you do it all by hand. And then you find, after doing all the vendors, you find this perfect combination. But this takes a lot of time. And my goal with Invoke-NoShell was solving it. So in order to do so, I started to think about what is the technology where I can

create all of this stuff automatically and create a malicious payload, a malicious document with a payload, from code. And I used PowerShell to do so. And the result was this nice thing called Invoke-NoShell. It is basically a PowerShell script, object-oriented PowerShell, which has all these kinds of traits translated into members and member functions. And it communicates with the Office COM object. It just uses a WinWord instance, which means you need to have Office installed. But, well, this kind of makes sense. And it just generates on the fly all the different permutations of the possible ways to, for example, bypass the execution policy, or to launch PowerShell, either by PowerShell ISE or by PowerShell, and either to launch the payload on open, on close, or

on user click. It just creates a link in the document that will trigger it. So those are the options. Those are the potential hosts for the PowerShell script. And you can bypass the execution policy, as I said. And a nice thing I added just recently is the option to embed an object in the document. So this SettingContent-ms thing, which was quite popular for the last couple of months, well, it generates just another permutation of all those traits with this thing embedded inside. It is really easy to do this kind of stuff since it is written in an object-oriented way. It is available online. You'll be able to see it, of course. And it is really easy to add this kind of stuff

inside. And once you do it, you just click on a button and you get all of the results. There are actually two different modes to this. You have a manual mode where you select all of those traits manually. And you have an automatic mode where you just press a single key and you get 13 different combinations of traits, which is really easy because you don't need to manually edit the VBA code anymore. You just press on this auto-exploit button, and you get tons of different variations of this malicious document. And I am lazy. I really like it. It has already saved me time in my daily life. So, a little demonstration of how it works. This is me actually executing Invoke-NoShell. I give it a path to

the payload, a mode, which is manual, to actually enter my traits manually, and the text to lure the victim: "click me". Nothing that complex. And the movie will now print the banner, an awesome banner of Invoke-NoShell, which is just like my shirt, a shell. And now I just select two, which is: on document close, launch the payload. I want to force execution if it is restricted. And I want to use PowerShell ISE as the host of the script. It is that easy. I don't need to edit any VBA code anymore. You can just take your payload of choice and embed it in a document without messing around with actually writing the VBA. And we're done. On the other hand, you can also

use it in automatic mode. This is the auto mode. I don't even use any arguments here like in the last movie; I just insert them by hand. It will ask me questions to get the payload path and the lure text and all of the other stuff. And then, well, yeah, you can see I have a typo here. And then I'll put it in. And again, "eat me" in this case. Because why not? And auto mode. And then, instead of asking me which traits I want to add to my infected document, it will just generate everything. Oh, sorry. Yeah, it doesn't really matter much. It's just like,

sorry. It will just generate, after five minutes, all of those different documents, which is awesome. OK. And you can see, it's me,

not really surprising. And I even commented the VBA code it generates.
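The auto mode described here boils down to taking the cartesian product of the traits. Here is a minimal Python sketch of that idea; the trait names and axes below are my own illustration, not Invoke-NoShell's actual code, which is written in PowerShell:

```python
from itertools import product

# Hypothetical trait axes, loosely mirroring the talk: which host runs the
# payload, which document event triggers it, and whether to force the
# execution policy off via the HKCU registry value. Names are illustrative.
hosts = ["powershell.exe", "powershell_ise.exe"]
triggers = ["on_open", "on_close", "on_user_click"]
policy_bypass = [True, False]

def generate_variants():
    """Yield one trait combination per would-be generated document."""
    for host, trigger, bypass in product(hosts, triggers, policy_bypass):
        yield {"host": host, "trigger": trigger, "force_policy": bypass}

variants = list(generate_variants())
print(len(variants))  # 2 * 3 * 2 = 12 combinations with these toy axes
```

The real tool adds further axes (such as the embedded-object option), which is why the combination count differs; the point is that each extra trait multiplies the number of documents you get for free.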

Yeah, awesome. So it's even commented, and, well, you can really enjoy and edit it. And we've seen the motivations for using it so far. We've seen how it works. And let's speak about the results. I did a quick test of whether it works or not. I used a payload, which is the Saturn ransomware. I just read it as a stream of bytes. I used Invoke-ReflectivePEInjection, which is a really popular thing, and I just injected the shellcode into itself. I used that as a payload. Nothing too fancy or novel. I didn't even use Base64 encoding. Really simple stuff. I generated 13 different documents. The victim was Windows 10, 64-bit, with five popular enterprise AV products,

next-gen AVs included, which are just another AV. Some of them are really good, but it is just another AV. And they were fully enabled, fully capable, enterprise scale, enterprise grade. And, well, it's time to check whether I had great success or not. I defined a success criterion, which is kind of awkward. I wanted at least one of my payloads to actually bypass the AV and at least one to fail. Because if all of the payloads bypass the AV, my tool is worthless because the AV is worthless, and you don't need the tool to bypass it. And if all failed, the AV is good, but my tool is worthless. So thankfully, I had a 100% success ratio. Out of those, in 40%

of the cases, actually, all of the payloads bypassed the AV, which is kind of awkward. But in 60% of the cases, my tool was useful. So I think I can get the Chuck Norris seal of approval. And since we're kind of running out of time, let's go to the takeaways. Well, for red teamers: use this tool. Work at scale effectively. Adopt new techniques really fast. Take it and use it without messing around with VBA. Just take a PowerShell payload, launch it without PowerShell, and do it in no time. It is really easy to use. You need no prerequisites. Just git clone it, or even copy the raw PowerShell script. It is really easy. But I want blue teamers also to take something from

here. Don't rely on these 100% satisfaction-guarantee promises from the vendors. Test it yourself. It is really easy as a blue teamer to create many malicious documents, or not-quite-malicious ones. You can also just, like, pop calcs, I don't know, and check whether or not the environment actually limits PowerShell as you expect. Don't rely on those snake-oil promises. It is just, no, simply no, don't do it. And without further ado, we'll go to the Q&A session. This is a cat. I have a cat slide in every one of my talks. He looks at PowerShell on a Mac. I found this stock photo online. I can't explain it. I have literally no explanation. What the actual... we are being streamed, so I am not going to say fuck.

And I think we can go to Q&A now.

No questions?

In its current version, is there any anti-sandbox or sandbox detection built in? No, I didn't take measures to evade sandboxes or anything, which in my understanding should be done by the payload itself. I tried to keep it as clean as possible, with nothing malicious, you might say, in my framework. It's really easy to add this kind of stuff.

As it stands with this module, could you create an executable that can run on its own, or is it just for creating the documents that contain the payload? At this stage, it actually creates only a document with the payload, but you can do anything in this style. You're talking about taking an executable to launch PowerShell using PowerShell ISE? It is really easy. You could do it in no time. I trust you.

So I got the malicious content in that Word document. Was that VBS? No, actually it's VBA; the macro itself is VBA. And it holds all of the payload as lines, actual text lines inside the macro code. And it just concatenates them. And in my case, in some of the scenarios, it actually writes it to the disk. So it is not fileless, but who cares? It bypasses all of the AVs. Interestingly, there's a limitation in VBA: you can't exceed 1,024 characters per line, which is very annoying. It limits your payload. And also, you can't use non-ASCII characters, which also kind of sucks. But my tool actually checks this prior to the creation of the payload. And it will alert you if

you've broken the VBA. And did you build this tool in C#? No, it is in PowerShell. I kind of regret it. All right. But it is kind of well coded and documented. You can have a look. It's good. If you have questions, feel free to ask.
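The 1,024-character-per-line and ASCII-only constraints mentioned in that answer are easy to pre-check before generating the macro. A rough Python sketch of such a validator, my own illustration rather than the tool's code:

```python
MAX_VBA_LINE = 1024  # VBA rejects logical lines longer than this

def check_payload_lines(payload: str):
    """Return a list of problems that would break the generated VBA macro."""
    problems = []
    for i, line in enumerate(payload.splitlines(), start=1):
        if len(line) > MAX_VBA_LINE:
            problems.append(f"line {i}: {len(line)} chars exceeds {MAX_VBA_LINE}")
        if not line.isascii():
            problems.append(f"line {i}: contains non-ASCII characters")
    return problems

print(check_payload_lines("Write-Host 'hi'\n" + "A" * 2000))
```

Running the check up front means the tool can alert before a document is built, instead of producing a macro that silently fails to compile.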

Hello. This is pretty rad. I work as a blue teamer, and this makes me want to red team. You tested Invoke-NoShell against a number of antivirus utilities. Did you try it against any EDR solutions like Carbon Black or otherwise? Well, Carbon Black, no, I didn't try it specifically against Carbon Black. Also, I'm not sure if I'm legally allowed to say exactly who I tested against, but it wasn't Carbon Black. But Carbon Black may alert, and other EDR solutions and that kind of stuff too. So you need your solutions to alert on anything. As a blue teamer, I think you can kind of... I'm very aware. Yeah, yeah. Alert. An alert is just like a log in this case, I guess.

Hi. So, when you were testing the different combinations against different AVs, did you find that there were some combinations that were working across the board more often than not? Yeah. Yeah. Okay. Is that information documented? I can tell you right now. It was the case of using PowerShell ISE and launching it on a user click. You can actually write the payload to the disk and launch PowerShell ISE from a user click on the document. And then you avoid using ShellExecute, you avoid using PowerShell.exe, and, well, they just don't catch you. Cool. Thanks. Yeah. That's the magic bullet. I think we can probably take one more question.

I think because that was fun.

Friends by Emily Gladstone Cole. A couple of announcements before we start. We'd like to thank all our sponsors: our Inner Circle sponsor Rapid7 and our stellar sponsors Amazon, Oath, and Semel. We'd like to thank them for their support, along with the other sponsors, donors, and volunteers who make this event possible. Just something before we start: this event is being streamed live on YouTube, so I'd really appreciate it if you turn off your phones or put them on silent as a courtesy to your speaker. With that, let's get started. Awesome. Oh, okay. I can hear myself. Y'all can hear me okay? Super. All right. Well, first of all, it's a thrill to be speaking at B-Sides Las Vegas. Wow. Thank

you all for being here to hear me talk about how to make sure that operations stays friends with security. First, the almost inevitable survey at the beginning. How many of you have done some kind of operations or IT or SRE type of work? Oh great, good show of hands. How many of you are in here because it's a semi-dark space that's away from all the rest of the hustle and bustle? Okay, I thought there might be a few of those. Excellent. And I know that we have at least one person who's here to cheer me on, so thank you. Well, I hope y'all are going to learn something today. I'm hoping to have this be more

of a dialogue. If people have questions, please feel free to go ahead and raise your hands. We do have someone who will be moving around with a mic if that seems relevant. If no questions, hey, awesome. Just have fun. So why am I the one who should be talking to you about ops and security? I started out in ops. I was a sysadmin. I did some IT stuff. I used to take care of, gosh, IRIX boxes, Solaris boxes, DEC boxes, and then eventually moved on to Linux and FreeBSD and more Linux and more Linux. I did that for a while, spending a lot of time dealing with security incidents, and then I moved over and did

some security incident response. I've done some security research as well. And now I'm back on an engineering team. I am working for Agari. We do email security, and I am working with some DevOps and infrastructure folks to help get our culture more security-aware and bake in more security. So, on to the interesting stuff. Okay, so when I started writing this talk, I submitted the proposal and I said, okay, let's frame the discussion of dev and ops around the CIS Critical Security Controls. Super great. Most people have heard of them. It'll be pretty easy. Well, right after that, I started at this new job, which I've been at three months today. And I started working

with their infrastructure and realized that there's a lot that just doesn't actually apply. So CIS Critical Security Control number one is all about hardware assets. And number two is all about software assets. And I started thinking about hardware. At my company, everything is hosted in AWS. So we don't really have any hardware. So I guess maybe instances or VMs or possibly even containers could be referred to as hardware. But then you get to things like AWS Lambda functions, you know, serverless. Is that hardware? Is it software? I wasn't quite sure, and I decided I kind of needed to reframe things. So, you know, I had to kind of throw out that talk and say, all right, let's think about assets some more. I ended up thinking about assets even

more than my dad, who was a financial advisor and money planner and spent his whole day talking about assets. So I'm gonna be talking a lot about assets here, and I realized I had to completely reframe what I'm talking about. So out goes this agenda. For the updated agenda, I figured I'd give at least a brief intro to DevOps and SRE, and I'm gonna talk about some of the principles of the DevOps and SRE movement and how they align with good security practices. And then I'm going to spend a bunch of time talking about assets: how to define them, how we want to track them, monitor them, what we want to do with them. Then some time on least privilege and logging. I have some additional material

in case I talk really, really fast and get through all this stuff and y'all don't have any questions, and that is more about DevOps and what they want. Standard disclaimer: these are my opinions and they do not necessarily reflect those of my employer. Although if I have anything to say about it, they will. And non-standard disclaimer: I do hope you like cat photos. So, core principles of DevOps and SRE. Number one is that everybody shares the on-call: the devs, the operations folks, maybe even some of the QA folks, depending on how mature your team and your model are. Number two, practice empathy: understand where the other people are coming from and work with that. And then finally, automate

everything you can; write everything as code if you possibly can, and that will make things easier. And we'll talk about pets versus cattle. So, on call. Back in the day, the ops team were the only ones waking up in the middle of the night, and devs could kind of just say, no, it compiles, it builds, it's not my problem anymore. And they weren't suffering. So even if something alerted every night because of a constraint that had been built into the code that ops couldn't deal with any other way, it was, no, this is an ops issue. But with everybody sharing the on-call responsibilities... From an operations perspective, people kind of got, I don't want to say tricked into it, but got convinced to do

it because devs wanted to have more ability to deploy code and not just sit there and write the code; they wanted to be able to deploy for testing. And then if they were deploying for testing, then why not deploy to staging? And if you're deploying to staging, why not deploy to production? Then operations said, hey, folks, if you're going to be the ones deploying to production, well, maybe you should be the ones waking up in the middle of the night. So that happened. And it gave the ops team a better idea about what the devs were up to. And it gave the dev team a much better idea about the challenges that

the ops folks had been dealing with. And it started bringing the teams together. So that's kind of where we're going with the empathy, which is just understanding other people's perspectives. Those shared experiences of being on call, understanding the real-world constraints that maybe didn't show up in the "it works on my laptop" type of situation, really helped.

This whole philosophy comes from a place of understanding and support, rather than having this adversarial thing, or possibly even just a waterfall model where things get tossed over the wall between dev and QA and then from QA to ops and they don't really interact back and forth. It also gives the ops team an opportunity to do something that devs spend a lot of time doing, which is writing code. And so that brings me to automation. Obviously, if you automate it, you're only gonna have to do it once. And if you're a lazy sysadmin or a lazy security person, you probably really only wanna do it once, and then you can move on to something that's more interesting. It started with CFEngine to

allow you to manage systems and users and config files and even software deployments. And then Puppet came along, and there's Ansible and all kinds of tools that help you manage all of your infrastructure. And even nowadays you can create instances or containers as code in your automation system. So everything that you can do there is

part of the things that you don't have to manage manually. And I just realized that I deleted my pets-versus-cattle slide. I'm sorry about that. I will just tell you about it. So the basic idea is that you can either treat your systems as pets, which means they get a lot of attention and they're all special. You know their names and you're friends with them and you build up a relationship with them. Or you can treat them like cattle, which get raised in a big herd. You don't have as much of a personal one-on-one relationship with them. And in the end, they go off and do whatever the next thing is, which is maybe feed people or whatever. You don't have a relationship with them.

They're just there to provide a service. And one cow is much like the next cow. But anybody who has a pet, like a pet cat maybe, might say, well, no, they're not interchangeable. This one matters and that one matters and they're different. So the thought about pets versus cattle in an automation and operations context is that you want your systems to be more like cattle, so that you don't have to spend a lot of time and energy individually on each one, keeping them up and patching them. If you can patch your systems all at the same time, after you've done your testing to make sure that the patches are compatible with all the rest of your software, then you're much better off than if you have to manually log into

10, 20, 100, 1,000 systems and patch them one by one. So treat them like cattle instead of pets. And obviously, it's also really great from a security perspective, because you don't want too many special things that look different from everything else. The more you have everything the same, the more you can use baselines and know what things are supposed to look like and what they aren't. So that's one benefit of the cattle model versus the pets. And of course, you're always going to have one or two servers that you have to take special care of. But if you can do as much as you can to minimize that, all the better. So these are some

books that I have found are a lot of fun for learning about DevOps and SRE. I do have them in the references slide at the end. I am always happy to talk about books. I have a bunch more that I've enjoyed and that seem to be valuable. So there's plenty out there to read. Okay, so on to assets. What even are assets? So I had to think about this. There are so many things that fall into this personal formulation of mine that I'm calling assets. It's not only the physical servers, the appliances, the VMs, the instances, the containers, Lambdas, even things like S3 buckets, cloud IPs, load balancers, whether cloud or physical. I'm calling domains

assets too. I'll get back to that a little bit later. Also SSL certificates. And potentially, you might want to treat your software as assets too:

things that you have that you need to keep track of for expiration dates or other things. One of the things you notice about all of these things is that you're paying for them in some way or other. You bought it initially, or you're paying a fee to keep it up. So that's kind of my framing on this. So, that's quite a few lionesses and cubs. And you want to be sure that you know where all of them are. I mean, if you're a ranger in a conservation park that has a bunch of lionesses and cubs, you're not going to want to let the tourists go drive around on safari if you don't know where all the lions are. Because if you do, maybe, you know,

they could get eaten, and that would of course be bad. You don't want to lose any tourists or have anything dangerous happen to them. So these are some things you pay for. You definitely want to track them. You want to make sure that you're paying for the things that you're using. Say you've done your blue-green deployment, which is the model where you have one set of systems that is running in production and another set of systems running the new version, and you swap, and now the one is in production and the other one isn't. Well, now you've got a bunch of assets, whether they're instances or containers or servers or whatever, that used to be production. They're not anymore. Do you turn them off

so that you're not paying for them anymore? Do you immediately roll them into the next version? You want to make sure that if you're not immediately using them for something else, you turn them off so that you don't get charged for them. You also probably want to make sure

that you've got, as I said, got them all shut down when you don't need them anymore. Another reason to keep track of your assets is that you want to make sure that you're working on the right environment. I mean, some people are lucky enough to be able to pen test against their production environment. Most of us have to do our testing against staging to make sure that the customers aren't actually impacted. Well, how do you know for sure that you're testing on staging? If you're like me, you have a lot going on and you can't necessarily keep up with everything that the infrastructure team is doing and what is production and what isn't. Now, the ops side, the engineering side,

will often say, oh, well, we can just go to our cloud provider and look it up there, or we have this cloud tool that will do all this kind of stuff. But that tool is probably not going to give you any way to see who actually owns the asset. And it's probably also not going to say what the purpose of the asset is, so unless you have really informative host names, you don't know for sure exactly what it's supposed to do. So a centralized asset database can help you find out all this stuff. And it's a good place to track your assets, just having it all in one centralized place rather than going from spot to spot

to spot. So, story time. My first incident at my new company: there was a system that was running tcpdump, and all I had was the IP address. I didn't know what the host name was. I didn't know why it was running tcpdump, or whether it should be. And I didn't know who owned it, because all I had was the IP. So I went and I realized, hmm, I have five AWS accounts. Even with the command line, it's still a nuisance to go in and look through each of those accounts and try to find that IP. Went to my cloud tool; it wasn't really obvious.

They'd given me an account, but I hadn't ever logged in before, so I hadn't been able to play around with it. Couldn't find the IP there. I finally had to go to our SIEM, and I was able to find out what the host name was only in the SIEM. Because of course DNS lookups didn't work from my laptop in the office to the AWS infrastructure. It would have been a lot easier, and an immediate non-issue, if I had been able to see that this was a testing instance some of the folks were using to try to reproduce a customer problem. Hey, yeah, of course they're gonna run tcp

dump. They wanna see exactly what's going on and how the packets are flowing. They can do that on their test environment. So that's one case where it would have really, really been useful for me. So I know this isn't an easy problem. Even the NSA is having trouble with it. And I think that's pretty rich, given that Rob Joyce, the former head of the NSA's Tailored Access Operations team, actually gave a talk at USENIX Enigma a couple of years ago. And one of the biggest points in his talk was, hey, please make sure that you track all your assets. It was a really good talk, and it's basically, hey, get these basic things right. But hey, I guess

it's a lesson that sometimes people don't practice what they preach. So how do we help? This is an exchange that happened a couple weeks ago. Somebody had been communicating with the team they were working with to do some pen testing, and they had found only some of the assets, so they knew they had more surface to explore, and then somebody else found 120%. Well, that's awesome. So what kinds of things do you do? We like to collect data, right? If you have DHCP logs, if they're going into your SIEM, if you're using DHCP, that can be a good way to find assets. And you're already getting that data. The scans that you're doing to discover vulnerabilities or just find out what's

out there may be useful in building your asset inventory.
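The core of the centralized asset database she's arguing for is simply keeping owner and purpose next to the technical identifiers. A toy Python sketch of the record shape; the field names are my own guess at a reasonable minimum, not any particular product's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    ip: str
    hostname: str
    owner: str        # the team or person to ask about it
    purpose: str      # why it exists, e.g. "customer repro testing"
    environment: str  # "production", "staging", "test", ...

# Illustrative inventory; values are made up for the example.
inventory = [
    Asset("10.0.3.17", "repro-box-1", "support-eng",
          "customer repro testing", "test"),
]

def who_owns(ip: str, assets) -> Optional[Asset]:
    """Answer the 'all I had was the IP' question from the story above."""
    return next((a for a in assets if a.ip == ip), None)

hit = who_owns("10.0.3.17", inventory)
print(hit.owner, hit.environment)  # support-eng test
```

With a record like that, the tcpdump incident becomes a single lookup instead of a tour through five AWS accounts and a SIEM.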

So you can help them by giving them this kind of information, to make sure that they know what all those systems are and what they do. So assets should be tracked and monitored, and I just talked about some ways that you can help ops track and monitor their assets. One of the things that's really hard, even for people who have some kind of asset monitoring, is how to update that asset database. Because in a lot of places, it's tied into their automation tool. And if you don't boot up the automation tool correctly, if it's shut down, if it's misconfigured, if it's not running, then that host isn't going to check in, and they're not going to know

about it. So we still want to know about these things, because we're still paying for them, because they are still part of the attack surface for our company.

We definitely want to do that. So we are scanning, we're discovering new assets. As I was saying, you are probably doing these scans anyway to find vulnerabilities, to find systems that you didn't know about, and hey, why not share? The other point is tying this back into DevOps. With the dev team now being able to create instances or Lambdas or whatever just as part of their work, they're probably not as good as a traditional ops person at knowing how to harden a system, or even thinking about, OK, if I'm going to create an instance, am I starting from a secure one? Am I just pulling some random instance in from the marketplace, which could have who knows

what in it? I mean, it might already have a Bitcoin miner built in. Probably not, but it might. And you wanna be sure that everybody, dev and ops, is doing the right thing there. So it's good to do some scans and help reinforce what they're already working on. Okay, the other thing you wanna do as you're scanning is look for outdated rules.

I don't know about you, but I don't get told every time there's a new deployment that involves switching from host group A to host group B or from software A to software B. And so sometimes there can be a deployment where that happens, and my firewall rules are suddenly out of date. They're pretty good at telling you when they need new firewall rules, but they're really not very good at telling you when they want you to turn off the old ones. They don't think about that. So scans can help you figure out whether or not you still need the firewall rules, whether you need the VPC configured this way or that way, or the load balancer pointed in this direction or that direction. So once again, that will help you

save money and it'll help keep your infrastructure more secure and it'll help remind your operations team that, hey, they should communicate with you as well as with the devs.
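One way to sketch that "find the rules nobody turned off" scan: compare the hosts your firewall rules reference against the current asset inventory, and flag rules whose targets no longer exist. Illustrative Python with made-up rule and inventory shapes, not any vendor's API:

```python
def stale_rules(rules, live_hosts):
    """Flag rules whose destination is no longer in the inventory.

    rules: list of (rule_id, destination_host) pairs
    live_hosts: set of hostnames currently known to the asset database
    """
    return [rule_id for rule_id, dest in rules if dest not in live_hosts]

# Example: host group A was retired in the last deployment, but its rule wasn't.
rules = [("fw-101", "host-group-a"), ("fw-102", "host-group-b")]
live = {"host-group-b"}
print(stale_rules(rules, live))  # ['fw-101']
```

The same diff works for load balancer targets or VPC routes: anything the rule points at that the inventory no longer contains is a candidate for cleanup.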

And maybe firewall rules and load balancer rules are other things that you can manage like assets and track in your centralized database.

That way, potentially, you can... well, I'll talk more about life cycles in a bit, but if you have an estimated life for your firewall rule, you have a built-in check: hey, I know it's time to make sure that this rule is still current, or can I turn it off? Obviously, we're also doing scans to find vulnerabilities. Everybody knows about Heartbleed, I hope; we all spent too much time on Heartbleed. So you probably do have some automated vulnerability scanner. But as a security person, if you have to go and say, hey, I got a finding, dear dev or ops or DevOps or SRE team, please patch this in your infrastructure, that's putting a lot of the burden on you.

Using the DevOps spirit of automation, if you can have it automatically open a ticket for them so that they'll patch their stuff, that's even better. And then the ideal state, if you can convince people to do it, is to have all the infrastructure, the containers, the instances, whatever, automatically update all by itself, so that your dev team is just using the latest thing automatically. And your CI/CD suite, which you can use to automatically deploy a build and test and make sure that everything still runs right, can just do the right thing. And you don't have to touch anything yourself. Automation for the win. OK. So obviously, tying into vulnerabilities, assets are things that should be updated.

That means things like life cycles, as I was talking about for firewall rules earlier, but also for hardware-type assets. It also means, of course, doing those patches that we were talking about and doing your best to avoid tech debt. So, asset life cycles. If you've got hardware, it gets old. It dies. Ideally, you want to replace it before it starts having failing parts. Yeah, I know a whole bunch of people who are still running Ubuntu 14.04; it's probably about time to upgrade to 16.04, if not 18.04. And you want to think about the life cycle of your operating system as well, just in case people aren't. Other things to think about: Lambdas. If you're not using one anymore... well, anybody who is using them in prod, you

just kind of set them up and there they go. And they keep running and you input stuff into them. And hey, if you've updated and you're using a newer Lambda, you may not turn off the old one. But if you've got a life cycle for it, then you're automatically prompted to turn that kind of stuff off. So that's useful. Domains and SSL certs, as I was saying, do have an inherent life cycle of a year or two years or whatever. And, you know, your DevOps team should be monitoring for that, but the security team is probably gonna be the one that gets yelled at if it doesn't get renewed in time. So it might be a good idea for you to build in some kind of

lifecycle awareness. And as I was saying, if you've got that centralized asset database and you can put that lifecycle field in, then you're automatically doing the right thing and making sure it gets renewed in time so nothing bad happens there. So, some things about how you sell patching and lifecycles. Obviously tech debt is bad. You don't want to have things building up and building up and building up, because if they do, it's much more likely that there will be a software vulnerability, if they haven't patched it or haven't updated to the newer version of whatever major tool it is. And I kind of think, well, okay, folks, I keep making these tickets for you to patch your vulnerabilities. Well, maybe a little
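Putting that lifecycle field in the asset database makes the "is this cert or domain about to lapse" check trivial. A hedged Python sketch; the asset names, dates, and 30-day warning window are arbitrary examples of my own:

```python
from datetime import date, timedelta

def expiring_soon(assets, today, window_days=30):
    """Return asset names whose expiry date falls within the warning window."""
    cutoff = today + timedelta(days=window_days)
    return [name for name, expires in assets.items() if expires <= cutoff]

# Illustrative lifecycle fields from a centralized asset database.
assets = {
    "www.example.com TLS cert": date(2018, 8, 20),
    "example.com domain": date(2019, 8, 1),
}
print(expiring_soon(assets, today=date(2018, 8, 7)))  # the cert, not the domain
```

Run on a schedule, this flags the renewal before anyone gets yelled at, which is exactly the "built-in check" the talk describes.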

bit better would be if you build in some automation to automatically do that. I'll stop bugging you. I'll stop coming to your standups and saying, hey folks, now there's 10 things that need patching. Yeah, yesterday it was 10, now it's 12.
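The life-cycle field mentioned above can be sketched as a tiny check over an asset inventory. All the asset names, field names, and dates here are invented for illustration; in practice the records would come out of your centralized asset database:

```python
from datetime import date, timedelta

# Toy asset records; field names are invented for this example.
ASSETS = [
    {"name": "edge-fw-01",      "type": "hardware", "end_of_life": date(2019, 1, 15)},
    {"name": "www.example.com", "type": "tls_cert", "end_of_life": date(2018, 9, 1)},
    {"name": "report-lambda",   "type": "lambda",   "end_of_life": date(2020, 6, 30)},
]

def due_for_renewal(assets, today, horizon_days=90):
    """Return the names of assets whose life cycle ends within the horizon."""
    cutoff = today + timedelta(days=horizon_days)
    return [a["name"] for a in assets if a["end_of_life"] <= cutoff]

# Run this from a cron job or CI and open tickets for whatever it returns.
print(due_for_renewal(ASSETS, today=date(2018, 8, 7)))  # → ['www.example.com']
```

The point is only that once expiry dates live in one queryable place, "automatically prompted to turn that stuff off" becomes a few lines of scheduled code rather than someone's memory.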

And leveraging ops's desire to just have everything happen automatically, with minimal interaction, is gonna be a really positive thing. So, let's see.

Let's move on to least privilege. So you do want to control the use of admin privileges. Not everybody needs admin or root access to your AWS account. Not everybody needs to be able to log into production. I know the security team probably does want to be able to, in case of incidents, but you probably want to restrict which dev and ops folks can log in and run things on there. One thing that I have noticed is that we're all kind of a little scared of the legal team coming down on us. We're all a little scared of auditors. And if you cut down on who has admin privileges, you can help them reduce that fear, and you can also sell it to people as, hey,

if you don't have the permission to do that, then if something weird happens, I'm not even gonna look at you as a possibility. Because we all know dev folks who would just kind of log in and start making changes on things, if they have the ability, to try to get something to work better. And then it's not controlled, and your auditors say, well, what happened here? And you're like, oh, looks like it was Joe again. Joe, come on, stop doing that. Those privileges are also things that you probably want to regularly audit. I just started at my company, and it looks like nobody has audited that kind of stuff for a while. So I'm having to start from zero, and I really wish they had

looked at these things every six months, if not every three months. So you can help your other security folks out by setting that up as something that is regularly checked on. Logs. I mean, I think we all really do like logs. And while we may not like looking through the logs for interesting stuff, it's really important to have them in case you need to do some investigation.

If the DevOps side doesn't already have a big fascination with the logs and being able to debug things when they go wrong, well, that's kind of unfortunate, but it gives you the opportunity to preach the gospel of visibility and debugging capabilities. I don't know if anybody remembers Ren and Stimpy, but I thought I'd throw that in there. So if we help ops get all the things we want logging to our centralized infrastructure, sometimes your ops team will find that they want to use those logs as well. Nowadays, it's easier to give granular access. You can set up your SIEM so that some people can

only look at these kinds of logs and not those kinds of logs, or these kinds of events and not those kinds of events. And just a reminder that I always throw in when I'm talking about logs: I always start with the outbound traffic, because it's the most fun. Next thing, data protection. Just a couple of notes there, because it's been really

a problem lately. So, some real-world examples of data protection failures, and how you can make them more relevant for your ops team. Your company's code in GitHub, I think you can also treat as an asset. There are so many things you could find there that maybe your ops team hasn't even thought of the security implications. Obviously you're not gonna make all these things publicly available if you think about it, but hackers will look for all of these things. Dome9, that's a cloud security company, checked some AWS API keys into GitHub in a publicly available repo just to see what happened. And they found that the first time that somebody went and downloaded them

was three minutes later. So that gives you some ammo to help convince your ops team that they want to take a look at these things and try to prevent it. Okay, if you've found some of that sensitive data, a reminder, just because you may not use git day in, day out: if they just commit over it, then you just go back one commit in the history and you will see all of the information. You actually have to go in and remove the commit; that's the way you clean the history out so the data is no longer there. And obviously, rotate the credentials. Trufflehog, if you want to look into finding stuff in GitHub: Dylan Ayrey, I

think is how it's pronounced, talked about this at BSides San Francisco. He gave a good presentation about it, Trufflehog, and some things he's working on. I actually have a link to that in the notes. I mean, nowadays AWS themselves are looking for AWS API keys in places like GitHub, and they will also reach out to you. But I think I'd rather find them myself rather than waiting until AWS gets around to scanning it. So, S3 buckets. I do think that they're assets as well. There is no way in the AWS UI to tag them, unfortunately, so unless they're named to reflect what they are and what their purpose is and their life cycle and so on and so forth, you may not know. My company is just

barely 10 years old, and I found an S3 bucket that had been created about a month after the company started, and they hadn't been thinking about permissions quite as much as they ought to have. There wasn't anything in there that was super secret or needed to be secure, but there was also no need for it to be there, and no need for it to be out in public view. Nobody had ever gone back and looked at all of the buckets. But fortunately, now at AWS, there are a few tools. You can just use the S3 UI. There's Amazon Macie, which will actually monitor not only permissions on the buckets, but also some of the contents, which is kind of cool.
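To make the GitHub-scanning idea concrete, here's a minimal sketch of the regex side of what tools like trufflehog do (real tools also add entropy checks and walk every commit in the repo's history, not just the current tree). The key below is the documentation example AWS publishes, not a real credential:

```python
import re

# Long-term AWS access key IDs are 20 characters starting with "AKIA";
# temporary STS key IDs start with "ASIA".
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_keys(text):
    """Return anything in `text` that looks like an AWS access key ID."""
    return AWS_KEY_RE.findall(text)

leaked = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_aws_keys(leaked))                  # → ['AKIAIOSFODNN7EXAMPLE']
print(find_aws_keys("nothing to see here"))   # → []
```

As the talk notes, a scan like this has to run over the whole commit history, because committing over a secret does not remove it from git.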

So with that, we are doing pretty well on time. Does anybody have any questions at this point or should I talk more about ops and how to win them over? Good question. Yeah, it's coming.

Going back to your point about automation, I work in a hospital. So a lot of people are afraid of automation because there's a lot of possibilities that things might break, especially if you automate patches. Do you have any suggestions on how to actually make some friends? Yeah. So how do you help people overcome their fear of patching? That's a great question. My best suggestion is to

you can build a test environment, a little isolated network. I know it means that you have to have one of this device and one of that device and so on and so forth. But if you can just pull one out of the rotation and test your patch on it and make sure it works, that's really the only thing that you can do. I mean, I understand, it's so hard in a hospital network, because you have the requirements of not only all of your PII and PHI that needs to be secure, but also sometimes the tools just don't get the updates themselves. And you don't even know if the OS update is gonna be compatible with your

tool. And that's important. If you have an ongoing support relationship with the vendor, the best thing would be making it their problem,

that they have to comply with all of these various compliance frameworks and HIPAA and so on and so forth. If you don't, then yeah, the best thing that I can recommend, it's not necessarily going to be easy, but you've got to test it yourself. You can't just roll it out everywhere and hope for the best, not in that situation. Yeah. Anyone else? Okay.

So, a little bit more about what operations people tend to want from their security teams. I did an informal survey of a bunch of folks that I know who work in operations, and I asked my own DevOps team at work what it is that they want. So, here's some stuff that they came up with. They said, okay, transparency. If they don't know who's on the security team and what they do, then they are gonna be less likely to think, oh, I should check with the security team if there's something going on and I'm not sure what it is, or I don't know what the implications are of making this change to our infrastructure or to our application

or to our system. A past colleague of mine and I had a lot of fun doing a security tea time, where we would brew some pots of tea and invite people from engineering and operations to come over and just hang out with us and be social, and interact on a more personal level. That just gave them more opportunities to get to know us and learn a little bit more about what we were about, and maybe ask us questions in a less pressured environment. And if they didn't wanna ask something in front of the whole team, hey, they knew who we were, they knew where we sat, they could come over and talk to us.

One of the other things is they wanna make sure that ops knows why the security changes are being recommended or made. And they wanna really understand what the security team is about and what its main goals are, because they don't wanna be at odds with their security teams, but also they do wanna understand stuff. They're curious, just like security folks. Next, ops would like you to be realistic, which is to say: yeah, there are some security changes that are really important, and they have to go in. But at the same time, there are some things that are only going to give you a little bit of a win, and they're going to impact productivity. They're going to impact an engineer's workflow a

lot. So the more that you can do to help the understanding of what's going on and why, and how it's not going to make their work harder, the better things will be. I don't know how many of us have said in the past to our colleagues, hey, don't work from the coffee shop, because public Wi-Fi is insecure. Or, hey, we're all going to this big security conference, maybe you should put your phone into airplane mode for the entire time you're there. That's not realistic for a salesperson. They're going to want to be available for their customers. So try to match what is a realistic threat model: if somebody doesn't have any private data, maybe

there's less of a risk of them getting hacked at a security conference. The other thing is, if you're making it easy for people to do the right thing, it will be more likely that they do it. Respect. I was gonna title this empathy, but I talked about empathy before. So,

we all know that the security team is supposedly the team that says no a lot. But the more that you can explain why something is happening, I've found, and just treat them like smart folks who want to learn and wanna do better, the more the operations team will understand and cooperate. They'll buy in better. And when you're starting out to make things happen, starting with small changes, proving that you're not gonna just break everything for the sake of breaking things, can also be a big positive. I know that kind of goes against the model of patching vulnerabilities, where you go for the biggest risk possible and mitigate that and then work down the line. But sometimes you've just gotta build a

relationship before you can come in and give people orders. So the more that we can know their priorities and help them out, and the more we can help frame what we're trying to do, the better it will be for everybody. As I said, ops does want to learn new things.

A lot of folks in ops and dev nowadays, DevOps, SRE, have come in from non-computing backgrounds. And so they have lots of questions, and they've also, by getting as far as they have, proved their ability to learn. So they'll love hearing, hey, these are all the details, or, I found this really neat hack, read about this, and if you have any questions, talk to me about it. That's a lot of fun. I've found that taking the time to reach out to people and teach will pay off later. Because the other thing that goes on, I had to throw up this SwiftOnSecurity thing that says, well, you know, ops really is doing a lot of security

work. The other thing that ops wants: they want jobs. Everybody's hearing about how security is a lot of fun and how there are a lot of job openings. And yeah, there are also a lot of job openings for DevOps, but sometimes you find somebody who's really curious and who likes learning new things. You can start feeding them security knowledge, and then you can hire them and they'll be on your team. And they will already have relationships with ops, because they've been there, and that is going to help them out a lot. So, final thing: if none of these other things work to motivate your operations team, give them shiny things. I do prefer to call it a reward rather than

a bribe. But if you just want to be blatant there, you can call it a bribe. Everybody likes stickers. How many stickers are there floating around here? Cube toys, also fun, although I think fidget spinners are a little bit dated. Chocolate rarely goes wrong. I will just say, though, about booze: I know pretty much all of us here are probably drinking a fair bit this week, but some folks don't drink, so maybe find out a little bit more about them before you buy them a bottle of whatever alcohol. There are some really awesome craft ginger beers and root beers that are fun, so you don't have to give people alcohol. So with that, I'm kind of saying, you know,

these are a bunch of things you can do to work with your ops team and help them succeed. Some of them, maybe even most of them, hopefully all of them, are things that you are already doing. If not, these are some more things to think about. So thank you very much.

So if you have more questions, now would be the time. I see a hand over in the back corner there. Awesome.

Do you consider IAM policy to be an asset? And do you let your DevOps team control it? Because it's kind of hard not to let DevOps control IAM today, with the functions and stuff. How do you make boundaries? Yeah, yeah, yeah. Pretty much anything that you can audit, I think you probably want to consider an asset.

As you're starting out, you're probably not gonna start out with a security person, or somebody who's thinking, oh, I should make sure I review this when someone leaves. If I have two developers who stayed and then there's the one who left, they're probably thinking more about how to pick up that person's workload than about how to turn off all of the access that that person had. So, as things evolve, doing things like making sure that root access to AWS is the main thing that is able to set IAM policies, that's useful. There are tools; AWS Config now will actually help you audit your IAM stuff, so that when you're trying to make sure that people are only doing what

they should be doing on AWS, they do have some limitations.
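As a sketch of what that kind of audit can check mechanically, here's a toy pass over an IAM policy document that flags bare-wildcard Allow statements. The policy itself is invented for the example; a real audit would pull the actual policies via the AWS APIs or AWS Config:

```python
import json

# A deliberately over-broad policy document, invented for illustration.
POLICY = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")

def overly_broad_statements(policy):
    """Return Allow statements whose Action or Resource is a bare '*'."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a single string or a list in IAM JSON.
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged

print(len(overly_broad_statements(POLICY)))  # → 1
```

Checking policy files into source control, as suggested next, is what makes a review like this repeatable on every change.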

If you're lucky, you can potentially even check that stuff into your source repository, and then, yeah, you're definitely treating that as an asset. The only thing I can say is you have to first sell them on the necessity of restricting the control, and then review and review and review to make sure that they have the right access that they need. I mean, it's a philosophical thing, whether you give them possibly a little bit more access than they need and make sure they can get their job done, or give it to them in dribs and drabs until they are able to do everything that they need. It also depends what stage your company is in. If

you're at a smaller company, there's probably a little bit more tolerance for starting small and then adding permissions, versus if you're at a big company, you're probably gonna wanna start with more and then cut back gradually, as they are convinced that they don't need this or that and you can justify it. Just let's wait for the mic. Would you characterize customer data as an asset as well? And if so, what's your point of view in terms of devs' treatment of that versus security ops' treatment, specifically in the realm of integrity? Right. So I do think, yes, that customer data is an asset, though you can't track it in the same way that you do track

everything else. I mean, the contents of a database:

if you're lucky, your asset database with your customer data is probably changing all the time and growing, because you're getting more customers. So yeah, you definitely do need to track it. It's another one of those growing-pains-type problems. What I've found is, when you're starting out, you don't have any restrictions on who can do what on AWS, on who can access the data, on how it's able to be accessed. But as you go, you definitely have to pay a lot of attention to it. You have to get yourself a way to control that. And I was torn about putting customer data into this talk, but it doesn't really fit in this formulation, because it's

such a special thing and there are so many additional regulations around it.

There's no easy way to inventory it, but you definitely do want to control who can get access to it and how they can get access to it. I'd be happy to talk after about this, because I don't think there's one right way to do it, but there are a number of approaches depending on what your situation is, what your industry is, and how you're... I mean, even how your security team is tied into engineering, et cetera, that you want to think about. So, anyone else? No? All right. Well, thanks a lot, folks. This was fun.

Good afternoon, everyone. Welcome to BSides Las Vegas, Ground Floor. This talk is Don't Bring Me Down: Are You Ready for Weaponized Botnets? by Cheryl Biswas. Before we begin, we'd like to thank our sponsors: our inner circle sponsor Rapid7 and our stellar sponsors Amazon, Oath, and Simil. It's their support, along with our other sponsors and donors, that makes this event happen. The talk is being live-streamed, so please set your phones to silent mode. And if you have questions, just put your hand up at the end of the talk and I can bring the mic down to you. Thanks.

Hi, everybody. Thank you very much for putting up with the technical setup, and thanks to the crew here for getting me set up. I really appreciate it. Okay, who here loves a really good, scary story? One that's even, yeah, with sci-fi? Maybe thinking Lovecraft and Poe together? Oh yes, please. Exactly. And you know when something seemingly insignificant happens, but it's actually a foreshadowing of doom? Because that's how they always begin, right?

...a router, but unbeknownst to this hapless device, the firmware embedded within it was infected. Okay, so, quick introduction. I'm Cheryl Biswas. I go by encrypted on Twitter. I'm from Canada, and you're all welcome anytime. I work as a strategic threat intel analyst, I like saying it that way, with a bank. I have a degree in political science. As you can see, I'm interested in a few things. And I am very excited to be part of the Diana Initiative happening this Thursday and Friday, and so very proud of our team and the work that we are doing. Okay. So this, of course, is the obligatory disclaimer: my views, my views alone, not those of my employer, past or present.

So let's talk about the evolution of evil.

Yes, there are many good stories to be told.

We'll question our choices around IoT. We'll find out who's out there. Someone's knocking at the door, someone's ringing the bell. We'll take a quick look at the monetization, and then a deeper look, perhaps, at the money trail. And then we're going to play a little game of what-if.

Okay, I wrote a little ditty about this, but botnets have been steadily evolving, particularly noticeably since the beginning of 2018. There's no question. What we need to do is work with the blue team, to help them understand better what needs to be defended, and with the red team, to help them understand what the attackers are leveraging, to shore up those defenses. And my goal here today is to share with you what I've been seeing, what makes me scared and afraid to fall asleep at night, and maybe to help you go back and revisit and review your systems and see what it is you might be missing. Because something wicked this way comes.

So let's get started, shall we? When I think of botnets, and I think when all of us think of botnets, it's more in terms of an outage, an inconvenience, a nuisance factor. Am I right? That used to be it, generally. It was a temporary issue, something that we could recover from. I'm gonna leave that thought with you. All right, so think back to February of this year. I wish I had candy, because I would offer it to you. Who here can tell me what happened in February of this year? It was big. Who here uses GitHub?

A bad thing happened to GitHub in February. It happened at the size of 1.35 terabits per second. It was a freaking awesome DDoS outage. That's a distributed denial of service. That is a botnet attack of epic frickin' proportions. And we had never seen that. That was twice the size of the Mirai outage. And it wasn't just the one; on the heels of that first one, there was a second one. Now, it happened to target a particular group of servers. These were memcached servers. I'm gonna guess that most people may not know what memcached is, am I right? Would you let me just give you a quick explanation? Okay, I had to learn this one at the time as well. So basically, cache means memory; memcached is

the setup on these servers to enable them to respond more quickly. So it's caching. The problem with the memcached servers was a configuration issue. My job in threat intel is to be following the trends and seeing all the things that happen on a daily basis. I read a lot of Twitter, I read a lot of news, so I see a lot of things, and then I get to connect the dots. I'm weird and I find it fascinating, and then I make a report every day to tell people who don't find it quite so fascinating. However, there have been some major misconfigurations, and that has been at the heart of some pretty bad attacks.

I would call that a trend, and a trend that we have control over. This is something that we can, yes, mitigate, but more importantly, address and prevent up front. Something we need to be aware of. Along the lines of default passwords being left on servers that are exposed to the internet, or that should not even be on the internet. Case in point: MongoDB and CouchDB, which got massively pwned by ransomware not so long ago. Hard lessons learned. If you do a Shodan search, they're still out there. So are a freaking lot of these memcached servers. They are out there too, and they're not supposed to be internet-facing. How did that happen? So we are in 2018,

and we have discovered the meaning of volumetric attacks. The bigger the better. This is bang for your buck. My concern is, it's not just about an outage, and it's not just, oh, it's a bunch of stupid servers that got left open. I have to take it to the next level. I have to play the what-if game, and I have to say: so what if somebody wanted to bring somebody down in a very bad way? You find a group of these servers, you go after them, and you have the Eastern Seaboard go down for a good hour, half a day, possibly more.

And let me remind you that we are an e-commerce-driven society, and we feel the pain in terms of dollars. This is a very real attack. This is hard impact.

So in the case of those memcached servers, instead of boosting performance, the caching enabled the attackers to hit with an amplification of 51,000 times.
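The arithmetic behind an amplification figure like that is just the ratio of response size to request size: a tiny UDP request with a spoofed source address makes the server "reflect" a huge response at the victim. The packet sizes below are illustrative round numbers, not measurements from the actual attack:

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification: bytes delivered to the victim per byte sent."""
    return response_bytes / request_bytes

# e.g. a ~15-byte query eliciting a ~750 kB response from a misconfigured
# memcached server -- the same order of magnitude as the reported ~51,000x.
print(amplification_factor(15, 750_000))  # → 50000.0
```

That ratio is why a modest botnet aimed at open memcached servers could generate terabit-scale traffic.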

The spike was, as I showed you, ridiculous. It was off the charts. And that's not the only time that's going to happen. The people who are tracking this, watching this, the security researchers who know a hell of a lot more than I ever will, are very concerned about this, because they know this can happen again. And they know that it's not just cyber criminals who are looking to monetize it and potentially sell it on the dark web, and part of my job is going down there and taking a look at what's being bought and sold. It's nation states, and it's the games that nation states love to play. When something like this gets weaponized at that

level, well, we all know what happened with the Shadow Brokers, right? Yeah.

Pretty much everybody here should know what the CIA triad is, right? Confidentiality, integrity, availability. It all matters equally. In the case of a denial-of-service attack, you're losing your availability. And uptime is everything in business. That's profit and loss right there, especially when you're talking in terms of internet and e-commerce, and losing transactions and losing customers. Very, very big. So a massive DDoS attack is the destruction of your availability.

My question, though, is: what if you were able to leverage those botnets to come after the other two sides? What if something's messing around with the integrity of your data? And I'm talking financial data. You do not want there to be an issue with financial records, not ever. That's why they run those things pretty much on mainframes. But if you found a way, and I know people who know how to do this, to get in and breach those systems leveraging a botnet, we have a big problem. Or what about confidentiality? That may not necessarily seem like an issue right now, but that's your data, information about you, and you do not want that out in the open for

everybody because you got breached by a botnet. And there, well, I'm gonna just point the finger right back at Equifax. We'll talk a little bit about the Apache thing. Apache Struts, anybody? There's a lot that can go wrong here. There's a cable that connects Europe to the United States under the ocean. If that had to suffer a hit of 1.35 terabits per second, it would shut right down. It would be like putting gum in there, and then you'd have a big issue. We can't really weather that kind of an outage. That's a lot of cost.

Ah yes, imagine a zombie apocalypse of crockpots. Welcome to the hell that is IoT, right? Oh my god. Yes, but it's true. This is our world now. And default passwords are de rigueur, right? Embedded system vulnerabilities, the firmware. It's everywhere, and you can't fix this. You sure can't ask your neighbor next door to go online and download the new firmware patch. I've tried that. Oh my God, I'd rather pull my hair out. I'm sorry. No. And interestingly enough, for a disposable society, people don't throw these damn things out. People hold on to routers and use them for five years. That is why there are unpatched routers. D-Link and TP-Link and, I learned how to say this yesterday, Huawei. Don't use it. And a whole

host of other wonderful things that people bring into their homes to get better signal strength and connect to the internet. All of these are unpatched, and they got them from their friend or they got them from their sister or they got them from somebody, and the things are four and five years old and hopelessly out of date and hopelessly insecure. But wait, there's more. We have not factored in the number of devices that are exploding in developing nations, right? We're not just thinking of the Western world and North America and Europe. We ought to think of the whole world. Everybody is in on the action here. Botnets are no longer an inconvenience and something that script kiddies played with at Christmas time

when they were being the Grinch and knocking down Xboxes and PlayStations. We are so far past that point. I'm afraid that in many people's minds, they're still seeing it as, oh, it's a DDoS, it's a denial-of-service attack, it's a pain, but it's not that bad. No, no, no. I'm telling you today, it is that bad, and I'm hoping I can show you why. Because these have become one more weapon in what I consider to be a digital arsenal for the games that nation states play. There are no referees and there are no rule books. I thought these were a couple of very well-chosen quotations about just what we're looking at.

These are people who are significantly concerned about what we face. And we have experienced the impact of DDoS attacks a hundred thousand to a million strong in terms of devices. Just think about it for a minute. How do you harness a million connected devices? That's staggering, but that's now become the new normal for us. We live in a society that's consumer-driven; everything needs to connect. The manufacturers are only too happy to comply. That is dollars in their pockets, and in the race to the finish line, security is so far behind it's not even an afterthought. What do we do? Do we regulate it? We can't regulate it. Who wants to be a regulator? Put your hands up. I don't think so. No.

These devices are essentially unmanaged. Like I said, they're old, they're unpatched, and they're still in use. And how in God's name are we going to track all of them? They're in every house, three or four or five of them. So what we're seeing is a playground, a playground for criminals and attackers to take advantage of what we are not able to secure, not willing to acknowledge, not prepared to deal with. The Mirai botnet, let loose, unleashed an avalanche of threats that we're just now beginning to realize. And I'm gonna present to you a group of botnets, one of which is Wicked, which takes the Mirai code to a whole new level and is layer upon layer

of botnets. It's not just one.

And attackers have the tools and the freedom to be able to create new threats that we're not anticipating and therefore really not prepared for. We talk about ransomware evolving and becoming modular; it's sold, it's ransomware-as-a-service. I can tell you for a fact, botnets are the same. I've gone digging and probing. Yes, they are monetized to the hilt, they are available as a service, and the bar is at the lowest common denominator for people to get in and use them. You do not want this.

If you're on Twitter, somebody that I like to follow is Bob Rudis. He goes by hrbrmstr. And he is a data scientist, works with Rapid7, tracks things like crazy, and shows some fascinating findings. He's also, for me anyway, a canary in the coal mine for threats. He does show some things as they are coming up and says, listen, people, get off your asses, pay attention. So I'm suggesting him, calling him out now, as a good source to be following. All right, let's see who's out there, shall we?

So, do you remember where you were in October of 2016?

Who here remembers how they were impacted by Mirai? Were you using Facebook, or trying to? Maybe Twitter, or trying to? Maybe Amazon, eBay? Trying to send an email going, Mom, the internet's down? I heard that one. It's broken. The internet is broken. And it actually really was. There were a whole lot of people all of a sudden saying the internet's broken, because it didn't come back. And the whole Eastern Seaboard was down for a prolonged period of time. That was a watershed moment.

We had three waves of attacks. I remember following this because it was unprecedented, and it was all over. And when I got my internet back up, and I'm in Canada, we were briefly impacted as well; Dyn has a lot of email accounts tied to it, some of them under Rogers. There were 100,000

endpoints identified in this series of attacks. And it came in at a strength of 1.2 terabits per second.

This was noteworthy. As they tried to mitigate the attack, the attackers were able to respond and react. Are we prepared for something like this when they come at us at a much higher level? That is something I want to hammer home. It isn't just about an ordinary tech, it's when they really know, and I'm sorry, I'm a huge fan of Die Hard 4, fire sales, and big fat scary nation state problems, but they happen. And I'd much rather we were prepared for it now than trying to figure out what to do with it during. I present to you, Botnet says weapons, only this was Mirai. Could you imagine if this happened tomorrow and it looked like this all at once? We can't talk to each other.

We can't coordinate a response. Businesses, massive businesses, that really aren't designed to suffer lengthy outages but believe they are prepared. That's what a DDoS attack like this portends. I have sat in on sessions trying to show people disaster recovery and business continuity rationale, to explain to them why you don't just have a once-a-year, let's all get in the boardroom, let's have donuts, and let's make sure that you've got the call tree, and we're all gonna meet at this place, and is that barbecue on for Saturday, Bob? Yeah, okay, great. No, I wanna talk about: we sit down and we actually have a playbook, and we don't just dust it off once a year, but we update it on a regular basis because we use

the damn thing. And we use the damn thing because things will go wrong. And this is one of the things that will go wrong that will really mess us up. If you've got a playbook and you have a disaster recovery and a business continuity plan in place and you're actively using it, that is preparation. I'm preaching it.

I present to you the wall of fame.

A lot of these are from this year alone, and I'm gonna show you the ones that I think have had the most impact and damage, because those are the ones that we need to be learning from, and what they're carrying forward.

Hide and seek botnet. There's a red word there for a reason. This was a game changer, because up until this point, you could flush botnet malware out of your system by resetting things. Have you unplugged it and plugged it back in again? It's a great solution, or it was anyway, for things like botnet malware. Heck, even with VPNFilter, did we all not get that notice from the FBI to please power off and power on our routers?

It doesn't work with this particular botnet. This one figured out how to achieve persistence. So when you power it back up, it's there. It comes back. The cat came back the very next day. That, my friends, is a weapon. Now, hide-and-seek had a few other really clever tricks up its sleeve. Persistence was just one of them. It represented to me the evolution of botnets in terms of offering attackers more than just the chance to monetize or drop crypto miners. In the right hands, these could be used in a very bad way, in a very targeted attack. It was complex and it was decentralized. It had anti-tampering built into it, so you couldn't even take it back down.

What was also interesting about hide and seek was that it appeared first in January, but then it came back new and improved in May with the ability for persistence. Ah, but then it came back in June. Only this time, and this is what I think is important, it could go after database servers. It's not just going after the usual run-of-the-mill IoT stuff, right? If you are an enterprise and you have servers, and you do, you need to worry about stuff like this, and you need to be saying: what mitigations do I have in place? How am I watching for this? What am I monitoring for against this? Do I have the right IOCs put into the SIEM? That's what I want to be able to get across.
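The SIEM point above is worth making concrete. Here is a minimal sketch of what IOC matching in a SIEM boils down to: scan events for known indicators. The indicator values below are invented placeholders (documentation IP ranges, a dummy hash), not real hide-and-seek IOCs.

```python
# Toy sketch of SIEM-style IOC matching over log lines.
# All indicator values are invented placeholders for illustration.
KNOWN_BAD_IPS = {"203.0.113.50", "198.51.100.7"}         # RFC 5737 example ranges
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # dummy hash

def match_iocs(log_lines):
    """Return (line_number, line) pairs that mention a known indicator."""
    hits = []
    for i, line in enumerate(log_lines, 1):
        if any(ip in line for ip in KNOWN_BAD_IPS) or \
           any(h in line for h in KNOWN_BAD_HASHES):
            hits.append((i, line))
    return hits

logs = [
    "2018-08-07 10:01 ACCEPT src=10.0.0.4 dst=10.0.0.9",
    "2018-08-07 10:02 ACCEPT src=10.0.0.4 dst=203.0.113.50",
]
print(match_iocs(logs))  # flags the second line
```

Real SIEMs do this with normalized fields and threat-intel feeds rather than substring matches, but the principle is the same: you can only catch what you've loaded indicators for.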

Then there's Mylobot. Has anybody heard of these ones as I'm bringing them up, by the way? Feel free to chime in if you have. Okay. I said, holy shit, when I saw Mylobot. Again, it's the sophistication that is being built into these that makes it totally weaponizable to me. So: anti-VM, anti-sandbox, anti-debug. They already know what we're going to try and do. They're one step ahead of us. Obfuscation, right? If we can't see you, we can't find you. So it's wrapping things in encrypted files. I'm thinking, like, remember Stuxnet and layers of wrapping? Exactly. People were paying attention, I see, using this. It doesn't have to call back home for 14 days. It can be quiet, stealth mode. Three freaking

layers of malware, one on top of the other, to engage and activate. But the last one, and this one is juicy: memory resident. Fileless malware is a very bad thing. But this one is also interesting because it's a hunter-killer. It doesn't want anybody on its turf, so it obliterates other botnets. If we got the idea in our heads to deploy a defensive botnet, they already know how to get rid of it. And it is multifunctional. This can deliver any payload you wish. Very nasty stuff there.

And that brings me to one of our most recent ones, VPNFilter. This one changed the game in terms of being a weapon leveraged, designed by a nation state for use against another nation state. I'm thinking in terms of Stuxnet: weaponization at the digital level. We know that Russia has a campaign against Ukraine. We understand this. They've gone after them with BlackEnergy. They've gone after the power grid and ICS. This botnet, I believe, is a foreshadowing of the kind of capabilities they are developing that are putting us at very serious risk. And yes, our critical infrastructure, and those warnings, I cannot stress them enough. I can't say enough about what could go wrong there. But yeah, they are

absolutely targeting it. And what if you had a botnet loaded with the right things, that we could not bring down, that could go after our critical infrastructure? Water. Not just power. Water. What's interesting about VPNFilter is that they knew to go after the shitty little routers in our homes, where there were tons of vulnerabilities that already existed, and they knew they were unpatched. Easy to leverage. And then they loaded this up, and this is why I think the weaponization is so important: wiperware and persistence. Because for me, wiperware means you never have to say you're sorry. It's gone.

Smominru, and I hope I said that right, came in at the beginning of January. And that's when I was awakened to what this year was gonna be, because I saw that and I thought, holy crap, why is nobody else getting excited about this? It wasn't just that it was a freaking massive botnet, and it is a freaking massive botnet by any terms. This was a miner. This was dropping crypto mining malware where it didn't belong. It wasn't just going after Joe and Bill and Jane's devices to load it up. It wasn't at the level of the individual. It was at enterprise level. And at this point, enterprises were like, that's not gonna happen to me. My servers are safe. You get this shit on your enterprise servers,

you're definitely gonna have an impact in terms of efficiency. It's a huge resource hog. It's also got the potential to bring more than just crypto mining malware, and that's why we need to pay attention. What was interesting, in terms of defense, is that it avoided sinkholing. You couldn't just bring it down. And another thing: we're talking a lot about living off the land, and using things native, say, to Windows and to the operating systems to avoid detection. This knew how to harness Windows Management Instrumentation to its benefit.

I present this one to you proudly. This was such a great graphic, I just thought I'd share it with you. But it's advanced and it was scary. Here, I'll go back. This went after a lot of devices: web servers and modems and all kinds of connected IoT devices. It leveraged multiple vulnerabilities, which comes back to the fact that we're just not on top of patching things. But we don't have control over the things that don't get patched if it's living in somebody's house. It had, which was interesting, an SSH scanner that could guess the username and the password of devices that exposed their SSH port. It had a worm. Now, a worm is a self-replicating piece of malware. You set it out there and it

just continues on its own and carries through things. We're going to talk about worms in a little bit. And it went after content management system platforms, and there are a lot of those out there, and they are, for whatever reason, great targets because they seem to be very vulnerable.
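Since default credentials keep coming up, here is a small defensive sketch: checking whether a device still uses one of the factory-default username/password pairs that a Mirai-style SSH scanner would simply guess. The credential list is illustrative, not an actual scanner dictionary.

```python
# Defensive sketch: flag devices still using factory-default credentials,
# the same short dictionary a Mirai-style SSH/telnet scanner would try.
# These pairs are illustrative examples, not a real attack wordlist.
FACTORY_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def audit_device(username, password):
    """Return True if this device would fall to a default-credential scan."""
    return (username, password) in FACTORY_DEFAULTS

print(audit_device("admin", "admin"))          # True  -> change this password
print(audit_device("admin", "correct-horse"))  # False
```

The point is that these scanners aren't cracking anything; they try a few dozen known pairs, and that is enough to recruit huge numbers of unmanaged devices.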

All right, I'm just going to talk briefly about banking botnets, because this is probably one of the things that most people have associated with botnets. And this is rooted in tradition, and I've seen a lot of them. Currently we have something new, and it's called the Black botnet. It's leveraging Ramnit, which is a very notorious piece of banking malware. It's powerful stuff.

It goes after the credentials. It's able to get in, and it's a persistent threat for us in the banking world. We had the DarkCloud botnet that carried a banking Trojan. And from my experience anyway, botnets and Trojans go together like horse and carriage. BankBot Anubis is a concern because it went after Androids. We know how vulnerable Androids are. They're also prevalent. They're so widely dispersed and hard to maintain. TrickBot, there isn't a day that goes by that I don't see a warning notification about TrickBot. LokiBot, which again is persistent, perennial, very good at stealing credentials. This is typical. This is where we think in terms of monetization. But now we've moved into the realm of miners.

I present to you mining malevolence, and this was out of the gate from 2018. I had a bad feeling about this.

The increase in miners is significantly more than this; it just continues to rise. What's very interesting is that we went from being threatened by ransomware to being threatened by crypto miners. There's been a significant drop in ransomware, pretty much because it's easier for the attackers to get the crypto miners out there. They're making money by doing it. They don't have to demand a ransom from somebody; they don't have to engage with people. It's not an uncertainty, it's a certainty. You are using somebody else's resources and making money from it. Guaranteed, guaranteed return on investment. And the victims don't necessarily know you're there. Most of the time, they don't know you're there. A firm that I know has reported that they

had blocked 2.5 billion attempts in six months. Yeah.

They're going at it hard and heavy because they know they can make that money. One of the crypto miners that caught my attention was Zealot, and I'm going to talk about that a little bit later. Why? Well, you saw the words Apache Struts in there, right? If you don't know about Apache Struts, I'm going to use the words Equifax and breach for you. It also leveraged two of our, well, Shadow Brokers vulnerabilities. When you leverage old NSA digital weapons in your attempts, you know you're gonna go someplace. You know you're serious.

All right, so let's talk a little bit more about Zealot. And this, for me, is an indication of where we could be trending now in terms of crypto miners. Because if you get a nation state, or somebody who can be hired as a proxy, a high-level cyber criminal gang who's willing to work for a nation state as a third party, they can do something like this. Apache Struts is a widespread web framework that's used at enterprise level, and I have had to issue advisories about it to my corporation, because when that goes down, we know it's very serious. We know it's serious because in 2017, there were three advisories issued for it. And for whatever reason, Equifax didn't get the memo

and it didn't patch, and then there was this freaking massive breach. And it was staggering, because Equifax doesn't just get your personal information; it's got your banking data. You do not want that stuff out there. That's damaging, very, very damaging. You don't get it back once it's out there, either. These were the two CVEs that were being leveraged. EternalBlue, well, it was used in WannaCry. It's designed to help you gain lateral movement through a network. Once you gain access to a network, you want to be able to go through the whole thing and gather all the credentials and all the data, so that you win the whole deck of cards. You own it.

Zealot is interesting because it's able to utilize PowerShell, and what we're seeing from an attack trend perspective is that attackers have been living off the land. For about the past two years, there's been a ton of talk about PowerShell. It's a really great tool for defenders, but also for attackers. And if you're using something that's already on the land, it's not going to be as easily detected. Your systems aren't looking for that. Empire as well; it's a post-exploitation tool.

This affected servers, but the fact is, you can collect compromised servers into a botnet and wreak very powerful damage. We have to be thinking forward. MikroTik routers. Let's talk about the routers of choice here. MikroTik is a Latvian company

used so much over in Europe that it's not inconsequential to us. We might not use MikroTik over here, but we'll be impacted by the botnets and the damage they create globally. And these were collected in massive numbers on several occasions. And there's currently a campaign, as you can see from the data on there, to be worried about. In this attack,

they're concerned about sophistication. I worry when I see the word sophistication, because we've usually associated denial of service attacks with something at a more simplified level. Once you get sophistication in there, you know somebody is targeting you. Somebody is gonna come at you and do some very serious damage. You may not be able to detect it. You may not be able to mitigate it. All right. Are we warmed up? Let's play a game of what if. All right, so my theory was: where are the attackers going to go with this? What do we think may come of it? There's a bunch of devices out there that can be recruited into their army. We need to be looking at the questions in terms

of how much damage they could do, and how we are prepared to step in at the time of attack if we can't catch it beforehand. How do we prepare our systems to really monitor and look for things ahead of schedule? Are we prepared to deal with it after the fact?

Botnets are designed with a purpose at hand. They need to call home, so they are usually grouped with a command and control server, but not all the time. Some botnets have been designed to use a peer-to-peer, decentralized model instead.

Why is that important? It's easier to find the ones that are in a command and control setup. Peer to peer, they're just talking to each other, and that's much harder to track. That's a problem. They're designed with one role. You had one job. They do their job very well. Go forth and infect. One of the things that they are leveraging in some cases is wormable botnets. So again, we talked about worms and being able to propagate without human interference. We don't have to worry about it. We set it out there and it goes and it does what it's told, more effectively than any human employee ever could. So while botnets for now seem to have been used for the purposes of creating

denial of service attacks, and for monetizing through dropping crypto miners on things, we've already seen with VPNFilter that they can be used as a weapon. And we are expecting more of the same.
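The command-and-control versus peer-to-peer point above can be sketched in a few lines: model each topology as a graph and see what happens when you take a node down. The node names are made up for illustration.

```python
# Why peer-to-peer is harder to take down than command-and-control:
# in a C&C (star) topology, removing the hub strands every bot; in a
# peer-to-peer mesh, no single node's removal does that.
def reachable(adjacency, start, removed):
    """Nodes reachable from `start`, ignoring `removed` nodes."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(adjacency.get(node, []))
    return seen

# Star: every bot talks only to the C&C server "c2".
star = {"c2": ["b1", "b2", "b3"], "b1": ["c2"], "b2": ["c2"], "b3": ["c2"]}
# Mesh: bots relay commands to each other.
mesh = {"b1": ["b2", "b3"], "b2": ["b1", "b3"], "b3": ["b1", "b2"]}

# Take down the C&C server: b1 can no longer reach any other bot.
print(reachable(star, "b1", removed={"c2"}))  # {'b1'}
# Take down any single peer: the rest still reach each other.
print(reachable(mesh, "b1", removed={"b2"}))  # still contains b1 and b3
```

Sinkholing exploits exactly the star case: seize or block the hub and the botnet goes quiet. A mesh has to be dismantled peer by peer, which is why hide-and-seek's design is so troubling.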

In the case of Mirai, for example, the people who wrote Mirai went after Brian Krebs, the security researcher. That was a weaponized attack. That was 2016.

I present to you great worms in history.

This was interesting to research. Some of us may know one or two of these. The Morris worm was interesting because it was supposed to be a prank, but it was a prank based out of pride, so this was human fallibility, and it went terribly, horribly wrong in a hell of a fast time. And the brilliant student who produced it had to deal with the fallout, and he was scared to death. He did get arrested eventually. He did try to bring it under control, kind of like in The Sorcerer's Apprentice, only he was the sorcerer. He couldn't bring it under control, not directly, not right away. And when he tried to warn the people who were in the direct line of fire, the worm

beat him to it, and they were unable to receive his SOS messages. We have had very damaging worms; Michelangelo, these wiped disks. These were highly destructive, and that was the 1990s, so 20 years ago. Code Red and Code Red II were among the most damaging in terms of, I think, a billion dollars at the end, all told. Then we have Conficker, which is notorious within ICS. ICS is industrial control systems. That's critical infrastructure. That's very specialized equipment that is often very old, run to failure, and hard to maintain. You get something in there, it's an unpatched system that's just ripe for the picking. Now, Conficker isn't known necessarily for doing damage, but it spreads like crazy and it

has the potential to do damage and be loaded. You can mess with that code. And then, of course, the big one is Stuxnet. I'm just going to say there's a lot of great reading on Stuxnet. I told it to my kids as a bedtime story. So I present to you a very interesting perfect storm, and this is real. We've got something called ADB.Miner that's been active. It works on Android devices. We know Android is inherently insecure and that it's everywhere. It's everywhere because it is cheap. People buy Androids. It's been going after port 5555, and Bob Rudis, whom I mentioned earlier, put out an alert on August the 2nd to everybody coming out here to please make sure, before you leave, to set your

systems up and monitor, because he had seen a spike in activity of this thing. The bad guys knew that we were gonna be away. Again, misconfiguration is at the heart of the matter. This is something that we have to understand better as a community and work on, because it is an Achilles heel for us. And the thing with this miner is that it can be quietly, silently uploaded through remote access. How do you tell that you're being hit until it's too late?
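The kind of monitoring Rudis was urging can be sketched very simply: count inbound connection attempts to ADB's port 5555 per hour and flag hours that jump well above the running baseline. The event format and the threshold factor here are assumptions for illustration, not a production detector.

```python
# Sketch: flag hours with a spike in traffic to ADB's TCP port 5555.
# events: (hour, dst_port) tuples; format and threshold are illustrative.
from collections import Counter

def spike_hours(events, port=5555, factor=3.0):
    """Return hours whose count of hits on `port` exceeds
    `factor` times the average hourly count for that port."""
    per_hour = Counter(hour for hour, p in events if p == port)
    if not per_hour:
        return []
    baseline = sum(per_hour.values()) / len(per_hour)
    return sorted(h for h, n in per_hour.items() if n > factor * baseline)

# Quiet background probes in hours 0-2, then a burst in hour 3.
events = [(h, 5555) for h in (0, 1, 2)] + [(3, 5555)] * 30 + [(3, 80)] * 5
print(spike_hours(events))  # [3]
```

A real deployment would use flow logs or firewall counters and a smarter baseline, but even this crude shape of alerting answers the speaker's question: you can tell you're being hit before it's too late, if you're actually counting.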

If I wanted to bring something to play, I would definitely bring in Zealot. If I wanted to build something at a nation state level to weaponize, I would take the ADB miner and I would take what I could borrow from Zealot. I would be looking at enterprise level vulnerabilities like Apache Struts because I know that they're unpatched, they're global and that my payoff is going to be much bigger.

The beauty of it is I don't have to load crypto miner malware. I can load my malware of choice because I can get into that code. Are we playing these games and are we thinking, really thinking like an attacker in this case?

If I want to pick my botnet, I definitely look at hide and seek because of the persistence. I want to utilize peer-to-peer over C&C for obfuscation. I want the ability to set it forth and not have to worry about managing it. I need that worm replication built in. I don't want somebody to be able to mess around with it once it's out there, so I'm going to have anti-tampering, anti-evasion. And yes, I want this to be multi-purpose, so that it is going to drop whatever I need. I have the ability to bring down the eastern seaboard if I so choose. I'm not gonna do that though, Billy.

So. Aw, that's so cute. I present to you Frankenbot. So, I'm really not technical, and I've just played a lot of what if. I'm not encouraging you to go forth and make your own botnets. But if you wanted to build your own botnet: this was my project to do this talk. I sat down. I joined some dark and seedy places that nice moms like me don't really belong on, but that's okay. I have several proxy accounts. I went and looked at stuff that I had to wash my eyes out from afterwards. It was really interesting, and I'll share some of it with you. And I looked to the people who do this for a living to say, how would

I go about building a botnet, me? Because we're going to reduce this to the lowest common denominator, because that's where we should be scared. Nation states might know what they're doing, but script kiddies who want to do it for the money don't. And as we know, if you put somebody who has no license behind the wheel of a car, an accident is going to happen, and an accident at this level is going to be very costly, even more damaging. All right, so as we talked about, you need to look at things like peer-to-peer networking. You want to find a loader to be able to infect systems. You need to have a good hosting source. You can't just go to your, you know, Cogeco, or in my case Rogers. You

have to go to a place where nobody knows your name, and preferably overseas. They offer something called bulletproof hosting for that reason. The proof means nobody knows who you are, because you pay them enough money and they are silent. They pretty much are in Russia. That's why you're going to set yourself up on Google Translate and Google Russia. You're gonna build something interesting called a stub; that's your infecting file. You need to have a botnet builder to do this, but I've got great news for you: there are lots of interesting places that you can access online to find these things. I didn't even get in trouble doing so.

You will use a cryptor for your stub, because you're going to evade detection from antivirus. This is just a given. You do not want to be caught fresh out of the gate. It's like when you play hide and seek with your friends, right? You don't want to be the first one tagged. You're in it for the long run. This makes sense. You're going to go hunting for vulnerabilities. You're going to use Shodan to go hunting for vulnerabilities. That seems to work like a charm. There are a lot of vulnerabilities listed on a daily basis that you can look for. You're looking for the ones that are like CVE-2016, CVE-2017, because those are guaranteed wins. And then you need a remote administration tool, a RAT, that's going to help you

deliver the payload. These are pretty much the typical components. It's kind of like a recipe for cake, just different.

So this was one of the forums I was on that has a lot of really good stuff in it. Oh, thank you. It's not just that you can find the things you need; you can find people to teach you. Now, I've also gone on the dark web, and they are big on teaching each other stuff. Better at it, even, than we are. You can get the botnet Bible. It costs money. I didn't want it. You can find out more about who should host you and why. I did go looking, and I found good stuff on Pastebin for your stub. And then, how do you build something? UFONet. This is an entire guide to building a botnet.

And it walks you step by step through the process. And it works. And if you build this botnet, you can load it with things. Very bad things. But it works. I've included this very interesting description of how you compromise a router, because the bottom line is you're going to be compromising a router to get in, and understanding how that compromise works was nicely laid out here.

More fun stuff. Did you know you can find Mirai source code on Pastebin? It was released. It's been used in a number of variants since the release, and that's why, when somebody releases the code, we should be scared. There's Satori source code. Satori is a very nasty botnet too. I don't get to talk about it because there's only so much time. But you can get the code and do very interesting things. This was another interesting forum. Sora. Sora is very nasty, very efficient source code for enhancing your botnet features.

Do you want to put Coinhive on? Coinhive is the biggest source of mining malware out there. It mines Monero, and it gets dropped on everything. That's how you find it, that's how it works, and that's how you can load it. So I must bid you all adieu. I hope that this was entertaining as well as educational.

I will leave you with these suggestions based on what we talked about.

And when I upload my slides, these are some of the resources that I used that I hope will be helpful to you. Thank you very, very much for sharing this talk.

Are there any questions?

On one of the slides, you mentioned something called BlackEnergy. Could you elaborate on what that is? Yes. BlackEnergy is malware designed by Russia, APT28 I believe, that was used when they went after the Ukrainian power grid, to help bring it down. So it is nation state level malware.

When we're mostly talking about botnets, again, we're talking about DDoS, and most ISPs or edge providers could drop the traffic if they saw DNS requests coming from IPs that are not specifically in the range. This has been going on for, like, two or three years now, and again, the attacks have grown. Have you seen any of the ISPs or any of the edge providers actually saying, hey, we're going to stop this? Because it really takes one entity to basically stop, or at least significantly decrease, the effectiveness of DDoS. I can say that when they went after GitHub, people who had Akamai in place were protected. They took a hit, but it was a brief hit. Akamai was on

it quickly to respond. However, not everybody is able to afford Akamai, and not all ISPs are able to respond at that level either. The big ones can; the smaller ones, and there are a number of them, would fall victim.
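What the questioner is describing is ingress filtering (BCP 38): an ISP's edge drops packets whose source address could not legitimately originate from that customer link, which defeats spoofed-source DDoS traffic. A toy sketch of the check, using documentation address ranges as the assumed customer prefix:

```python
# Toy sketch of BCP 38-style ingress filtering at an ISP edge router:
# only pass packets whose source address belongs to the prefix actually
# assigned to this customer link. Prefix and addresses are illustrative
# documentation ranges (RFC 5737), not real assignments.
import ipaddress

CUSTOMER_PREFIX = ipaddress.ip_network("192.0.2.0/24")

def permit(source_ip):
    """Edge check: True if the packet's source is plausible for this link."""
    return ipaddress.ip_address(source_ip) in CUSTOMER_PREFIX

print(permit("192.0.2.77"))    # True  -> legitimate customer source
print(permit("198.51.100.9"))  # False -> spoofed source, dropped at the edge
```

Real routers do this with unicast reverse-path forwarding or ACLs rather than Python, but the principle is the same, and it only works if it is deployed at the edges where the spoofed traffic enters, which is the questioner's point about it taking broad adoption.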

Any other questions? Cool. Thanks, Cheryl. Okay. Thanks, everybody.