
Excellent. So, good afternoon. I'm Frank Artes, and this is Stefan Frei; I'd better introduce him rather than make him do it himself. We're going to be presenting some of our joint research today: defense evasion modeling. This is based on research material that comes out of NSS Labs. Just so I don't have to waste time explaining it: who's heard of NSS Labs? Sweet. For everybody else, we are a security product testing company, where we measure the efficacy of security products. All of our reports are based on empirical data. There's no marketing fluff and no judgment calls: your product worked or it didn't, and it's graded on a scale of how well it did. The data you're going to see presented today is based on the empirical data that comes out of a lot of our testing. We test security devices that should be catching and blocking evasions: IPS products, next-generation firewalls, endpoint protection products, browser security. These products are meant to detect when an exploit is being used against your workstation or your server, whether the attacker initiated it or the user initiated it inadvertently. We measure what gets through each of these products as part of the methodology for anything that's supposed to stop them. Stefan and I started looking at the
data on the actual exploits that can get through the different products we were testing, and the presentation we're going to show you today covers the discoveries we made by looking at the exploits that completely evade these products, or as we'll put it, go completely undetected. I'm going to move quickly through the first group of slides because we've got technical people in the room. Throughout history, we know that new technologies have revolutionized crime and warfare alike, right? We always have a paradigm where new technologies are adopted by those who commit crimes faster than those who defend the public, or in our case corporations, can keep up with the new attacks being generated and the new uses of that technology. Silicon is no different from tanks, gunpowder, chariots, and every other means in the past. We've all seen this. Since we're all here at BSides, alongside Black Hat and DEF CON, we probably span the hobbyist-to-expert range. We all know what a script kiddie is, so I don't have to explain that, which means I'm already minutes ahead of my presentation. So, when we look at the paradigm, we know that we
have vandalism as the domain of the script kiddie; theft and so forth are a little out of their reach. We know that expert hackers are the authors of crime kits, penetration testing tools, and other security testing tools like Metasploit and so forth. We also know this is the fastest-growing segment inside the security market. These tools come onto the market and they've basically changed everything: our lower-level, unskilled hackers now have completely automated, completely commercialized tools with great UIs. For those of us at almost our 21st DEF CON, we remember the days when that was not the case, right? If you did get a security tool or somebody's sample code, it was purposely nerfed: they removed sections of the code so you had to be smart enough to put them back in. Nowadays we see completely commercialized pieces, and we'll look at a couple of examples. Anyway, that's really changed things. Now anybody, including my sister and my mom, who know nothing about computers, can launch an exploit or malware campaign against any company, or throw a big net, without needing much skill. We know the security market is maturing. The products come out, the UIs are fantastic, and the quality assurance on these products is actually really good.
They're developed for ease of use and for do-it-yourself; that was never the case in the past. They actually come with service level agreements, which kind of blows my mind. So, implementing detection evasion: we'll talk a little about the development process behind these. Stefan loves these slides because one section really calls to his Swiss nature. First comes acquiring the malicious tool: whether they write it, buy it, lease it, or steal it, they get their hands on the tool. The next thing they do is find an evasion they can build into the tool, and then they obfuscate; in other words, they create multiple permutations of that particular evasion, or of the particular exploit in this case. To force the evasions through, they'll take the one thing they have and create 100,000 versions of it. Then comes the step of Swiss quality: they actually go and QC it. The people who are building these kits will sit there and do things you don't do in your own corporation: they run every flavor of antivirus, and unlike the rest of us, theirs is actually updated, right? They test their exploits against that, and sometimes against inline tools as well. And then the remaining, let's say, 5,000 out of 100,000, that's
what they actually go to market with. This produces an amazing, thriving underground market. Here are more examples of some of the tools, and one of my favorite parts is that they're not all that expensive. Remember when you had to know somebody who knew somebody, who vouched for you at the right time, and then you still had to come up with ten large in cash to get their tool? Now they're like 250 bucks, and half the time you can use PayPal. They have service level agreements with wonderful statements, like a full replacement warranty if the creation is detected by any antivirus within nine months. How many of you have antivirus with the opposite claim: "We will give you your money back if anything gets past our product within nine months"? That's just never going to happen, right? And we know, because we test them. So, our answer has always been layered security. We go in and say, okay, we have an on-premises section, our corporate infrastructure: servers, desktops, and a couple of layers added in. Typically speaking, that's our perimeter. If we're really sophisticated, that perimeter and all the inline network-based security goes all the way through, maybe to our core.
If we're really, really lucky and have a large budget, we have things all the way back out to the workgroup switches. So we have our firewalls and next-generation firewalls, we have our IPSes, and then we're down to what's on the host once all of our network gear is exhausted. Since today's most common way of getting infected is through your browser, the first line of defense on the host is browser security. That's why you see us testing the efficacy of browser security, which blows people's minds when we come out with findings like: Internet Explorer is actually one of the best right now at stopping malware. It just is. On top of that you have antivirus, your last-ditch effort, your endpoint protection product. Now, we have two different types of attacks in this world. There's the direct attack, the one everybody thinks we do, where we type at hacker speed, three keystrokes make fifteen sentences pop up on the screen, and we fly through virtual mazes because that's cool. We're actively attacking; it's the closest thing to a kinetic attack, so it's the easiest thing for a consumer market to understand. The reality, of course, is that the indirect attack, the attack initiated by the target, is the most common and the most utilized. So your end user, or your mom, or whoever, goes
to a website, maybe even a legitimate website, that has, I don't know, Base64-encoded JavaScript inside it, which launches something through their browser, starts pulling down and dropping more malware onto their machine, and opens up command-and-control channels; the next thing you know, their bank account has been emptied, because Zeus is the most popular way of infecting systems. Which means that when we look at the next big problem from a corporate security standpoint, it's our mobile users. That laptop goes out, and it has no perimeter security at all. Starbucks doesn't have IPSes and next-generation firewalls. Neither does the airplane you sat on. Neither does the Wi-Fi here; matter of fact, the Wi-Fi here is probably a testament to whatever you have as host-based security. So then you're left with just browser security and antivirus. So the question comes up: how effective are your defenses? We all have a lot to deal with, right? We have hackers to worry about. How do we harden and defend the mobile machines we send out? How is our data slipping through our fingers? Phishing attempts, spear phishing, casting nets, whatever analogy we're using for how broad or targeted the phishing attempt against us might be. And then, of course, you've got to deal with the snobby guy in the security or
IT department who never wants to cooperate. But that's our daily lives. So I'm going to let Stefan speak in a moment; first, a little about the threat modeling itself. Quick show of hands: who has heard of Maltego? Excellent. Who's used Maltego? All right. Who's a big fan of Maltego? You're going to see some cool Maltego transforms. Here are a lot of our transforms laid out. Maltego users will recognize the double arrows, which tell you these transforms can be run in both directions: we can start with an exploit and look at which applications it targets, then come back from an application to find the exploits against it; and we can go from crimeware kits to see which exploits are in them, or start from exploits and find the crimeware kits where they've been automated. You're going to see all of that done live on stage. Stefan, would you like to do this part? So yeah, we test security products, and for this talk we have data prepared from our tests over the last 18 months. We looked at the next-generation firewall tests from 2012 and 2013 (the 2013 data was released in February this year), the latest IPS results from the 2012 IPS test, and the enterprise endpoint protection, or antivirus, test. The next IPS
test will run in a couple of weeks or months, I think. When we look at those group tests, as we call them, each group test has a bunch of products: all the commercial products that together hold a market share of more than 80%. NSS looks at enterprise-grade products, the stuff that's relevant for enterprises, so we look at the top products. The failure rates we've seen in group tests are as high as 45%. We should pause and let that number sink in. Failure rates of products as high as 45%. That's horrible; that's an F in most schools. How about in Switzerland, would that be a failing grade there too? So, you're going to see fun facts like this at the bottom of some of our slides; we'll point some out, and we'll wait to hear you gasp at the others. Okay. Usually we do reports, and a report always gives you a static view from a specific point of view, where we aggregate things like a failure rate of such-and-such percent. When we started to pull all this data into a database and look a little closer, we were surprised to find correlation: what is missed on one device is also missed on
another device. You could also correlate the data with more metadata: okay, those exploits are missed, so where can I find those exploits? Oh, surprise: in Metasploit and the crimeware kits. So we had much more fine-grained possibilities, but it was still tied to our skill at writing SQL queries, which is good for us, but you cannot impress management or customers with that, because you always have to run the queries yourself to get an answer. So I showed him Maltego. Yeah, and I lost my research partner for about a week, because he went away, learned how to write the transforms, and didn't sleep for about seven days building the transforms that you're
going to see. They've been fixed up a little since; he had some narcoleptic issues. But we can start our demo. Okay, this will be fun; let's see what we can do. You'll notice we're also running Maltego Tungsten, which is cool. Who saw Roelof's presentation on it yesterday? The new version of Maltego allows you to do collaboration, really cool stuff. That was not a pitch. So, a couple of things about the data we're going to be modeling today, just to save you some questions and help you follow along. The products we're demonstrating are the products in our group testing. We don't charge for the group tests; we make our money from the customers who buy the consumer-style reports we put out. This means the methodology and the testing are absolutely as fair as they can be. The vendors don't get a say. Matter of fact, the vendors don't get a say in whether they're in the group test at all: we call them up and congratulate them that their product is now in our group test. We let them know when the group test is taking place, and they can pick a time during that period to come in with their SEs, if they'd like, to set up their products during the first 24 hours of the group test, or
the first two business days, I should say. After that, we formally show them where the door is and continue the testing with them out of the building. During those two days, if they're working with us on, for example, the IPS products you'll see over here, they tune the IPS. Who here runs an IPS that isn't tuned? No hands, which is good, and that's why we let them tune it. We do the same thing with enterprise endpoint protection: McAfee, Symantec, and so forth come in and get two days to tune for the methodology, because nobody deploys enterprise endpoint protection without tuning and customizing it so it's not interrupting the workstations, right? Some people look up the methodology and complain that we allow this, arguing it's not normal because their mom doesn't tune her antivirus. But we're not talking about consumer-grade antivirus; we're measuring and grading enterprise antivirus, and you can see I've highlighted some of the enterprise products over here. So our IPSes are tuned. The fun part is that we actually run the test both before and after they tune. So here's another fun factoid for you: the difference in efficacy of the devices between their recommended settings and
their best settings, after they spend two solid days tuning the device, is a 65 to 85% increase. So what you're about to see modeled live on stage is 65 to 85% better than the recommended settings. Think about that when you start seeing how many exploits we model getting past these things. The next-generation firewalls run with just the recommended settings. We do this because we contacted our customers and asked: great, you're buying NGFWs, how are you configuring them? Oh, like a stereo system: we check "recommended," we plug it in inline, and off it goes. Great. That's how our methodology is written, because that's how the products are actually used; those vendors don't get to tune. Now, if they fixed their recommended settings before they showed up, fair game, but it has to be generally available: they have to publish that code update to everybody, otherwise we call foul and go back to whatever is actually GA at the time. This is only tracking static points in time from the tests we're about to show you. What's not included here are other tests we've done; for example, we just finished and published today our breach detection group test, covering products like FireEye and so forth. Everything I'm going to show you today is inbound, not exfiltration. Imagine if I later start putting in the breach detection data: I could show you what gets past if it's
exfiltrating or opening up command and control, and I'd know which products miss which command-and-control channels and which exfiltrations. It gets a little scary after that point. This is also the live, interactive, we-are-at-BSides-which-is-totally-cool portion of our presentation. So, let's start with the 2013 next-generation firewalls. I know the screen is small, so I'm going to try to zoom in. I'll read the names for everybody in the room: we have Stonesoft, Sourcefire, WatchGuard, FortiGate, and Barracuda, with Check Point off to the right. Who would like to pick an NGFW to test? Go ahead. "I'd like the Palo Alto, please." For $300, he would like the Palo Alto. Excellent. I don't think you get a conference call with PA for $300. All right, so we have that. Now we're going to pop it into a new, clean frame. We're going to go back to where all of our products are listed and pick the next one. Come back here. This is fun with the screen here.
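What a pair of those double-arrow transforms boils down to, under the hood, is a forward and a reverse index built over the same link table, so every relationship can be walked in either direction. Here is a toy sketch under that assumption; the schema, the NSS-style IDs, and the sample records are invented for illustration and are not NSS's actual data.

```python
# Toy, self-contained sketch of bidirectional "transform" lookups, in the
# spirit of the Maltego transforms shown on stage. All records are hypothetical.
from collections import defaultdict

# Each record links an (anonymized) exploit ID to the product it targets
# and the crimeware kit(s) that automate it.
RECORDS = [
    {"exploit": "NSS-0001", "targets": "Java",    "kits": ["Eleonore"]},
    {"exploit": "NSS-0002", "targets": "IE",      "kits": ["Phoenix", "Eleonore"]},
    {"exploit": "NSS-0003", "targets": "Acrobat", "kits": []},
]

# Build forward and reverse indexes so every link can be walked both ways,
# like the double arrows on the transform graph.
exploit_to_app = {}
app_to_exploits = defaultdict(set)
exploit_to_kits = {}
kit_to_exploits = defaultdict(set)
for r in RECORDS:
    exploit_to_app[r["exploit"]] = r["targets"]
    app_to_exploits[r["targets"]].add(r["exploit"])
    exploit_to_kits[r["exploit"]] = set(r["kits"])
    for kit in r["kits"]:
        kit_to_exploits[kit].add(r["exploit"])

# Transform one way: application -> exploits; and back: kit -> exploits.
print(sorted(app_to_exploits["IE"]))        # -> ['NSS-0002']
print(sorted(kit_to_exploits["Eleonore"]))  # -> ['NSS-0001', 'NSS-0002']
```

The same index pair supports every direction the talk demos: exploit to application, application to exploit, exploit to kit, and kit to exploit.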
Because of the bad resolution of the projector, I'm having a little difficulty here. There we go. Would you like to pick an IPS? We're going to imagine that at your company you have so much budget that not only do you own a brand-new, shiny next-generation firewall, you also own a fully tuned, 65-to-85%-better-than-average IPS. Would anybody like to pick one? Don't be shy. Yeah: Sourcefire. I love Sourcefire; it just got bought by Cisco, did you see that? All right, now we have a Sourcefire. It doesn't matter which model we pick, because they all run the same engine; the difference is just how fast each one has to move data through it. It would be really bad if there actually were a difference between the products, but they really do run the same firmware, and their development process is pretty cool. Actually, most of the vendors are like that. All right, great. Do we want to pick an endpoint protection product? Anybody? McAfee, Norman, ESET? All right, I'm picking on the elephant in the room, then. That's our corporate infrastructure at this point. There are obviously more pieces, and we obviously test more pieces; this is just what I have loaded in my database, because I don't believe in doing live demos
over the internet; I don't like being embarrassed when the inner tubes don't cooperate. The first thing I want to know is which exploits, out of the roughly 2,000 CVEs we ran, get through. Except, of course, for the endpoint protection product. How many did we run there? Well, we ran around 43 exploits against the endpoint protection products and around 1,700 against the inline products. So, 43. There's not going to be a lot of correlation with the endpoint products, but when there is, it's kind of exciting; just understand that the sample there is really small. These are all CVEs, so that everybody in the room is on the same page: these exploits are not zero-days.
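The core of the model we're about to demo can be sketched in a few lines: treat each tested product as a set of exploits it failed to detect, and the exploits that evade the whole stack are the intersection of those sets, because a layered defense only stops what at least one layer catches. The product names and IDs below are placeholders, not the measured results.

```python
# Minimal sketch of "holes that align" in a layered defense. Each product
# maps to the (hypothetical) set of exploit IDs it missed in testing.
from collections import Counter

missed = {
    "NGFW":     {"NSS-0001", "NSS-0002", "NSS-0007", "NSS-0009"},
    "IPS":      {"NSS-0002", "NSS-0005", "NSS-0009"},
    "Endpoint": {"NSS-0002", "NSS-0009", "NSS-0011"},
}

# Exploits that punch straight through every layer we model:
evade_all = set.intersection(*missed.values())
print(sorted(evade_all))  # -> ['NSS-0002', 'NSS-0009']

# Exploits missed by at least two of the three layers (the correlated group):
counts = Counter(e for s in missed.values() for e in s)
missed_by_two_plus = {e for e, n in counts.items() if n >= 2}
print(sorted(missed_by_two_plus))  # -> ['NSS-0002', 'NSS-0009']
```

With real per-product miss sets plugged in, the intersection is exactly the "goes completely undetected" cluster the graph highlights.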
How many of you think somebody needs a zero-day to break into your network? Oh man, I can't bait anybody here. All right, those are the products. I'll zoom in so you can see who they are; we'll be fair. This is the Palo Alto, and that's everything it misses. It's like a Roman legion. This is the chart for the Sourcefire, right? That's what it misses. And this is McAfee. Now, it looks like it has a small group, but please remember that was out of 43. 43! It's even worse; it's like half. So basically, for those tests, we used around 1,700 exploits in total, targeting around 200 products, and when we look at the underlying vulnerability database for those 200 products over the last 10 years, our exploit set covered about 40% of those vulnerabilities. It's massive sampling. The beauty of this, and we say sampling and so forth because this was originally done by Stefan and me as an academic paper, is that it then turned into a product that we offer with modeling and so forth. The data we're showing you today is the data we used for our white paper, which is available for anybody who'd like to download it from these sites. So, in this particular case, what we're seeing, I'm
actually just going to walk over to the screen, because everything's so tiny. This group over here, those are the correlating exploits that get past at least two of the devices we modeled. Up here we see our antivirus, and we've got two exploits that get past the antivirus and past everything we've modeled inline. These will punch straight through everything we have. And then we have, literally, the outliers, right? Yeah, go ahead. Always fun when you're talking to somebody off-mic. All right. So what we really care about in this particular case, for modeling purposes, are these guys over here and those guys over there, because they punch through everything we have. I'm going to ask them to come back into a different alignment now. Because we're good people, these are not the CVE numbers; those are NSS IDs. We've masked the CVEs so we can display this at BSides and DEF CON without people running out with iPhone photos of how to get through a Sourcefire, Palo Alto, and McAfee configuration. So now, say we're doing this model for somebody: this is my infrastructure, and these are the exploits, of the ones we tested, that we know get through all of these devices. You're like, great, but do they actually target something I have on the inside? If half of these are Java-based and we use no Java inside our company at all, this is kind of superfluous data. Not to worry: Stefan spent an extra night plotting which vendors these apply to. So you can probably stop paying attention if you have McAfee, Palo Alto, and Sourcefire and you don't run
any Microsoft. You probably don't want your employees using Adobe or Trend for other reasons; obviously, the Microsoft ones prove the point. But let's find out how bad they are, because once again Stefan spent an extra night writing another transform for us. We can go through and say, great, and switch to a more fun view: the bubble view, or weighted view. In this view, as Maltego users know, the child objects grow in size with the number of parents they have. So this parent obviously has nothing coming into it, but this exploit does; it has several things coming into it. Why is this important? Well, we knew Microsoft had the most exploits running through it, and we know Computer Associates only had one. These are the CVSS scores for the CVEs that are punching through. Quick reminder: it's a scale of 1 to 10, and the amp doesn't go to 11. A 10 is as bad as it gets, and everything 6 or above is considered "stop everything you're doing and find some way to fix this in your infrastructure." The majority of the exploits that punch past that entire layered defense score from 7 to 10, right? Getting pretty bad.
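The CVSS triage just described is easy to reproduce: attach a score to each evading exploit, filter by a severity cutoff, and look at the share that lands in the high band. The scores below are invented stand-ins for the masked CVEs on screen, and the 7.0 cutoff is the conventional "high severity" line; the talk's "act now" threshold of 6.0 would work the same way.

```python
# Hypothetical CVSS scores (scale 1.0-10.0) for the exploits that evade
# the modeled stack; real scores would come from the CVE/NVD entries.
scores = {"NSS-0002": 9.3, "NSS-0009": 7.5, "NSS-0011": 4.3, "NSS-0005": 10.0}

# The "stop everything and fix it" bucket, using the conventional 7.0+ band.
high = {e: s for e, s in scores.items() if s >= 7.0}
print(sorted(high))

# What fraction of the evading exploits are high severity?
share = len(high) / len(scores)
print(f"{share:.0%} of the evading exploits score 7.0 or above")
```

On this made-up sample, three of four exploits land in the high band, mirroring the slide's point that most of what punches through is 7-to-10 material.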
Now we know what gets through our defenses, we know it targets the systems we have, and we know it's bad stuff. This isn't something that just pops annoying windows up on my screen. This is full-on... well, you know what, let's just ask it what it is. We're going to select the exploits again and ask what the CWE is. Well, it looks like the majority of them are input-validation issues and buffer errors. Probably not important, right? Yeah, I wouldn't worry too much about it. Shells opening up on my machine, data being exfiltrated, other stuff being installed: probably not that bad. So that's what we have, which is all fun and good. Now we're going to try another transform, even with such a small sample set of exploits left. If you had picked other vendors, like, I don't know, IBM, which doesn't rhyme with anything other than IBM, and some others, I would have a lot more correlation, because they have a lot more missed exploits. And remember, this is only a sample set of about 1,700 CVEs. If we ran all 60,000 CVEs, I don't think I'd want to see what that looks like. I'd probably move on to plan B, which is a log cabin on a mountain in
Montana, a long-range rifle, and solar power. So here we go: we run one of the other transforms that have been put together, where we say, all right, spiffy, let me know if any of these are in crimeware kits. Oh, we have a hint. Who would like to guess what that product is probably going to be? Metasploit. You think it's Metasploit? Who thinks it's Metasploit? It probably is. All right, let's go find it. Normally I make lots of new little pages, but what color do you think it is? I can't see because of the legend. There it is. There's another one. Oh, there's another one: Metasploit. We'll go back. This is what bothers us when you are sold "ahead-of-the-threat" protection. There are lots and lots of exploits, and here we have a small 7% that are available in Metasploit. As a defender, you never get better information than "hey, it's in Metasploit": it's open source, it works out of the box, you can just download it and test it. And still we find these misses. Yeah, the processor had to really work on that layout; all my cores
were maxed out. What do you think? I think it looks awesome; I just want a print of it. All right, that's the full explosion of what's being targeted by the whole set of exploits that get past any of these products, because the reality of the situation is that you probably don't have an NGFW and an IPS and a really well-tuned endpoint protection product sitting behind them. So we're going to go by exploits again and see what's in the crimeware kits. We'll do some more fun modeling as we reach the end, and you can toss out different combinations. Obviously they'll be hypothetical; it wouldn't be the actual configuration you use at work, so if you know who somebody is and where they work, don't assume. It's always the elephant in the room. There are 866 exploits currently inside Metasploit. You'll see later, on one of our slides, that 26% of the exploits within Metasploit generally go completely unblocked by most security products. That's like failing an open-book test. Seriously. And by the way, when we run the tests, we're not doing any extra
obfuscation. We use the exploit out of Metasploit in our testing harness exactly as it's written. We're not being tricky or mean. Now, we do have people working on breaking things, because it's what we do, but we don't bend the rules like that. If we use Metasploit against you, it's just that fast. And these are tools you've probably all seen; there's no need to spend $100K on a commercial framework. Metasploit is free, or if you want to spend $200-and-something, there are plenty of choices right here. So, back to the actual presentation. Layered security works really great, except for the holes, which tend to align. So why do defense evasion modeling? Why invest beyond the security products? Well, you're going to hear us harp on this over and over today, and you'll constantly read it in our briefs: it's people, not technology, that creates the safeguarding of your company and the assets you're trying to guard. That's not just your security staff; that's the education of the people who work in your company, whether it's a clerk or the CEO's administrative assistant. Doing this modeling helps you understand which undetected exploits get past the security products you're using today. None of us believes for a second that these security
products are foolproof. Understanding what targets the prevalent applications you use, and therefore which of your applications are vulnerable, is important for two reasons. Number one, it may not mean you need to go out and buy $10 million more of some other inline security product; it may mean you're going to spend $100,000 in operational costs, change your policies, procedures, and practices, and actually address the problem, because you now know there is a problem. You're not sitting in the dark going, "Man, we got hacked. It must have been a zero-day with an APT. China clearly came after my company." No, it was some 12-year-old who bought Eleonore for 250 bucks, scanned the internet, infected a website your people went to; the exploit hit every one of your workstations because nothing you had inline stopped it, and now they have all the data from your company. They have all the things. And second, understanding what's already automated in those crimeware kits stops making the job of that 12-year-old so easy. Right in the center of that Venn diagram, you finally have the information: what gets through my security, what targets my applications, and, out of those, which are critical. The critical ones are the most viable, meaning
the most automated of them. Here are some examples of the way we model some of the data. The green dots are undetected exploits within Metasploit, including these other green dots over here; you'll understand in a moment why they're a darker color. Out of all the undetected exploits across all the products, of the 866 Metasploit exploits, as many as 26% can go undetected by some products. If you could get your vendors to concentrate on doing QC with nothing more than Metasploit, some of your products would be 26% better than they are today. That's pretty steep. Next, exploit availability in the crime market. Here we took a number of
NGFWs and IPS's. These are the common exploits that they that go undetected past those products that are already in prime markets. So here we point out Phoenix and Elanar. Of the 117 exploits attributed to popular crime kits, 43 are undetectable by 39% of the detection engines that we've tested. Remember that's a representation of more than 80% of the active market. We're trying not to make the math hard. It should be scary. Targeted programs. Here we've taken and the gray dots actually represent the programs. The green dots once again are undetected exploits. They're the software vendors that we've mapped out from the various different CVEEs that we tested. We decided to show you Oracle and Microsoft. I mean, it's
no surprise. It's not that they're bad, but if I were writing exploits and automating exploits, wouldn't I be picking things that have a huge install base? That's exactly what it is. Or the opposite. Yeah. Vendors always tell us, yeah, we can't block everything, but we prioritize according to criticality, what is important for our customers. Okay, I understand missing some exploit against a script that I wrote and published 10 years ago, or a backup program that nobody uses anymore. But if you're prioritizing your patching and the security and effectiveness of your product, why are you mostly missing Oracle and Microsoft? It's not like that's not what your
customers are using. There's no zero-day data here; it's all public. Yeah. Here we show correlation of detection failures. So once again, we've taken all the products that we tested that are in this database, and we've pulled out all the exploits that they've missed. And again, this is the weighted bubble view. So there's the little tiny green dot that's undetected by one IPS, and then the big giant green dots: the bigger they are, the more IPSes miss that particular exploit. You'll see some pretty big dark ones. And of course, it's Maltego, so the gravity system works. So we just released a paper called Correlation of Detection Failures. What we did: we used the data in here from next-gen firewall, IPS,
and EPP testing, and we made pairs of products, each product against every other one. So this is all the combinations. We removed duplicates and ended up with 606 pairs of two products, and then we measured how many exploits get through. Of those 606 product pairs, only 19, or 3%, managed to block all exploits. I don't know about you, but I have run security for little tiny companies like Electronic Arts, which has $5 billion in revenue, and Deluxe Entertainment, which, if you ever watch movies, at the end there's that big red circle that says Deluxe, it's that company. They're like some tiny little privately owned $4 billion a
year company, and I did not have enough money in my budget to buy all 36 IPSes on the planet and put them inline, hoping I'd land in that 3%, right? So nobody can buy every single device. This one is always confusing to me, so I'm going to let you explain it. Okay, this is again correlation of detection failures. Here we look at the data from our last next-gen firewall test. So we have tested devices F1 through F9, and on the vertical axis are the top 11 targeted products, the ones which collectively had the most exploits not detected. What we see here for Microsoft: 62 exploits out of 126 evaded at least one of those devices, and of the total of 600 exploits, 107 were missed by at least one of those next-gen firewall products. On the other hand, the ones in the red circle show that none of those products managed to block all exploits against those vendors. So when you correlate it, we find a very, very strong correlation in the test data. So if you do risk management and have no better data, say we have two devices: device A has a failure rate of 10%, device B has a failure rate of 10%. So what's the failure rate if I
put them in series? Most people go, "Oh, it's 10% multiplied by 10%, so 1%." That would only be true if there was no correlation, absolutely no correlation. Unfortunately, it's still 10%. Almost 10%. So, show of hands really fast. This is what Stefan likes to put together, right? When we first looked at the data, this is what Stefan went running into our CEO's office with, and their eyes glazed over like donuts, right? Versus Maltego, which, like, made the message better, right? I love this stuff. You're awesome. I could never make myself do that. All right, so some of our conclusions and recommendations, and then we'll go ahead. We have some time left.
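The serial-deployment arithmetic above can be sketched numerically. This is a toy simulation with made-up miss sets (the real test data is not public): when two products that each miss about 10% of exploits fail independently, stacking them gets you near the textbook 1%, but when they share the same "hard" misses, the pair stays close to 10%.

```python
import itertools
import random

random.seed(1)
N = 1000                                   # hypothetical exploit pool size
exploits = range(N)

# Correlated case: most engines miss the same 80 "hard" exploits,
# plus ~20 product-specific ones, so each product misses roughly 10%.
hard = set(random.sample(exploits, 80))

def correlated():
    return hard | set(random.sample(exploits, 20))

def independent():
    # Each product misses a fully random 10% of the pool.
    return set(random.sample(exploits, 100))

results = {}
for label, make in [("independent", independent), ("correlated", correlated)]:
    products = [make() for _ in range(10)]
    # In-series deployment: an exploit gets through only if BOTH devices miss it,
    # i.e. it sits in the intersection of the two miss sets.
    rates = [len(a & b) / N for a, b in itertools.combinations(products, 2)]
    results[label] = sum(rates) / len(rates)
    print(label, "mean miss rate of a product pair:", results[label])
```

With these made-up sets, the independent pairs land near the textbook 1%, while the correlated pairs never drop below 8%, since every pair shares the 80 hard misses. That is the pattern the talk describes in the real test data.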
We'll definitely open it to Q&A. We'll do some modeling for you. We'll do whatever you want, year over year. We'll play with the data, make it scary, take screenshots, take them back to the office. Yeah. Any plans to release the data? Um, yes. It's going to be added as part of our subscription service, but it will not become public domain. We're still wrestling with some of the little nuances. Like, if I create this modeling tool and the transforms, you'd have to own Maltego, right? And then you would be able to download our transforms and utilize our transform database on the back end, much like Packet Ninjas' social networking system, right? So I've actually been speaking to Daniel Soie about how we would copy the same kind of model. The niggling issue that we still have, though, is if I put in the real CVEs, because I'd have to do that to help a consumer that was using this, right? Otherwise my phone's going to be ringing where somebody's like, "What is 917642?" Okay, that's CVE-07-142 or something. I've got to make sure that I'm not now supplying the offense. I mean, clearly using this offensively is not like a brain twister, right? If you told me my target has this, this, this,
and this, in under five seconds I can tell you what malware kit to go get and just launch an active campaign against them that has a 100% chance of reaching its target. Oh, you have no idea. Like, I've taken Sploitego and Maltego and hooked them together. Sploitego is a module inside of Maltego that does pen testing. So when it uses Nmap to fingerprint the devices, I wrote a new transform that said, okay, take that fingerprint. I Nmapped and fingerprinted everything inside the lab. And then there was a transform called "does NSS have this?", and it popped out the actual name and version of the product. And then you went right-click, show me the exploits. Right-click the exploits, show me what crimeware tools have this automated, because I'm lazy. And I was like, I just did my pen test in 45 seconds and I'm going to bill you for the whole two weeks. Done, you know. So again, that's the only thing holding us up right now: figuring out how we do this without handing everybody a tactical nuclear weapon. So, conclusions for us. I don't care if you stay with your accent. Okay, there are two sides. This is what vendors and the marketing slides claim, and this is what we find in our research. So we not only test the security, we also test the robustness,
the stability of the product. So, "buy our 10-gig device." Well, that's nice, but when we test it, at 3 gigs it's a switch. This is the kind of information we have. So more often than not, the statements from the web are very, very exaggerated. We have some where it is the opposite, but for the majority it's exaggerated: the speed, the security, or the performance. And there's also a trade-off between speed and security, and out of the box they go for speed. Yeah, the devices are generally almost always configured for speed, especially inline devices, right? Because you can't have it stop and DoS your own network, right? Then it becomes a leveraging tool for me, the attacker, right? But of course, like Stefan was saying, when we do the performance testing, we'll find, you know, a 20-gig device at a very premium price that only really works in mixed data traffic as a 5-gig device. It's one massively expensive 5-gig device, right? You may have seen some NSS employees, and sometimes we give out the t-shirts. One of our t-shirts says "it's hard to argue with a shell," and then it's a picture of the Microsoft shell. So when we run Metasploit, we pop shell on the machine. We will commonly have them go, yeah, we looked in our logs, and it says that exactly at the time you said you launched it, we saw it and we stopped the exploit. Really? Hold on. Boom. So the shell just opened again, right? You saw that? Yeah, but it says in my logs. Okay, hold on. There, I did it again. There, I did it again. So now we just wear the t-shirts and point to them. You can't argue with a shell on the machine. I don't care what your logs say. Prevention is limited. No product or combination of products provides 100% protection, which means you should assume you are already compromised. If you believe you are not compromised, you're probably in the wrong business. And well, security is a market for lemons. You probably have a pretty bright future either
way. Knowledge is power. Defense evasion modeling, DEM, is critical for understanding your threat landscape. Understanding what gets through, to me, especially when I was a CISO, would have been far better than walking around every day going, "God, I wonder what's going to hit me in the back of the head. I wonder what's going to hit me in the back of the head. I wonder what's going to hit me in the... oh, the CIO has the wrong look in his eye. This is going to be bad." Right? That was a pain. People, not always technology. We can't stress this enough, right? Technology can only detect what has been modeled before. We need people to recognize new threats and attacks. As we have seen, some adversaries are very, very agile. They're very good at coming up with new attacks, attacks out of nowhere. So if you don't have the right set of people who can recognize them, who can make sense out of pcaps or other observations, you lose. And we're not saying outsource, right? We're not saying just build an army of drones. We're saying find highly qualified, very well paid, very well taken care of security professionals. First class hires first class. Second class hires third class. Third class people are not going to keep your network any safer than the products that you buy, believing they're going to stop not only zero-days and APTs, but
things that people haven't even written yet. Because if I go back to some of these transforms and, just for fun, ask what year those exploits came out, you're going to find that telling me you can stop a zero-day that hasn't even been written yet is absolutely a joke. Wow, it actually, I think it showed my computer. Oh, 2005. Welcome to the party. This one back here, 2000 as well. Yeah, these are old bugs. Still no need for zero-days, right? I'll leave this slide up here even after we're done talking. These are some links to NSS's website where you can look at a lot of these white papers. They're free, as a lot of our papers are. Go read them, download them, use them around the office, point to them instead of Wikipedia when you're having an argument with somebody, because that's actually tested. Unless you're one of those guys that sends me Wikipedia links when we're having a debate, and then I just laugh at you and stop answering. So, great. Any questions, or anybody want to model
some? Yeah. So, this type of analysis is really good at detecting how well a vendor performs, but it also seems... Oh, sorry. Hold on. Sure. No, it's okay. I can speak up. But it also seems really useful to see the stuff that is getting through and use that as prioritization for your security team's remediation efforts. To that end, it seems really clear that Metasploit stuff comes first, stuff you should fix. How would one get, or what is an efficient way to get, the other exploits that are not that readily available from the black-market kits? Ah, we have another piece of research that we're not presenting today, but we did present it yesterday at Black Hat, that we call threat forecasting. So in that case we actually have, well, calling it a honeynet is just wrong, but it's the easiest way to get everybody to mentally visualize it. We can take your gold images of your servers and workstations and so forth, and actually put inline the physical devices that you use, the endpoint products that you're using. Those systems then crawl the internet to malformed websites and so forth. And then we actually take those exploits and run them against your stack and your gold images and provide you with a threat feed. There are many threat feeds on the planet, but this one is actually your stuff, right? And we can even tell you what would be the difference running your stuff in North America versus running your stuff in China, versus Iran, versus running your stuff with a different language pack that happens to go into North Korea. So we can slice and dice that. It's actually our testing rig that we use to test all the other products; in this particular case we finally turned it into a product. It's another thing that's come out of NSS going from a pure testing lab to now bringing on researchers that are taking what we're doing and figuring out different ways of reusing that data to enhance and
make security better. Yeah. Do you have a transform that maps to patches? Unpatched, fresh systems? No. But what we do have is the ability to show the CVEs, and then we could probably make a transform very easily. It would take you to the CVE entry on the database website, and then you can read all about the patches and so forth. Right. Especially if it's Microsoft, right? They'll be like, here's the patch, click here to get it, this is the info, or go into the registry and change this key, or whatever. You won't find that in the modeling tool, I don't think, ever. It's just superfluous, right? It already exists out there; we'll just point you at the source. No need to reinvent the wheel. Yeah. So, you guys do a lot of performance testing. I'm wondering, and you touched on it a little bit, what the intersection is between real world, how firewalls are configured, versus exploits. Because you did a really good job of explaining how you let their tuners come in and tune it. Most of the equipment that I encounter has serious operational configuration issues, right? And that was even with someone you thought was sane looking at it, right? Like, I had to make hard decisions because, you know, I have five minutes to do this. So is there a
way to test that? Like, this is a real-world test, but in the real world, in the data center, this thing was set... We actually have the data sets for the recommended settings, which so many people just go ahead and run with, because they're afraid of causing an issue inside their own company when they stop something from working, especially with an IPS in front of users. Yeah. We just don't have that on the laptop. If you want to talk about ugly, 65 to 85% more dots appear. Well, I think what I'm talking about as real world is actually sub-recommended. Oh, yeah. No, absolutely. We could, if you had a customer, or it was you yourself, and you said, "Look, this is how we've configured our Sourcefire." All right, great. Send us the configuration file. Guess what? We have plenty inside of our laboratory: everybody's IPS and NGFW. So, yeah, we could then rerun it and provide you with that data set. That would actually be very valuable, because you could see your initial recommended data and then say, here's where you measure up. And we're not trying to scare you, but it's significantly different. Yeah, that's a really amazing point to make to somebody, that we could actually take our data and say, look, this is the best-case scenario, where you probably should be, and this is where your device measured. And best case wasn't fun. Yeah. Best case was not fun. So, great. Anybody else? Yeah. Just because I'm curious, what combination did you find to be the best? To be honest with you, that was actually one of the better ones. If you take Check Point and Sourcefire, or you take Palo Alto and Sourcefire and so forth, when you start picking the products that constantly score massively well, up in the high 90th percentile, and put them together, that's doing well. Again, that's 2,000 CVEs. There's like 60,000 in the CVE database. So, how willing are the vendors? Cuz
this seems kind of like a new thing for you guys, going into this exploit stuff. How are they dealing with this, in terms of positive feedback and improving? You showed them this stuff. So, the question was, how are the vendors dealing with, you know, this new way of measuring the efficacy of their products? We bring them in, we show them this. They're very receptive. You know, some of them, I mean, it's no secret, some of them will come to us and ask about our product, right? And we're happy to talk to them, because at the end of the day we just simply want to see security get better. It's not like we're a priesthood. That's what we do. We definitely get paid money, but at the end of the day, everybody that works at NSS, they're security researchers, pure and simple, right? We want to see the products get better, because it means we get to make a better, harder test, right? So as soon as they get better, I just make the test harder. But right now, I can't make the test too hard, because it's like picking on the slow class. So don't quote me on that one. So, yeah. Anybody else? Yeah. So, one of the things that I've had an idea for, and I'll speak loud so everybody can hear, is there are a lot of different kinds of macro approaches to doing security, right? Like, what works best: hardening your system, or putting antivirus on it, or whitelisting, or things like that. I missed part of the talk, so maybe you touched on it, but have you looked at doing that? The idea that I had was to get a sandbox environment, run a bunch of malware on a bunch of different systems, and see, okay, this one's configured this way, this way, this way. Which one? Yeah, I think I have like three minutes, and it would take me longer than that to answer. I'd be happy to talk to you as soon as we're done. Anybody else is welcome. I'll answer that question in
my presentation. Excellent. I don't want to steal it. The answer is nothing. Nothing. And that is the answer. But the cool news is, if you do model it out, then you get really scared at how big the nothing is, which is what we do every day. So I'm sure the presentation that he has will explain that as well. What about, like... the numbers you have are great, and it goes back to 2005, and that's really important to show those CVEs, but what about the last 30 days? Like fresh, still-hot-off-the-presses attacks getting through, because you're doing a lot of work on this. So, this was an academic research project, so we had to have a control variable: CVEs. The modeling that we can do live, because of our BaitNET infrastructure, we actually can show and model what is live happening on the internet right now. There we're discovering zero-days constantly. We have no real way, other than giving them NSS IDs, of enumerating them. But yeah, it goes from bad to worse. Now, when we do a lot of our live efficacy checking, we are constantly using either CVEs or newly recognized zero-days, right? So if, say, Duqu doesn't necessarily have CVEs and so forth attached to it yet, we'll still run it in the way that we've seen it packaged on the internet, and we'll just mark it that way. Yeah, the data just gets better over time with that. Absolutely. Yeah. And then you see a lot of those old things, they just come repackaged, literally, right? They repack them, they recompress them, they retransmit them. So you wrote the signature to say, fine, if it comes across HTTP and it's this, then stop it. And then they're like, "Oh, okay. Well, that's fine. I'll just send it as an email attachment, because you're not watching SMTP. Or I'll put it on an encrypted website, so now it's HTTPS." Congratulations, your inspection engine completely missed it, because it wasn't looking in the HTTPS stream even though it was decoding it.
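The channel-hopping evasion being described, the same bytes delivered over a protocol the rule doesn't cover, can be sketched as a toy example. The payload marker, function name, and "rule" below are all made up for illustration; this is not any vendor's actual signature format:

```python
# Toy illustration: a detection rule scoped to the HTTP stream misses
# the byte-identical payload arriving over SMTP or inside TLS.
PAYLOAD = b"\x90\x90EXPLOIT"          # made-up marker for one exploit

def http_only_signature(stream_type: str, data: bytes) -> bool:
    """Return True if the rule fires (exploit blocked)."""
    # The rule author bound the match to HTTP, so any other channel
    # carrying the identical bytes sails straight through.
    return stream_type == "http" and PAYLOAD in data

traffic = [
    ("http",  PAYLOAD),               # original campaign: caught
    ("smtp",  PAYLOAD),               # same bytes as a mail attachment
    ("https", PAYLOAD),               # same bytes after TLS decryption
]
for stream, data in traffic:
    verdict = "blocked" if http_only_signature(stream, data) else "missed"
    print(stream, verdict)
```

Scoping the match by content rather than by delivery channel is the "same rule with one change" point made just below.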
It was only looking for that exploit, by definition, in the HTTP stream. I mean, we see that all the time, especially with products. It kind of blows your mind: it's the same rule with, like, one change, and still it would work. So, great. Anybody else? We'll be around. Come hit us up. Thank you very much. Thank you very much.