
2014 KEYNOTE - Jesse Burns - How bleeding edge software can be safer than long term support

BSides Manchester · 1:00:13 · 117 views · Published 2015-10
About this talk
Why the future of secure systems lies in taking the risks of new features and managing the risks of the latest versions, rather than "playing it safe" on stable old versions. How to keep systems secure despite software needing frequent patches, and how to get away from the failed mindset of the reliable old version and the software appliance. The talk is about how we are moving towards fast patching and away from long-term support and software-as-an-appliance. I will suggest we need to accept a lot more risk around applying updates, and be able to quickly regression-test our most important business functions well enough to stay on the bleeding edge versions of software.
Transcript [en]

My name is Matt, for those of you who don't know me. I set up B-Sides London, or I'm one of the original founders of B-Sides London. I moved to Manchester last year and I thought, we need a B-Sides event outside London. I'm really amazed at how well this has turned out. But it's not just me. We set up a B-Sides Manchester CIC, which is a halfway house between a charity and a limited company, so we're now officially a not-for-profit organization, with the view of hopefully arranging more events in the future. But like I said, I'm not the only person that helped organize this event. In fact, I have to give a lot of thank-yous to a lot of people. For example, Mark Turner and Lloyd Brough, who

have been working diligently the last month or so. In fact, I don't even think they've been doing their day job. There's also some other people I'd like to thank as well. I'd like to thank the marketing team, Kirsty, Amina and Kate Clark who isn't here today. Thank you very much for helping out. All the volunteers and all the sponsors as well. You're going to hear more from the sponsors. A few logistics. If you need the toilets, they're downstairs. We're not expecting a fire alarm.

Lunch is served at 12 o'clock. It's sandwiches, etc. The usual kind of stuff. At 11.30 there is a Women in Security round table held in the cafe, which is through the doors on the other side of the room. In fact, if you want to use that space, feel free to go in there and hang out. The cafe is not open, but Women in Security will be having a round table in there at 11.30. We're going to kick off.

So I'm going to hand over to our compere, Javvad from 451 Group, who we have to thank because they let him come here today and tomorrow to give us some of his funny jokes. If you don't know Javvad (yeah, I can tell from the laughs some of you do), just search for CISSP. I'm missing out a word in between there because it's slightly naughty, but you'll find him on YouTube. We're going to kick off the event, so thank you very much and enjoy the rest of your day.

Thanks for that Matt, no pressure I suppose. So welcome everyone, I'm Javvad Malik and this is B-Sides Manchester. This is a far better turnout than what I was expecting. If any of you are a student and you're waiting for biology or something, you're in the wrong room. One thing I love about Matt, I was with Matt when we started off the first B-Sides London, and two things: he's avoided saying the word community, because he always pronounces it community, for some reason. And the other thing I love about Matt is everything he says sounds like a question. But anyway, we've got a couple of days ahead and this wouldn't be possible without our sponsors, and for legal reasons I've been told I

have to mention them all by name. We'd like to thank NCC Group, PortSwigger, Pentest Limited, Pen Test Partners, KPMG, Netitude and IOActive. So can we get a round of applause for all our sponsors?

Now you can abuse them, just get the free stuff, and you can get on with it now. I had a joke written, but I've forgotten it now. I'm going to gloss over that. So to kick off, we've got a fantastic keynote speaker. I was told to say that; I believe he's fantastic. He flew all the way over from San Francisco, taking two flights. I don't know if any of you have ever been to San Francisco, but we were just talking about how they've got a pretty bad health problem there. This isn't a cheap shot, by the way. But because of that, Jesse was saying, hey, Manchester's really nice. It's clean. I haven't smelled urine anywhere. It's, you know, yeah, Manchester's a great place. You're

terrible. So he's here to talk about some bleeding edge technology.

I think I'll hand over now, so as not to eat into his time. Enjoy. That's it. Is this anyone's first ever B-Sides?

Okay, B-Sides is a bit different from every other thing that you've probably been to, if you've ever been to an Infosec or RSA or a corporate conference: we like interaction. It's all about the community. The great thing is the talks are always brilliant, and we try to select good speakers, but it's also about the conversations you have out there and in here. And the other really important thing is making a lot of noise on social media. Because if you make noise on Twitter, then everyone thinks, oh, that looked like a really good conference and I wasn't there. So I'm going to kick things off. The hashtag is bsidesmcr, and before I hand it over I'm going to take a selfie.

Thanks, I'm Jesse Burns and it's a pleasure to be here in Manchester. And it's not just the olfactory experience. I've been walking around a little bit and hanging out with people. We had a great event last night at the, what was it called? Freedom Tech. Anyway, some good demos and stuff. I'm sure this is gonna be a great conference. I'm from San Francisco, as was previously mentioned. I'm a founder of a company called iSEC Partners, which is an NCC Group company now. I do computer security work. I work with a lot of different firms. Most of the firms are companies that think of security as a key differentiator for their company. So they're building platforms, mobile

phones, operating systems, social networking systems, all kinds of stuff. And working with a combination of people that are making the platforms and the libraries, from different groups, I see them sometimes interacting in this weird way, where everyone's trying to build the next version of their platform and trying to learn the lessons from what went wrong with the last one. And they really know, because those bugs that you just see as CVEs, maybe you say, oh, that's bad; to them, those are things that kept them up for a weekend and caused them to miss their kids. They really care about that stuff, and they get really thoughtful, and they have a deep

understanding of their systems. They're working on the next generations of their platforms, trying to make the next generation more secure and trying to systematically undermine the vulnerabilities that were ruining their lives. And then they look over and they see the next guy's product that they're using as a dependency, and they think, well, for my next generation, I should probably integrate with their last generation. Because that thing was the secure thing that I can rely on, that I know how to deal with, that I can trust, and it hasn't had a vulnerability in years, and I feel really good about that. And meanwhile, the other developers see them targeting the version that they secretly know is not the version that they would use if they were going for

things. And this little contradiction is what I'm kind of referring to when I talk about the bleeding edge. So, this is a little outline of my agenda. One thing I need to point out is that this talk is about something a little basic in keeping systems secure. I just want to consider this carefully, not dabble meaninglessly in jargon. None of us want to be this person or take this person's advice. Words like cyber are very tricky; they're broad, a little vague. They can be very useful, but when an expert starts talking, they should probably try and get specific. I have a bit of a challenge here today because I'm talking at a high level. I'm not talking about your particular

system. I'm talking about the idea of using more cutting-edge systems, not sticking with long-term support, or maybe just moving from long-term support version to long-term support version. Because I can't tell you, oh, you're using this particular Apache stack, or we're talking about your particular OS X configuration. That's gonna be fundamentally a little vague, so I apologize in advance if I have to be a little bit too cyber. But in the spirit of cutting to the chase, let's start with the takeaway. This is really my "in conclusion" slide of what I'm arguing for. The three things boil down to: plan to upgrade working systems. That's a really hard thing, but when you have a system that's in production and it works and it's important

to you, you want to at least aspire to keep it running on the latest, greatest thing, even though you don't know if it's really gonna work, and you do know that it works on the existing one. Know how to know if a configuration of your system works. Usually what that means is regression testing. Having a set of tests, and this is what you'll see in companies that are really good at software and can really turn around fixes fast, is they know how to test whether their system still works after some change. They change something that shouldn't affect anything, and they check all the important things to make sure that those still happen.
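To make that concrete, here's a minimal sketch of the kind of gate he's describing. The endpoints and expected responses are hypothetical stand-ins for whatever your critical business functions actually are:

```python
import sys
import urllib.request

# Hypothetical critical endpoints; in reality this list comes from the
# business functions you cannot afford to break.
CHECKS = [
    ("login page", "https://staging.example.com/login", 200),
    ("checkout API", "https://staging.example.com/api/checkout/health", 200),
]

def regression_gate() -> bool:
    """Return True only if every critical function still works after a change."""
    ok = True
    for name, url, expected in CHECKS:
        try:
            status = urllib.request.urlopen(url, timeout=10).status
        except Exception as exc:
            print(f"FAIL {name}: {exc}")
            ok = False
            continue
        if status != expected:
            print(f"FAIL {name}: got {status}, expected {expected}")
            ok = False
        else:
            print(f"ok   {name}")
    return ok

if __name__ == "__main__":
    # Exit non-zero so a deploy pipeline can refuse to promote the change.
    sys.exit(0 if regression_gate() else 1)
```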

All the stuff you'd get fired over if it broke. And then finally, planning to make things better. Enhanced mitigations: things that aren't functional but do things like add address space layout randomization or break ROP chains. The plan shouldn't just be, hey, we're gonna roll out this new feature; sometimes you have a release where, hey, we're gonna roll out this change that isn't user-apparent at all. It's just an upgrade to some component. And if there turns out to be a bug, great, now you've got the whole chain ready. So, I also should

say a little word about the important systems for your business. It does not make sense to do this for everything. This is expensive; we're talking about investing more in the security of the system than you absolutely have to. So this probably isn't for your internal World of Warcraft server or the SharePoint that is only used by a couple of people to plan parties. This is for the systems that are critical to your business. And those will be different for every business. If you've got a startup, it might be really easy. You have a mobile app, that mobile app is the only thing you're known for, that's the system. Maybe there's a couple of servers it talks to, those are the systems. If you've got a giant

financial corporation, it's got thousands of systems, and you're gonna have to pick and choose which ones you actually care about. So this is a bit about making sure that expectations are aligned, in terms of what the CTO thinks they're investing in, what the engineers think they're investing in, and what the methods are. And your dependencies, of course, are all the platforms, libraries, and other systems that you rely on. Alright, so let's talk a bit about what kind of software risks we have to accept. Let's start with the obvious, which is that software systems are never totally secure. Getting something totally secure is a completely irrational goal; you would waste all your money if you tried to do that. So, you have to

accept risk, and you have to know what risks you're accepting. Because we're going to acknowledge that we can't eliminate risk, we're gonna plan to handle the kinds of risks we expect, and when something unexpected happens, it's gonna suck, and we'll try and work around it. But hopefully the unexpected things won't be of a class that we should have expected. So everything gets broken, even Chrome OS, which is this fantastically updated, minimally functional operating system. The Linux kernel has had tons of vulnerabilities even this year, and Chrome OS has one at its base. Every platform has bugs. It's just a natural part. Anyone who doesn't expect serious bugs in any dependency they have isn't paying attention. Credit card processors, you see these breaches all the time. Banks, the sandboxing systems for

popular applications, crypto libraries, the mail spools of security experts. So we know that we're not gonna be perfect, and we're gonna have to just plan for that. Governments and humans create more risks all the time. Everyone decides that social media is a great thing, and they expose themselves to a bunch more risk, create a bunch more attack surface. Governments do different things that create risk for companies, like creating notification laws, so that now, even when you have a breach where you don't think the information's ever gonna come out, you're forced to tell everyone and warn them, which hurts your reputation, even though maybe you could have swept it under the rug. When you pay the ransom demand, you still have

to notify everyone that you paid the ransom demand. And then of course now we have weird things like maybe the government is lying to us about cryptography and trying to break into our data centers. So we want to minimize the worst risks, plan to handle the expected problems, and then avoid losing our jobs to some kind of catastrophe. This user expectation change is another aspect of what we're dealing with here. Users used to expect very little in the way of privacy from computers. Any app that runs on a computer can do anything that any other app can do. So you're running a Tetris game, it's gonna read your email. And nobody thought that was a security hole. They just expected that, because you're logged in as you. And so

that game can read your email. Well, on mobile, the expectations changed. Users can run an app, and if that app turns out to be malicious, it can't read your email. And that's pretty cool. But there's other changes, like users now might not want to keep data in some of the great spying powers. We were pretty used to that with mainland China, but now the UK and the United States are on that list for a lot of people. Not in my country, but over in Europe a lot. And then we're starting to see things like app stores as lines of defense, and maybe there's a marketing model in that.

Also, we've had Heartbleed, and Heartbleed really undermines my argument in a weird way, because of course it didn't affect the old version of OpenSSL.

So yes, you can all look smug right now. But of course, I'm sure you're all familiar with the fact that if you're running 0.9.8, most of the people that I know who are running it do it because their old embedded system is compiled with a crappy old version and it isn't being maintained. And I know that you guys all patched CVE-2013-0169 last year, right? Because that was a timing bug that affected all versions of OpenSSL. So, oh, nobody patched that, eh? So, you know, don't be too smug. We're not ever gonna be able to eliminate risk completely, right? What we're trying to do is find the path that minimizes it. And of course there's another bug after Heartbleed, also in Open

SSL, that also needs to be patched and also affects 0.9.8. You might get lucky by taking even a completely random approach; what we're trying to do is find a path that's better than random. All right, so what we know for sure. We're gonna have to patch software. It needs to be nursed. Without patching, software kind of turns gangrenous. It needs to be kept up to date. Nobody knows how to make a web browser. It's too complicated. So what they do is they make something that works kind of like a web browser, and then they fix it. And this is very different from the physical world. It's very confusing for people, when people say, well, shouldn't I be able to fire the team once they

create the trading system? Can't we get rid of that cost now? We fired all the people who built the building. Why do we need to keep these guys on staff? And it's just a hard world. So everything gets broken. Users are starting to change their expectations. Another example is maybe telephones. A couple of years ago, people would think of a feature on a telephone that remotely wiped that phone or killed it as a backdoor. It would just be straight up like, I found out that the government was putting backdoors into phones to kill them, or the phone company, or the OEM. People thought that sort of thing was just

absolutely not acceptable. And now, in the States, there's laws that people are debating to actually make it mandatory to have that functionality, because people have realized, after they all bought very expensive smartphones and had them stolen (I had one stolen earlier this year), that hey, maybe it would be nice to take away some of that incentive, and maybe that backdoor would be a really nice thing to have. So, you know, expectations shift and we have to adapt our businesses.

So where are we gonna draw this bar of reasonable software expectations and risks? It's totally subjective, unfortunately. The idea of risk acceptance: I can't say it's the same for me as for you. Edward Snowden is a fugitive, right? He's got a very different idea of what risks he wants to accept than hopefully you do. You've got things like what your vendors offer. If they offer a security control and you don't use it, then you're gonna look pretty bad when there's a breach and that control could have prevented it. So that might be one bar. What do your competitors do? Same kind of idea. If you figure out what customers have already accepted in terms of risk, that can be a

powerful indicator for you. But the key thing is to exercise reasonable judgment. We always wanna be able to say, hey, this is why I did this, this was the trade-off, and yeah, it burned us this time, but I still think it was the right bet. And as long as you can kind of justify that, and it wasn't catastrophic, you might not keep your job, but you'll probably do okay in the interview for the next one. And of course, in retrospect, we can always say that we could have spent more, and we always suspect that we spent too much if there wasn't a horrible problem. So that's a pretty dangerous, tricky little balance to strike.

So, planning not to have a catastrophic breach would be great. That would help us keep our jobs, but we know that nobody's perfect, so we have to handle all the expected types of breaches. And that means problems with software we deploy, dishonest internal employees, and when we have a dishonest employee, we'd better be able to go back and characterize what the full extent of their bad behavior was, and point to the fact that we abided by the principle of least privilege and didn't expose that person to more than they needed to have access to. I think you'll find that in a lot of internal networks, least privilege is not a well-practiced rule, right? So the more we can do

around that sort of thing, the better. There are also times when we're gonna have to be able to work really effectively for things like handling child abductions, right? Nobody wants to bail on a security front when you've got to protect a kid. We have to be really clearly on the side of getting that solved. And then sometimes we have to do almost the opposite, where we're handling things like government spying, so that we make sure that it's hard for the government of Tunisia to steal the Facebook passwords of people who are using that site to organize. And that's why the login of certain social networks has been put under SSL, because it's not a fun feeling knowing that's what you have to protect against. So, it's very

tricky though. When we have special problems, like information that's especially sensitive, we need to have special protections for it. So financial data, medical data, stuff related to minors, stuff related to certificates. We have extra protections that can be as complicated as hardware security modules and multi-factor authentication, and as simple as re-authenticating before you change your password. There's a whole variety of mechanisms. So, do you think it's lucky that Heartbleed only affected version 4.1.1 of Android, which was still millions of devices? Or do you think that's because they took some special precautions? Those guys compiled it with the OpenSSL no-heartbeats flag, and sadly, when I looked in the commit log, I found a commit saying

that it fixed compatibility with something else, not "I'm hardening this with incredible prescience and foresight." But this was a feature that wasn't really necessary, and that kind of hardening, I mean, it was probably easy to get that through code review, because everyone wants to see attack surface diminished, and things that you aren't using don't need to be exposed, right? So even when you move to the latest version, you can still harden it and do things to lock it down: enhanced mitigations, custom builds. Anyway, that's a great example of how you can avoid some of that exposure, although not in 4.1.1. But still, that's a lot better than 100%.
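As a rough sketch of the same idea from the auditing side (not something from the talk): you can at least check what a host's runtime is actually linked against. The range logic below assumes the published Heartbleed window, OpenSSL 1.0.1 through 1.0.1f:

```python
import ssl

# Heartbleed (CVE-2014-0160) affected OpenSSL 1.0.1 through 1.0.1f.
# Letters map to the patch field: 1.0.1 itself is patch 0, 1.0.1f is patch 6.
VULNERABLE = {(1, 0, 1, patch) for patch in range(0, 7)}

major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
print("linked against:", ssl.OPENSSL_VERSION)
if (major, minor, fix, patch) in VULNERABLE:
    print("VULNERABLE to Heartbleed -- patch, or rebuild without heartbeats")
else:
    print("not in the known-vulnerable range (which is not the same as safe)")
```

This only tells you what this one runtime links; a fleet-wide check would run something like it on every host.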

A good example of this, or the other thing we have to do, is we have to not be repeating our mistakes. Making a mistake once, you can get away with that sometimes. Doing it a few times, or keeping on making the same errors, that's different. My favorite example of this is RSA SecurID. RSA had a breach in March of 2011 and they were forced to announce it. We were a customer at the time; we used their SecurIDs. And they told us that hey, it was an internal breach and there isn't anything that you have to worry about right now. And we got a little nervous about that and we thought, hey, what

would a breach at this company expose us to in the worst case? And we thought about it, and we looked on their site, and we figured out that we were able to reorder RSA tokens without having to resynchronize them with our server. And that meant to us that they must have a copy of the seed inside there somewhere. We thought, whoops, we accidentally escrowed the keys to our environment with a third party. We didn't realize we were doing that. They certainly didn't go out of their way to explain that their architecture was fundamentally insecure and that they were keeping a key for if you lost it. So we migrated off of RSA SecurID to

Google Authenticator. And we advised some other people to do that. And actually one of their guys got up and said we were irresponsible for being so alarmist about this thing. And then about a month after that, the Lockheed Martin breach came to light. And it pointed out that not only did RSA have the seeds that they didn't admit they had, but the seeds were the target of the attack, and they were stolen by a group of Chinese hackers who then used them to attack this rather critical defense contractor, who has such good security that when they did that, they were immediately caught. So, I mean, hopefully they didn't also attack other ones.
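For a sense of why that migration removes the escrow problem: TOTP, the scheme Google Authenticator implements, is small enough to sketch in full, and the shared seed lives only on your own server and the device, with no vendor copy. A minimal RFC 6238 sketch (the demo secret is the usual documentation value, not a real one):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step            # time-based counter
    msg = struct.pack(">Q", counter)              # 8-byte big-endian
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The well-known documentation secret, for demonstration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server stores that seed itself and compares codes at login; there is no third party that could be breached and hand the seed to an attacker.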

But anyway, I was feeling extremely smug, because just before I read that article, I used Google Authenticator to connect into the VPN. And I got this nice little message. We were lucky; our IT guys would have been in a lot of trouble if they hadn't been clever enough to figure that out. We're a security company, really. Classic. So where are we gonna place our bets? Usually we're not talking about different platforms when we're weighing long-term support versus the latest thing. Usually we're talking about the exact same thing. Windows 7, that's under long-term support now. It's not the current version of Windows. It's gonna be supported for a very long time, probably the lifetime of most applications. And if you look at

Windows XP, that thing was supported forever. It's dead now, but people are still using it. SQL Server 2012, most people aren't running that. They're still on earlier versions of it. It has a very long support lifetime. The old Red Hat Enterprise Linux, that stuff, people use five more commonly than six, certainly more commonly than seven, and four is not uncommon. That thing has a 10-year support life, right? And in its final throes, it still gets patches way after you think it should be gone. So if we're targeting one of these platforms, I know sysadmins that will always hold on to Red Hat 5 and 6. It's what they consider to be the best versions of Red Hat ever. They're not keen on a newfangled thing. So yeah,

I kind of have to wonder what's conservative here. If you look at Linux, the Linux kernel team has a special team called the stable team. And that stable team handles version 3.15 of Linux today, and 3.16 is unstable. The latest version of Ubuntu, the 14.04 LTS, actually shipped with a 3.13 kernel. And that's not even the most recent stable branch. So that thing's gonna be supported for quite a long time. There's different groups that you can kind of appeal to. So obviously you can't be deploying every new version of the Linux kernel in prod. It's just impractical if you have any kind of scale. Maybe you can do it on your laptop with Gentoo or something, but

that's not how most people do it. But you have to kind of wonder what's conservative here. And I don't think that this long-term support version strategy is going to hold on after I give you some arguments. But also, if you look at Apple, one thing I like about them is that I think they see the writing on the wall for this. They've got iOS users upgrading really well, moving to the new iOS version quickly when it ships, as opposed to Android. And they know what a big advantage that is. So if you look at Mavericks, which is version 10.9 of Mac OS X, they've stopped charging for the upgrade. And I think they're trying to get it so that more people adopt a new

version. The new version that they've announced, which is called Yosemite, the 10.10 update, that one's also gonna be free. So they're definitely moving to a model of pushing people to be on their most recent version; it just keeps their costs down. I think another aspect we need to think about whenever we're doing this is what kind of vendor we're gonna get. It's really important. I see this situation all the time, where a bank, or some kind of company that usually is not a technology company, has chosen badly. Technology companies' engineers naturally know the difference between a dependency that they want to maintain and one that is just some website that they found, maybe written by a university student, that stopped being maintained an

hour after the first release was made. But you'll find this stuff, like crazy stuff, like OAuth libraries and things that are linked into apps, and you just have to remember what's going on there. So vendor selection is really important. We want reliable vendors: Microsoft, Red Hat, Apple. All right, so what's the problem with these old versions? Well, one big problem is that developers don't care about them. They care about the new one. Wherever the money goes, wherever the sales go, that's where the interest goes. Who wants that glorious position of being the developer in charge of the maintenance version of the product, right? No, you want to be the person making the new feature that everyone's excited about, so at the party you can explain, like, yeah, that's

totally my feature. We get a lot of abandonware in the world. If you end up integrating that, your life is horrible. So I think TrueCrypt did a real good service for us. They did this post the other day when they stopped, when they abandoned their software. And everyone thought it was a hoax. I personally was like, hoax, right? They're telling you to go and install BitLocker. By the way, that's what I've been using on my servers for three years. And that couldn't possibly be real. And we started looking and, you know, that looks pretty darn real. And I think they went in, they changed their site, so it said, this product is insecure. And everyone was like, well, they must know about something. And that's not the case. Any product

that doesn't have a team actively protecting it, working on it and fixing the bugs as they show up, is doomed to become insecure. And so as soon as they abandoned it, it was not an appropriate choice. You shouldn't be migrating towards this thing. It's toast. So I think that message was real, it was real smart, and it showed some real security sophistication, the kind of thing that you'd expect from a vendor of an encryption product. Another aspect, of course, is what people have us pen test. Generally, when we do pen tests, we're being paid to test the new release, the next version of the new thing. The focus is not on the thing that you're no longer making revenue from; it's on the thing that you want to make

more revenue from in the future. You have more money to pay to make that better. Scoping gives you an insight. So, in the best case, software's totally open, right? We have open source software, and we can see what people are doing, if they're being honest with us and relating to us, to the best of their ability, what security fixes they're making. And then we can say, hey, I know that this new version fixed a bug that wasn't fixed in the old version, so I have to migrate to it, or I can make an effort to backport that fix. Maybe it's a pretty complicated fix. I mentioned before that maintenance is a little bit less glorious

than some other jobs. So we should be able to see some of this in the open source community, because it's forced to be open. So I've got a couple of examples here of silent fixes. These are places where Linus himself, in some cases, has gone into the repo, maybe after being notified by a security researcher about a security problem, pushed a patch that fixed it, not mentioned its security implications, and told stable to pull it. And that's that. And then the Red Hat guys dig around a little bit, and a couple of weeks later, they figure it out. So if you look at that third link, you'll see that there was some progress on this, and people kind of started figuring out, hey, this is a

security bug. And so it got backported after about a month to Red Hat 5, and then a few weeks later, a fix goes in for Red Hat 4, and then a few months later, a fix goes in for Red Hat 3. People were grouchy about it. Another example here is this last link. There's a guy here complaining that an arbitrary kernel memory read security bug that was reported was silently fixed. And that's just not cool, right? You don't want to see silent fixes on bugs. I have friends, I hear urban legends about bugs, right? It's very hard to get on the lists where people discuss the details of what's going on with security issues, and if you go around and tell people about it, you don't stay on

the list. But I do know that my friends who are on those lists are the very most skeptical people I know about long-term support versions. And they often will do things like tell people, hey, make sure you have this patch in there. And if you're not a pal, you don't know. So how much better or worse do you think this is gonna be in closed source? This is all open source. The thing that makes this hard is the immense complexity, the fact that understanding the bugs themselves is a field where you have to actually be an expert. There's no NDAs involved. It's all just social and technical. And it's still too

complex for us to cut through. Alright, let's see. So older versions of commercial software are interesting too. It's not uncommon to see a new version of a platform come out and say something like, this is the most secure version of X ever. Our new release is the most secure version ever. What does that mean? Are they trying to tell you that the last version does not have security fixes that the new version has? Because what else could that mean? They're trying to say, look, this version's more secure. And some of that might legitimately be new security features, stopping something the old version couldn't. And some of it might be fixes, and some of it might be redesigns, so that the kinds of bugs that they expect

they still have in the old version are gone. And this kind of holds up when you look at the numbers. So far this year, Microsoft was extremely transparent about the security stuff they publish. No, I'm serious. It's easier to get their bug information and exploitability information than Linux's. They have a website, it's all formatted, you can go and read it, and they have an executive summary version, and they talk about things like exploitability metrics, and how easy this bug looks to exploit versus another one. So it's pretty good. So I used their numbers, and I looked over the 36 bulletins from this year, and that covered 157 separate issues. It's very common for three issues to be found in a parser at the same time, and

two of them affect one version, and one of them affects another. So sometimes they share a CVE. Of those, 39 of the issues were not affecting the latest version, but were considered "exploit code likely" for the previous version. So one version back of their software, and 39 issues changed from not exploitable to exploit code considered likely, which is their highest rating short of bugs that have exploits in the wild. That's a big one.
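A sketch of the tally he's describing, with a hypothetical data shape standing in for the parsed bulletins:

```python
# Hypothetical shape: one record per issue, with which supported version it
# affects and Microsoft's exploitability rating. Real data comes from the
# published bulletins; only two sample records are shown here.
issues = [
    {"affects_previous": True,  "affects_latest": False, "rating": "exploit code likely"},
    {"affects_previous": False, "affects_latest": True,  "rating": "exploit code likely"},
    # ... the rest of the 157 issues from the 36 bulletins
]

old_only = sum(1 for i in issues
               if i["affects_previous"] and not i["affects_latest"]
               and i["rating"] == "exploit code likely")
new_only = sum(1 for i in issues
               if i["affects_latest"] and not i["affects_previous"]
               and i["rating"] == "exploit code likely")
print(f"previous-only: {old_only}, latest-only: {new_only}")
print(f"ratio: {old_only / max(new_only, 1):.1f} : 1")  # his counts: 39 vs 13 -> 3.0 : 1
```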

13 issues, however, went the other way. They didn't affect the previous version at all, but they were considered exploit-code-likely in the latest version. So that gives you an idea. Just this year, and these are all supported versions, I'm not talking about XP, which isn't supported anymore: of the supported versions, the previous ones were three times more exploitable than the new one.

So be on the new one, right? Stick on the new one. The one that they're trying to sell you is the one that they have invested in securing. And if you're deciding to skate by on the old supported version, you're in a little trouble. I also ran these numbers for 2013, which is a full year, maybe a better data set. They had 106 bulletins in 2013, 334 different issues. Again, some of these share the same CVE number. And of those, 91 were not affecting the latest version but were rated as exploit code likely on the previous version. And, oh, I didn't write down the other number. Seven, yeah, seven issues didn't affect the previous version but were rated as exploit code likely on the current

version. So seven to 91, that's a thirteen-to-one ratio. So three to one this year, thirteen to one last year. So maybe people have started getting a little better at looking at Windows 8, and there's a little bit more sophistication in some of that, so that's starting to come up. But even if there's some good excuse for why this is happening, other than that this is literally a better software product, we should just be honest. Maybe researchers don't know it yet; they haven't bought the new computer. That still helps us, right? What we want is not to get exploited. So what do you think is conservative again, with this? Is it conservative to run that old algorithm that

you have lots of experience with, or to jump in and figure out how to make BitLocker work on a server, even though there aren't very good websites to explain it, and just do it right? It took a couple of hours the first time I did it, and it got locked up funny, and now it's working fine. It's worth it; you get good at it. Alright so,

Another big advantage of new designs is that developers are systematically working on getting rid of the bugs. Sometimes you get a worse new version. There's a program called Final Cut Pro X; it's kind of notorious. It was a rewrite, and they just branded it as if it wasn't. And so it had giant functional regressions and everyone hated it. That kind of thing is rare. When was the last time that you saw a release of Linux or Windows or iOS where the old version was actually more functional? There might have been a UI you liked better, things you liked better about the old version, but when was it actually less functional?

It happens occasionally. I don't think it's a big argument for sticking on the old stuff. So even if developers are totally slacking, they're not trying, they don't care about security anymore, it's just a monetization release: their tools, their frameworks, their testers are all smarter. They're all more capable. The next time through, they're gonna be able to find a lot more sophisticated problems. So all of the things around them have made the system more secure. This time when they compile, the built-in Xcode Clang static analyzer's gonna be better at finding problems. If they just release the exact same code base a year later, they probably are gonna end up

accidentally making a better product. And then finally, the last advantage of new software is those dishonest silent fixes that I complained about before. In the new version, they help us, right? We've got the advantage of the dishonest silent fixes. So

there's a couple of companies that I think can teach us a little bit about patching. These web companies mostly target consumers, and they're very used to patching quickly; they run the stuff themselves, they have software engineering cultures that are very strong, with a strong testing emphasis. And so when companies like Google go and release Chrome, they see it in the light of all the rest of the systems that they release. And then mobile platforms, they're changing a lot of stuff, right? From Apple's war on Turing machines to Android's kind of battles and issues with AppOps and root. So let's look at those and see what we can get out of

there. So Facebook had this very classic kind of hacker ethos. Move fast and break things. It's easier to ask forgiveness than permission, blah, blah, blah. So this hacker idea is that the benefit of change outweighs the risk of standing still or moving backwards, right? We're more afraid of failing to innovate than we are of failing to deliver the service. So this guy is not gonna fire you for innovating, right? If you take a risk and it doesn't pan out, I mean, if it's really dumb, maybe he's gonna sack you, but you can be pretty confident that the guy who says that a lot, and it's on the wall all over the place, that

dude is gonna be backing the people that take some chances and do the work. It is impossible to be this fast and loose with just smoke tests, right? Basic tests that show that when you plug in the radio, the smoke doesn't come out. That's not what he's using to back this kind of thing. You need to have real, serious regression testing infrastructure to be able to move quickly. And also, it's impossible for many businesses. If you're a big regulated bank or some kind of other lumbering giant, or you have products that are deployed that aren't connected to a network, or they connect to the network once every year, or that have extremely tight cost constraints, so they can't even afford much in the way of flashable

memory, this is gonna be hard for them. So Facebook, of course, changed their position in 2014, and they're now "move fast with stable infrastructure." And I actually think this is a more sensible approach for most people, right? Because what they're trying to do is build a platform other people can rely on, and Mark Zuckerberg, he's no dummy. So if you've got a bank and you find a cross-site scripting bug in it, how long do you think it takes them to fix it, if it's not being actively exploited? If it's being exploited in malware, then banks will often have WAFs and other kinds of things, which are very risky to use, but which you can use to temporarily make a little filter that blocks a particular attack.
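Something like this is all a WAF rule of that sort amounts to. A toy sketch (the pattern and app are made up), shown mostly to make the fragility he mentions next obvious:

```python
import re
from wsgiref.simple_server import make_server

# The one pattern we're firefighting today, nothing more. A stopgap while
# the real fix ships, not software engineering.
BLOCKED = re.compile(r"<script[^>]*>", re.IGNORECASE)

def app(environ, start_response):
    """Stand-in for the real application."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

def stopgap_filter(inner):
    """Wrap an app and reject requests matching today's attack pattern."""
    def wrapped(environ, start_response):
        query = environ.get("QUERY_STRING", "")
        # Note: percent-encoded payloads already slip past this check,
        # which is exactly the kind of fragility being described.
        if BLOCKED.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked\n"]
        return inner(environ, start_response)
    return wrapped

if __name__ == "__main__":
    make_server("", 8000, stopgap_filter(app)).serve_forever()
```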

But generally, these are glacially conservative, cautious companies. Go out and find a cross-site scripting bug in Facebook. Facebook invites you to do this, by the way. They have a bounty; I have a hat. They like it when people show them bugs in Facebook. So go out and find one and report it to them responsibly, and watch how long it takes them to fix it. And be careful when you're testing, because they also have been known to just notice that people have found one, and fix it. But it takes them minutes, worst case hours, to patch a bug like that. They expect that bug. It's a reasonable thing to expect, you know, that a web app's gonna have a cross-site scripting bug. So they're really, really good

at fixing that bug. And that's very cool. No bank can do that outside of a WAF, right? And a WAF is a very band-aid kind of solution. It's incredibly fragile. If you encode it another way, or do it in another language, or fragment the packets so that it can't see the stream correctly, you know, WAFs all fall down. They're great for stopping mass malware exploitation on a really bad day, but they're not software engineering.

It's very limited. Google. Google got a lot of flak from people like me, actually, when Chrome came out with silent updates. Silent updates are very different from silent fixes. With a silent fix, the idea is you just don't mention it to anyone when you slip something in, and you let them run the old, insecure thing. This is just the exact opposite. You silently, without giving them an option to not install the new update, push them up to a new version. And that protects them magically and transparently from whatever bug it is, hopefully before it gets exploited. And this little graph shows how, when they started doing this, everyone started being on the patched version of Chrome

right after they released it, and their competitors' browsers were left vulnerable. So the window of vulnerability is massively decreased relative to the competition. I feel like they should give out big raises for that, and they probably did. They also changed my opinion about this. I come from the land of WSUS and managed deployments, where it's very important not to break anything. If we break anything, that's terrible. But that slows us down terribly in our deployment of patches.
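The shape of a silent updater is simple. Here's a hedged sketch with a hypothetical update endpoint and installer command; the point is that the user is never offered the choice to stay vulnerable:

```python
import json
import subprocess
import time
import urllib.request

# Hypothetical endpoint and installer; only the shape matters:
# check, fetch, apply, no prompt.
UPDATE_URL = "https://updates.example.com/stable.json"
CURRENT_VERSION = "37.0.2062.94"

def check_and_apply() -> None:
    """Poll the update channel and apply anything newer, silently."""
    with urllib.request.urlopen(UPDATE_URL, timeout=30) as resp:
        latest = json.load(resp)
    if latest["version"] == CURRENT_VERSION:
        return  # already on the newest build
    pkg, _headers = urllib.request.urlretrieve(latest["package_url"])
    # Real updaters verify a signature here before installing anything;
    # "installer" is a placeholder for the platform's install step.
    subprocess.run(["installer", "--apply", pkg], check=True)

if __name__ == "__main__":
    while True:
        check_and_apply()
        time.sleep(6 * 60 * 60)  # poll a few times a day
```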

This is a really interesting approach, and Mozilla just came around to it; Firefox is starting to do this too now, which is very cool. Also, Chrome made the browser open source and they made the updater open source, so that people can see what they're doing, and there's a little bit of accountability that comes from that, and generally it's been accepted. One tip I have for you if you're going to do silent updates: don't introduce them within a month of announcing you're also going to introduce DRM. The feature of DRM is anti-user. It's a thing that you do to the people that use your computer in order to convince someone else that it's okay to let them use content. And so

because it's got this anti-user intent, the feature, I mean, it's very hard to say it's not malware, right? And that erodes user trust. And it erodes user trust in exactly the way that user trust is necessary for silent updates. You have to really know that this company has your best interests in mind and they're not gonna start monetizing your computer in a botnet. And I think we can all feel pretty comfortable that Mozilla's not gonna do that, because they have such a big reservoir of trust. But if you're not Mozilla or Google, this is gonna be a much harder sell. I should also point out, Windows 8 makes it a lot easier for users, end users especially, to just let everything stay updated. And

that's a big improvement for me. Alright, mobile. So iOS, the big story about iOS is that they have figured out how to get people to adopt new versions. And that's really crucial, because iOS versions often include hundreds of security fixes. Serious bugs fixed quietly, with a big long list of thank-yous. When iOS 6 came out, there were over 100 bugs fixed in WebKit, with thank-yous to people on the Google security team and this tool called ASan. They're making serious improvements to the platform. And because they own the whole stack, they're able to push it out, to make it free, and to get users to adopt it really, really quickly. So they're getting the advantage

that Google was getting out of Chrome, on iOS. And, well, good job. There's one little criticism I could make of their system, which is that compatibility comes partly by not changing apps.

When you install a new version of iOS, it doesn't change out the version of the libraries that you linked against as a developer, so the user experience of your app doesn't change. In older versions of iOS, when you clicked on links in the WebKit view, they would not change color; then that was changed so they would change color, and that experience didn't change until people upgraded the app. So that static linking of those libraries actually does leave users exposed for quite a bit longer than if they just kind of forced them to upgrade, but Apple is very interested in user experience, and that would change and break user experience a little bit. Overall,

they've done a great job. Android. This is one of Android's main security weaknesses: Android doesn't own the whole platform. Mostly, their stuff comes from partners. So you get OEMs, and the OEMs have carriers, and they all kind of collaborate to make releases. And it all slows down. And guess what OEMs are interested in? They're not interested in the one they released six months ago. They're interested in the one they're releasing six hours from now. That's what the marketing money's going to, and that's where the support is. And that's why, when you look at Jelly Bean there, Jelly Bean's from 2012. And before that was Ice Cream Sandwich, and that's still represented on this chart with a pretty good slice. That slice is

probably as big as the whole slice for all the versions other than iOS 7 on iOS, right? And that's from 2011. You know, when Android changes their platform a lot, they make a semantic change in the API, they change the API level. And these guys actually ship major named releases like this that have multiple API versions inside them. So there are app-compatibility-breaking API changes inside Jelly Bean, inside Ice Cream Sandwich. You get really weird things, where a release like Ice Cream Sandwich has address space layout randomization support, or better address space layout randomization support, and a release like Jelly Bean adds the removal of the READ_LOGS permission in a minor dot release inside there. So not

all of them have that. But that's a huge thing; one of the main security vulnerabilities was information leaking through logs into other apps. And that just got totally pulled out. I mean, you could backport that fix to older phones, but who wanted to? And so it's a real hard thing to be on an old version. And you could be on Jelly Bean and still not have that fix, because it came in a dot release. Another thing that they did was they added multi-user support. Think about what that means for an operating system, to add multi-user support. That's a pretty big tweak.

KitKat had SELinux, restrictions on external storage, all kinds of cool security stuff, and unfortunately they also took away a security feature that went into 4.3. In 4.3 they added this thing, AppOps, a nice privacy feature that let you stop that app you downloaded from reading your contacts, which iOS can do. And then they pulled it out in a dot release for KitKat that also included a security fix, so you couldn't not apply it. So that's too bad. I didn't upgrade for a while, but I had to eventually. And then I rooted my device, and life is good again. Now you can go and use the CyanogenMod Privacy Guard. It doesn't matter that the company that sold the phone doesn't want

to support it any more. This nice open source team has got a new, secured version that can run the latest, greatest thing, and life is good again.

So, all that said, I really don't want us to go crazy. I don't want us to be on the true bleeding edge, but on stable versions, like Windows 8.1, that will get fixes in short order, not

the dev version of Chrome. Also, it can be very reasonable to take a site down. One of the times that happens is if you have a serious vuln and you can't fix it; then you just take your system down. That should be a plan. It's not your first plan, but when Revenue Canada suffered from Heartbleed, they weren't in a position to push a fix like that. And so they pulled the site down. And they still got a lot of information stolen, but by pulling it down, they minimized the extent of the breach. And that sysadmin did a good job. Those security people, yep, anyway.

Another thing I wanted to say about WAFs and IDSs: they can be useful, but generally they're claimed to do more than they actually can. I've definitely seen uses for them that were excellent, like when people have router configurations that have occasionally been lost, and then traffic that shouldn't have been allowed across networks showed up, and that problem occurred twice, and it didn't occur three times, because after it occurred twice, an IDS went in. And it wasn't looking for anything fancy; it was just looking for traffic that shouldn't be there. Not trying to look for attacks from 10 years ago that aren't gonna be there, feeding you lots of false positives. The only thing it could

say was, bing, your router's messed up. And then they can come and fix it. Anyway, that's a great way to use it. Target, of course, had FireEye, and they ignored its warnings. And so they were probably in worse shape than if they hadn't had it. Because now, not only did they miss the breach, but they had been warned. Neiman Marcus had a breach that took three and a half months to spot in 2013. And it's been reported that they had an IDS system that alarmed 60,000 times on this breach. And so everyone's like, well, how can you miss something that's alarmed 60,000 times? And I would have said, how can you miss it if it alarmed once, right? It didn't alarm once. It alarmed 60,000 times. And they said

that that accounted for 1% or less of the daily entries in the endpoint logging solution. So, yeah, that system is just there to make you look like you're doing something. You know what I mean? It's certainly not gonna stop an attack. Anyway, hopefully we won't over-invest in that. And think about the investment that that stuff represents. That's a huge security investment. If you spent that money making those systems so that you had better regression tests for them, putting in exploit mitigations, making better network isolation, DMZ isolation, tighter firewall rules, updating to the latest versions of operating systems,

you'd have a much bigger return than that.

Alright, so near the end here. We really need to plan to stay updated, and what that really means is getting buy-in. You want the assent of your management, the team, from the top ideally. You want people to say, hey, we're willing to spend more than we have to to deliver this service, because we want it to be more secure. And so we're gonna do things like make releases that don't have any features, that just add security enhancements, that just take us up to the latest, greatest versions. We're willing to take a little extra pain, not wait for some other people to work out the kinks of getting this to work, but to work them out ourselves and eat some

of that pain, just for our most critical systems, right? Just for the things that we really care about, that we want to have a competitive advantage on. And then we need to invest in getting really good at pushing updates. Pushing updates means you've automated deployment, but it also means you have automated regression testing, and you get good at knowing when a system is configured in a way that works. You've got tests for all those problems that have shown up before, and you can run them in a fast way. You have pre-production systems that are actually online. One of the big things I find is that it can be hard to get time on a pre-prod environment at many companies. If you

don't have a pre-prod environment ready to test something and Heartbleed happens, how do you fix that? You always have to have a highly available pre-prod environment. And if you were planning to make a zero-change release, or just a security enhancement release, well, you can use that pre-prod environment, throw away the release that you didn't need (right, it was nice to have) and fix the thing you have to fix. That's all very useful.
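A sketch of what those rails might look like; the commands are placeholders for whatever deploy and test tooling you actually run:

```python
import subprocess
import sys

# A zero-feature, security-only release rides the same rails as any other:
# deploy to the always-on pre-prod, run the regression suite, then promote.
# "deploy-tool" and "run-regression-suite" are hypothetical commands.
version = sys.argv[1]
STEPS = [
    ["deploy-tool", "--env", "preprod", "--version", version],
    ["run-regression-suite", "--env", "preprod"],   # the gate
    ["deploy-tool", "--env", "prod", "--version", version],
]

for step in STEPS:
    print("running:", " ".join(step))
    if subprocess.run(step).returncode != 0:
        print("gate failed -- prod untouched")
        sys.exit(1)
print("promoted to prod")
```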

All right, and then: budgeting for enhanced mitigations. Enhanced mitigations are things that you do to your systems to make them harder to exploit. I generally avoid implementing expensive ones; I like to stick with the free ones or the cheap ones. The cheap ones are not only generally good, but once you've done them, then you can invest in your fancy IDS, IPS, whatever.
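As an example of checking for the free ones on Linux, here's a rough audit sketch (heuristics, not a definitive checker) that looks for PIE (which enables full ASLR), a non-executable stack, and RELRO in an ELF binary:

```python
import subprocess
import sys

def audit(binary: str) -> None:
    """Rough audit of cheap exploit mitigations on a Linux ELF binary."""
    header = subprocess.run(["readelf", "-h", binary],
                            capture_output=True, text=True).stdout
    progs = subprocess.run(["readelf", "-lW", binary],
                           capture_output=True, text=True).stdout
    dynamic = subprocess.run(["readelf", "-d", binary],
                             capture_output=True, text=True).stdout

    pie = "DYN" in header                             # PIE -> full ASLR
    nx = "GNU_STACK" in progs and "RWE" not in progs  # non-executable stack
    relro = "GNU_RELRO" in progs                      # read-only relocations
    now = "BIND_NOW" in dynamic or "NOW" in dynamic   # full RELRO (rough check)

    for name, ok in [("PIE/ASLR", pie), ("NX stack", nx),
                     ("RELRO", relro), ("BIND_NOW", now)]:
        print(f"{name:10s} {'ok' if ok else 'MISSING'}")

if __name__ == "__main__":
    audit(sys.argv[1])  # e.g. python audit.py /usr/sbin/sshd
```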

A great example of this is the Enhanced Mitigation Experience Toolkit, EMET, from Microsoft. You may have heard of it from CVEs, when you were reading about Internet Explorer vulnerabilities and they said, hey, we don't have a patch for this yet, but if you use EMET, nobody's figured out how to exploit it there. That's pretty cool stuff, right? They put in protections in memory, a lot of stuff to make it harder to build your ROP chains and generally interfere with exploitation. So this helps a lot with the arbitrary code execution bug, which is the main one people get very excited about. It's not very useful against DoS or data exposure or access control bugs, but it's a good tool to have. Another one of these that I

really like is a set of nice kernel hardening patches called grsecurity. And it's very impressive. It's also a little hard to use. It used to be that we had sysadmins who were kernel hackers, and that's the kind that you have to hire if you want to do this, right? If you go and you meet the people who are site reliability engineers at companies in the valley that are huge, like Facebook and Google, look at what they're offering on the job listings. Those guys are really serious engineers, right? And they have them doing very serious security engineering for kernels and things like that. Anyway, if you can get

people like that in your employ, for at least your most important systems, they can help you incorporate some patches from grsecurity. And you don't necessarily want to target the product's first release date, because that release is already gonna have everyone at the edge of their seat getting every possible feature in. But if they target a release, they're gonna have great tests for it; they need to be able to know that it works before they push it into prod. If you can take that same testing infrastructure and do a release a month later, where you don't change anything functionally, but you just add in a bunch of these exploit mitigations,

that's a big win. And it can make exploiting your bugs a little harder than exploiting your competitors' bugs. It's real nice when I see one of my customers' names in the story that says, all of these companies were hacked in this space, except for blah, blah, blah. It's like, yeah, I know why that is. Right? And anyway, so roll this stuff out over time, hardening the kernel more and more. These things are very piecemeal. And it's fun; you see bugs like this PTY privilege escalation bug, CVE-2014-0196, where four different aspects of grsecurity blocked the exploit. And that's pretty sweet, and you don't have to take all of grsecurity, you don't have to touch the whole thing. So

anyway, it's a good investment. I'm about out of time here. So I'll just say, in conclusion, it's often safe to be on the cutting edge. I hope that you buy that, given the three-to-one advantage on Microsoft vulnerabilities, the protection against silently fixed bugs, and the fact that you have to build this kind of agility anyway, because you need to be able to respond to Heartbleed and other events like that. Thanks. Are

there questions? I can run around with the mic or something.