
Thanks so much to all of our sponsors. Again, we could not put this conference on without them, so I just want to give a major shout out to all those, especially our two leading sponsors, Fitbit and HackerOne. So thanks to our sponsors.
So I just want to introduce our keynote today. Jason Truppi is a career technologist turned FBI agent and is now a tech entrepreneur. He has many years of experience working in information systems and security. More recently, he was an FBI cyber agent in New York City, where he worked on some of the nation's largest national security and cyber crime intrusions. He was later promoted to supervisory special agent in Washington, D.C., where he was responsible for major data breach, hacktivism, and cyber extortion cases around the country. As a director at Tanium, he is helping to advance security products to enable corporate network defenders on an even larger scale. He's applying his skills and experience in incident response, investigations, penetration testing, analysis, and threat intelligence to help solve the cyber
crime epidemic that we face today. And with that, I turn it over to Jason.
Thanks for waiting on all the lines. I think these guys didn't know that this many hackers were gonna be up this early. So, you guys surprised them. I know you're responsible.
Thanks a lot for having me here. I love B-Sides. This is my first time at B-Sides SF though. So super excited to keynote for you guys. I love B-Sides because it's definitely more personal. I really think, going to conferences over so many years now, that a lot of conferences have lost that identity. And this is a good place to actually collaborate and talk. And feel free to talk to me after the fact. I'll be around all day. So I'm again super excited. So, a little bit of background on myself. You know, I started out as a hacker, 10 years old, reading 2600 Magazine, hacking and phreaking, terrorizing my friends and family for many years. And over many years, I kind of
had the hacker's dilemma. The hacker's dilemma is, at some point, you need to decide: do you continue on that life of terror, or do you want to go make money? And so, I decided to go make money. I started fixing computers when I was about 15 years old, working for a little local ISP slash PC desktop support place. I was there for many years. I then got into network administration and system administration. I moved into telecommunications for a bit. And then I moved into network administration and network security. And then I got a phone call from the FBI to come work for them. And that really transformed my career pretty significantly. When I got to New York City for
the FBI, I went through Quantico, went through all the stuff you see in the shows, carried a weapon, had a vest, did push-ups, but I also had this other skill set that was very much needed in the government at the time. And when I landed in New York City, I entered on a squad that was brand new for that time: a national security cyber intrusion squad. That wasn't really normal in the Bureau, to be working that type of case. And so I basically had free rein to start developing a program that was actually gonna be efficient and work. And so I worked Russia, China, Iran, our top three nation states that were hacking the
Northeast region quite frequently. And I did that for many years, and I'll talk about some of those experiences in a moment. And then I was promoted later as a supervisor in Washington, D.C., where I started working hacktivism, large-scale cyber extortion cases, and large data breaches. And so I've been around during all the major data breaches of pretty much the last decade. I've had at least some direct access to them or had some involvement in most of them. And when I left government in 2015, I really wanted to go somewhere that was gonna change the way I was thinking about how I was doing my job. And the government was very reactive. So, you know, when someone gets hacked, you'd
send out a bunch of FBI agents, you go try to help that victim company. And what I learned is just over time, it wasn't working, right? We were just being reactive, we weren't being proactive about it. And so when I left for Silicon Valley Tech, I wanted to do something that was actually proactive, helping those same victim companies, who I now call my customers, to protect themselves. And that's what I do at Tanium now. I do a lot of research and development with that team. So,
What I want to do today is talk about my experiences, what I perceived being in government and now back in the private sector, some of the parallels that I've seen, some of the illusions that I had, and many of you may have these same illusions about how things worked, and then talk about the realities behind them now that I talk to Fortune 500 companies on pretty much a daily basis. But before that, I'll talk about some of the cases that I worked. You know, I visited banking institutions quite frequently in New York City, and the illusion here is that banking institutions are safe. Right? You guys know this. You know, I would argue that they're actually the most vulnerable out of any institutions
that we have pretty much now. And the reason for that is not only are they huge conglomerate corporations, they have a lot of computers, they have a lot of infrastructure, and it's really, really hard to maintain at scale. And they're really just getting by day to day just by keeping the infrastructure up and running. Security is at the forefront now, but when I was in the FBI, there really wasn't a huge concentration on security. And then you have banking institutions which have a lot of money, right? And people want that money, and hackers want that money, and nation states want their intellectual property. And so they're heavily targeted. So it's basically many against one for a lot of these banking institutions. So, you know, my perception of the
banking institutions was that they were incredibly secure and they were the best in the industry. And what I found out, that was not necessarily the case in all the institutions I went to. Now they've completely transformed that industry and now they understand the threat, but back then they didn't. And even I didn't understand it then. Another case I worked, probably one of my first cases, was the Times Square bomber, Faisal Shahzad, who planted a car bomb at 45th and Broadway in New York City. And the illusion here was that terrorist organizations were not technically savvy. And this was the first case that changed the perception of the government. Faisal Shahzad actually had an
IT background. He was using virtual private servers and VPN connections to communicate with the Taliban in Pakistan. And he was very good about his OPSEC, extremely good. Probably the biggest discovery we had in that case was when I decrypted one of his encrypted RAR files that had the actual bomb plans. That was pretty significant to show my superiors that, hey, he was using encrypted RAR files, which doesn't sound that techy or advanced for you guys in the room, but for terrorist organizations, that was pretty advanced. And they've now significantly increased that type of obfuscation and those techniques, so. Another one was the Nasdaq hack of 2010.
This one I can't really talk too much about because I think the investigation is still ongoing. But the illusion here was that, or I guess it's not really so much an illusion, but there was a lot of talk at that time of the trading platforms potentially being compromised by hackers. And so one of the worries was, can you manipulate a trade in a high frequency trading situation on a platform? My illusion was yes. I think yes for everything. I think anything can be manipulated. All the things that I've seen, all the investigations I've done, I've seen hackers do everything to manipulate anything. It's pretty incredible. But what I did learn about this is it's very, very hard to manipulate a trade when they're doing microsecond
transactions, and then all of those trades still get reconciled at the end of the day. So the trades that you do will get reconciled at some point by a banking institution or a trading firm and they can detect fraud pretty rapidly. So my illusion there was that it could just easily be manipulated and hackers are just gonna get away with all these trades. That's not so much the case. What you're seeing now is there are hacking teams buddying up with trading firms and they're trying to manipulate the overall system by getting insider information. That's really where the hacking is happening, it's more for insider information. So that was a surprise to me when I was going through this scenario during
that time. Another case I worked was the Goldman Sachs programmer. This is another high frequency trading case. This guy, Sergey Aleynikov, was working on a high frequency trading platform that was making Goldman about $500 million a year.
He exfiltrated all the code, not just the code he was working on, but the rest of the entire project, went to another firm, got paid three times as much, and then they stood up a high-frequency trading platform, which back then took a lot of research and development, and he did it in six months. And so the interesting thing about this case is that Aleynikov actually was freed. He went to the second circuit appeals court, and it turns out that when we charged him with economic espionage, there was a requirement in that federal charge: whatever you were stealing from another company had to be a tangible good. Okay, that was the terminology inside the federal statute. And what got determined
in that federal appeals court is that computer code was not considered a tangible good. So he actually got away on a technicality. If you read the appellate court's decision on it, it basically said he's guilty of stealing this information and going and using it to bring up this other company, but code was not a tangible good, which I thought was a pretty interesting technicality. That has since changed, so nowadays code is a tangible good. And so if you do steal code, you can get arrested under a federal statute.
The latest one that I worked right before I left the FBI was the Iranian hackers that infiltrated a dam in Rye, New York. Have you guys heard about this one? I'm sure some people have heard this. We started looking at Iranian hackers for many years. They were able to infiltrate a SCADA system that was attached to an open Sprint cellular card, and they could manipulate the dam controls. The illusion here was, you know, the news media made it a huge deal, but that dam was actually just a sluice gate that was protecting a small little town in Rye, New York, and it basically held about four feet of water. So it wasn't this massive dam that we thought it was. When we first got the intel that it was
a dam, we didn't know exactly where, and it was pretty scary. I think the federal government was pretty scared about that potential. But the idea behind this is scary, right? I was following the Iranian hackers when they basically started out as script kiddies, using other people's tools. Then they started developing their own tools. They became masters at SQL injection, even created their own tool, Havij. And they did all of that within about a 12-to-18-month timeframe. So it was really incredible to see their progression and how rapidly they started getting good. Then they hacked this dam and they hacked the SCADA system. While what they did was technically pretty easy,
they were still able to manipulate controls and get to the backend controller, which was pretty scary. And that's what led to the Iranian hacker indictment last year.
That was after I already left, so everybody got the awards and I didn't get an award. You know, whatever. It was good work for the country. And the idea of the indictments, for the federal government, is to show that, hey, we know what you're doing, so stop it. We never got our hands on these guys; they're still in Iran, probably still hacking us to this day, I'm guessing. But the point is, hey, you know, this is a mechanism for the federal government to show that we know what's happening.
I think probably one of the most devastating hacks of my career was the OPM hack. Obviously, all of my fingerprint data, all of my information, all my family's information, all my friends' information got stolen from this hack, and it likely is in the hands of a foreign nation state. The scary thing about this is I had the illusion that my own data in the government was protected in a way that I would expect it to be protected, and it wasn't, unfortunately. Even, you know, our FBI systems at the time were really heavily protected. I felt pretty confident about what they were doing, but sadly, all of my actual personal information lived at OPM, which I
didn't know until the hack, and now all that's gone. So for the rest of my life, I can't trust my fingerprint on any biometric device. And I'll talk about a little bit of that when we talk about the future of biometric compromise and things like that. Some of the very, very last things I started working were the hacktivist DDoS attacks on Xbox and PlayStation. Hacktivism has slowed down quite significantly. I'm not sure exactly why. I mean, it was very, very rampant right before I left in 2015, but I've seen a major shift into probably more discrete operations. I mean, the problem with hacktivists, the reason why they mostly got caught, is because they would
tell people what they did. It's kind of basic human nature to tell your friend that you did something, and that's generally how most of them got caught by the FBI. Okay, so I showed you some of the cases. The illusion there is that I'm this great FBI agent, and I was doing all these amazing cases, and I was doing good work, and I felt like I was a good case agent, but I was at the right place at the right time with the right skill set. The reality behind what's happening right now in the federal government is they have a thousand FBI cyber agents that are working just as many cases. I had more cases than I could actually work because I didn't have time; I had a major workload that I couldn't ever accomplish in a lifetime. That's how many cases were coming in back during that time. So when people asked me back then, is this a problem? I said, hell yes, it's a problem. It's a major problem, and we don't have enough people working it. So for those in the room that are looking at potentially helping government in some way, I mean, they need help. But government can't be our only savior. We have to save ourselves, we have to use software to save us, we have to use our brains to
save us, and what I wanna talk about today, thank you, got a slow clap.
What I'll talk about today is some of the parallels that I saw being in federal government and now back in the private sector, and then some of the realities and illusions that I had. So one of the biggest parallels I saw on both sides, and I see this every day, is visibility. Most large enterprise organizations, and I say large, like 10,000 computers and up, and maybe 10,000 employees and up, have a huge problem with visibility. They have no idea how many assets they have, they have no idea how to protect them, they have no idea whether they're exposed to known vulnerabilities. They have all sorts of tools. They might have the
money to buy the tools, but they don't know how to implement those tools. It's a huge, huge problem. It's still a problem to this day. How do you make accurate decisions about risk and about security policy when you don't even know what you have? I've talked to some large enterprise customers and I said, how many assets do you have right now on your network? And I'll get an answer like, I have about 100,000 to 200,000 computers. I'm like, that's a 100,000 computer gap, right?
That's not good. But that's the reality. And this kind of excludes some of the banking institutions, because they've done a lot in security. But look at some of the large retailers or large medical companies that don't have that level of visibility. And those are major, major problems, right? The other piece I saw, I'm a huge firm believer in the three P's principle, people, process, and product. I would say the majority of customers that I go to or people that I interface with have one of these three things. They might have really good people, but they have no process and no good products, no good tools. Like in government, I used all open source tools for every investigation I did. Some
other organizations that are a little more progressive have two of these three things. They might have really good process and really good tools, but they have no people to use them. Right? Banking institutions are kind of like that. If you guys want to get a job, there's about 60 to 100 open billets for every bank out there right now, if you want to live in New York City or the Northeast. That's because they don't have enough people. They have the money, they have the processes, but they don't have the people. So you need to strive for all three of those things in your organizations. You should have all three of those things working in concert
to really be effective. The other one is, you know, principles of basic hygiene. I remember RSA, I think it was RSA 2012, and I was looking at all the sales bros getting up in front of the mic, and they were talking about the APTs flying in from over here, and the APTs coming up from the ground, just waving their hands around, just talking nonsense. It was basically fear-mongering. And realistically, most of those organizations that are buying those tools, those advanced tools, don't even have principles of basic hygiene in their own environment. It's the same people that are telling me they have 100,000 to 200,000 machines. You need to get those basic principles down first, then you can start going after
the APT. So even then, you're fighting a losing battle most of the time, unfortunately. Another parallel I see is that the perimeter is you now. It's every mobile device, it's every laptop that's mobile. Most workforces are working from home nowadays. The traditional network is really gone. The perimeter is completely dissolved. Everybody says this in a lot of talks, you've probably seen it for the last couple years, but it really is true. It's really, really hard for organizations to defend the perimeter now. And it's why there's a concept in security now called micro-segmentation. Essentially, we need to segment every single device that we have. And it needs to have access controls specific to that device and
that user, and every single individual asset needs to have that. And that's gonna be the future. I don't know anybody that's really doing that at scale yet, but micro-segmentation is gonna be the future of security. Another one is the need for rapid response. I have a whole couple slides on rapid response, and I'll cover that in a minute. And the last piece is just overwhelming choices. I mean, some of you guys might go to RSA. I mean, look at that. It's kind of disgusting, right? I can scream up and down and tell you that Tanium is an amazing company, but we're just a little blip on that screen here. The problem is,
look at a CISO at a company, a Chief Information Security Officer. They have to decide which products are gonna be good, which combinations of products are gonna be good. It's so hard. They can't spend hours and hours and hours of their day trying to figure this stuff out. It's very, very difficult. And everybody wants the big red easy button. Right? By the way, that's Spanish for easy, if you didn't know. Turns out the easy button's trademarked, so I couldn't put it up here.
But this is what everybody wants. Seriously, everybody that I talk to wants this big red easy button: hit it, and I'm secure. I mean, it's the most ridiculous thing. And I know it's a little exaggerated, but when you really get down to talking to executives in large organizations, that's what they want. They want this big red easy button, and it just doesn't exist. There's no silver bullet. You guys all know this in the room. We're gonna be talking about vulnerabilities all day today, how to hack things. I mean, again, it's many versus one, and it's gonna be a consistent problem, and you just gotta get better, get your teams more tuned,
and get better at your processes. So let's talk about rapid response a little bit, and some of the illusions that I had, and some of you may share these in the room. Every large enterprise has an incident response team, right?
Alerts lead to positive findings and outcomes, right? My favorite talk title of the whole B-Sides is "Let's Make Alerts Great Again," so good on you. All our primary software tools work great and they scale perfectly. Same company, same team, right? And workflows are simple and automagical. Raise your hand if this all fits your organization right now. Yeah, there we go. You guys are lucky. How many people? Like three people in the room maybe? Right. So what are the realities? IR teams are still in full development. If you exclude, again, maybe the large banking institutions that just have an ungodly amount of money to spend on this, most IR teams are still evolving. The majority of IR teams that I deal with are a
former IT guy that knows a little bit about Unix and a little bit about Windows. Those are the teams that get together during intrusions. And they change frequently. It's really hard to maintain those teams because of breach fatigue, and I'll explain what that means in a couple bullets. One of the other things that I see a lot is that alerting is just out of control. I mean, the analysts are just flooded with alerts now. And when you saturate them with alerts, the alerts just become completely meaningless. So I agree, let's make alerts great again, right? We should be alerting on significant, minimally false-positive alerts. That's the true idea of getting your alerting and getting
it working. You know, I feel like bad alerts turn people into security goldfish. That's my term for it. It's like, here's an alert, so I'm gonna go this way. Oh, here's another alert, I'm gonna go over here. It's like tapping the window for a little goldfish. And they never actually accomplish anything, they never get to the root cause analysis. And it just, it burns out your teams.
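One small way teams fight that alert flood, sketched here in Python under my own assumptions about the alert shape (just a rule name and a host): collapse repeated alerts into a single counted record, so an analyst sees one meaningful line instead of a hundred taps on the glass.

```python
# Hypothetical alert deduplication sketch: group repeated alerts by
# (rule, host) and surface one record with a count, highest-volume first.
from collections import Counter

def deduplicate(alerts):
    """Collapse duplicate alerts into counted records, most frequent first."""
    counts = Counter((a["rule"], a["host"]) for a in alerts)
    return [
        {"rule": rule, "host": host, "count": n}
        for (rule, host), n in counts.most_common()
    ]

# Four raw alerts collapse into two actionable records.
raw = [
    {"rule": "psexec-lateral", "host": "srv-01"},
    {"rule": "psexec-lateral", "host": "srv-01"},
    {"rule": "psexec-lateral", "host": "srv-01"},
    {"rule": "new-admin-account", "host": "dc-01"},
]
print(deduplicate(raw))
```

Real pipelines add time windows and suppression rules on top, but the principle is the same: fewer, denser alerts mean fewer security goldfish.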
We've also been using really, really inefficient processes. Stuff that we've been doing for 20 years, since I fixed PCs when I was 15 years old. We get an alert, we go find a couple boxes, we go image those boxes, which could take hours, and in some cases, for enterprises, they have to fly someone out to a remote location to go do it. Then we take that hard drive back, then we index that hard drive, which takes another several hours. And then we look at it and we find out it's a false positive. So you just spent two days maybe getting all that information for a false positive, right? And then you just take that hard
drive and you stack it next to your desk and you got a nice little even stack of hard drives. And it's always first in last out. So it's just whatever the hard drive is on the top of the stack is the most important. And you never get to the bottom, right? I mean, this is very, very standard for most of the teams that I visit. And that methodology does not work anymore. It just does not work at scale. It doesn't even work for normal organizations that are smaller in size. And so the need for rapid response, this is why you've seen a lot more new technologies out there talking about rapid response, being able to
get from alert to a machine and get forensics really, really rapidly, okay? So you're seeing that convergence now in a lot of the technologies, a lot of the companies out there at RSA that are doing this. I'm seeing a lot of customers with a huge need for tool consolidation. When you're trying to train your team on 10 to 15 tools that are all super complex and require a lot of training, it's impossible. I do a lot of training for my customers at Tanium, and it's hours and hours of training for one tool, just one tool in the tool shed. So those tools need to be consolidated. You're seeing a lot of companies go to tool rationalization to figure out what they actually need to accomplish the job, and that's the progression
now. A couple of my colleagues actually do a talk on breach fatigue. And what does that mean? It basically means that when your job becomes too cumbersome and too overwhelming, you actually reduce your level of security and you can't effectively do your job. And these IR teams, especially if you've been in a breach, burn out quickly. Imagine, for any of you that have been in a breach scenario at your company before: everyone's hair is on fire. You have the board yelling at the executives, the executives yelling at their subordinates, and so on and so forth, all the way down to the boots-on-the-ground guy that's actually working the, you know, 12 to 14 hour days.
And they just get burned out. And that's why there's such a change in frequency of the IR teams that exist. So, you know, we need to combat that breach fatigue and make sure that our teams are actually working efficiently. Another big illusion is about automation, whether it's possible or not. We need a way to automate our processes, and it's really hard to automate your processes if you don't actually know what your processes are and whether they're good. So I do think automation is possible, but customers come to me and say, I want to be able to automate and quarantine machines immediately. Well, if you automate a quarantine off a bad alert, you might just quarantine your entire enterprise. Right? And people want to do this. I'm not even joking. I hear some oohs, but this is a legitimate thing that people have asked me. And it's very, very scary. So you need to have your processes down
pat before you start doing automation. And the last point there: just because your team is doing well, in an actual breach scenario in a large enterprise, you still have to get information from other teams. So same team, same company is not always the case, and in fact it's mostly not the case. You finally get your nugget of information that you really need, and then you gotta go send it to the network guys, and it just drops into a black hole. So to really be effective, all those teams need to be interacting and ingrained with each other. And that's kind of what I'm talking about on this next slide, which is just some suggestions for companies that are trying to do rapid response.
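That quarantine-the-whole-enterprise failure mode can be guarded with a couple of simple gates. This is just a sketch with made-up thresholds, not anyone's product logic: only let automation act when the alert confidence is high and the blast radius is small, otherwise hand it to a human.

```python
# Hypothetical guardrails for automated quarantine: require high alert
# confidence AND a small blast radius before acting without a human.
def should_auto_quarantine(alert_confidence, hosts_to_isolate, fleet_size,
                           min_confidence=0.9, max_blast_fraction=0.01):
    """Auto-act only on high-confidence, small-blast-radius alerts."""
    blast_fraction = len(hosts_to_isolate) / fleet_size
    return alert_confidence >= min_confidence and blast_fraction <= max_blast_fraction

# One host out of a 10,000-machine fleet on a high-confidence alert: automate.
print(should_auto_quarantine(0.95, ["host-17"], 10_000))  # True
# A noisy rule that matched 2,000 hosts: kick it to a human instead.
print(should_auto_quarantine(0.95, [f"h{i}" for i in range(2000)], 10_000))  # False
```

The thresholds themselves are exactly the kind of thing your playbooks should pin down before you turn automation on.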
You need to have solid playbooks, so solid processes, to reduce the security goldfish in the group. Your goal should be to shorten system recovery time. It's not necessarily the investigation every time. When you look at large organizations that just need to keep that machine in production, you need to shorten that system recovery time. Just face it: even though you might be an incident responder at heart and you just want to fix the problem and find the hacker, it's really about getting that system back up and running. Having solid playbooks will also reduce that knowledge drain. Because what happens is you have a good IR team for a good couple of years at an organization. They go through breach fatigue, they go through
a major breach, they all get exhausted, and they go somewhere else. And that whole team will go together. And now you lose all your knowledge from that team. By having good playbooks, you'll reduce that knowledge drain. Some of the metrics that I'm seeing companies go to are things like mean time to patch. How long does it take from a critical patch being released to the time you actually patch, and how do you enforce that internally? How long does it take to go from an alert to a triage event? And the reason I say this is because people come to me and say, hey, Jason, I have one of the best IR teams in the country right now. I'm like, oh, that's cool. So tell me why.
They're like, well, I don't know. These guys just do a lot of work for me. They do a bunch of investigations. I'm like, well, then I have no metrics to tell you how you're actually doing your job. So these things are super important. And it's super important for your teams to be better, because then you kind of know how quickly you can get stuff done. And time is of the essence nowadays. When intrusions are happening in minutes and seconds, you really have to work in minutes and hours. Hopefully someday minutes and seconds. Time to remediation and enforcement is another metric that people are doing. So, from the time you actually remediated the event to the time you actually enforced the policies from the problems
that existed that created the actual event. So that's a good metric to have. And then you should have these metrics per business unit, like which business units are the worst or the best. You should have that designated in these type of scenarios.
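Those metrics are easy to start computing from data you probably already have. Here's a sketch, with an event shape and field names I made up for illustration, of mean time to patch broken out per business unit:

```python
# Sketch: mean time to patch per business unit, from (released, applied)
# timestamps. The event shape here is an assumption for illustration.
from collections import defaultdict
from datetime import datetime

def mean_time_to_patch(events):
    """Average days from patch release to deployment, per business unit."""
    deltas = defaultdict(list)
    for e in events:
        days = (e["applied"] - e["released"]).total_seconds() / 86400
        deltas[e["business_unit"]].append(days)
    return {bu: sum(v) / len(v) for bu, v in deltas.items()}

events = [
    {"business_unit": "retail",  "released": datetime(2017, 1, 1), "applied": datetime(2017, 1, 11)},
    {"business_unit": "retail",  "released": datetime(2017, 1, 1), "applied": datetime(2017, 1, 21)},
    {"business_unit": "trading", "released": datetime(2017, 1, 1), "applied": datetime(2017, 1, 3)},
]
print(mean_time_to_patch(events))  # {'retail': 15.0, 'trading': 2.0}
```

The same grouping works for time-to-triage or time-to-remediation; the point is that once you record the timestamps, the per-business-unit comparison falls out for free.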
The other big area that I like to talk about is threat intelligence. And here are some illusions about threat intelligence that you guys have probably heard or that I experienced several times. Threat data is accurate and easily consumed. Right? Super easy. Information sharing is easy, so go do it. That's what the government tells you. Right? And attribution matters: I need to know which APT actor is attacking me. And then of course, machine learning will save all the kittens. That's my obligatory kitten reference. I mean, what's the reality here? Right? Some of you probably already know this, but the definition of threat intelligence is the application of threat data. In its basic form, the application of threat data. Most
organizations can collect all the threat intelligence they want. They can store it in huge data lakes, but if you can't apply it, it's completely meaningless to you, right? And not all threat data is equal. A good example, and I hate to get on US-CERT, because I have a lot of buddies that work there, but the latest Grizzly Steppe malware report had a lot of controversy around it because there were IPs in it that were just Tor exit nodes. And for the YARA rule that was attached, if you did just some simple Google dorking, you would find that that rule matches a web shell from a Russian criminal hacking forum. Right? And now the federal government is going to
say that, hey, if you find any of these things in your network, you're hacked by the Russians. Right? That's bad, right? I had a time when I was a supervisor at the FBI, too, where I argued with an assistant director. It was about an Iranian message that we were gonna send out to victim companies that we thought were being hacked by the Iranian hacking group. And I argued with them about a zero-byte hash. We were gonna send out the hash of a zero-byte file to all the institutions that we thought were being hacked by the Iranians, and that would basically hit on any file that is zero bytes in size. So we were gonna tell them that if you scan for this hash and find it, if
you're lucky, you're hacked by the Iranians. And I argued with them, big time. And we still put it out. We still put it out. Yeah. It was an executive decision; I didn't have any control over it. So, you know, those types of things happen. So you gotta really know the threat data that you're ingesting, okay? It's not perfect, government's not perfect, you know, but they are trying to help. There are a lot of legitimate people really trying to do the job and trying to get that data out. But you gotta know that it's not equal in every threat feed. It's also very fickle, right? If you look at the Verizon data breach report from last
year, they took a lot of threat feeds, combined them, and looked at the overlap. They found only about a 3% overlap across all those feeds. That surprised me, because I assumed the collections were coming from the same bad sources and bad sensors on the internet, so there'd be a lot more overlap. The reality is that those sensors are run by different companies in different regions, and the indicators change so rapidly that there's almost no overlap, because the infrastructure these hackers use changes in hours and days now. Indicators are only good for about seven days nowadays, thanks to all the polymorphic malware. So the data is very fickle.
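That overlap measurement is easy to reproduce on toy data. Here's a hedged sketch with three made-up feeds (real feeds hold thousands of indicators, but the set arithmetic is the same):

```python
from collections import Counter

# Three hypothetical indicator feeds (made-up IPs and domains).
feed_a = {"203.0.113.9", "198.51.100.7", "evil.example.com"}
feed_b = {"203.0.113.9", "192.0.2.44", "malware.example.net"}
feed_c = {"192.0.2.17", "203.0.113.200", "badcdn.example.org"}
feeds = [feed_a, feed_b, feed_c]

union = set().union(*feeds)                       # every indicator seen anywhere
counts = Counter(i for feed in feeds for i in feed)
shared = {i for i, n in counts.items() if n > 1}  # seen in two or more feeds

overlap_pct = 100 * len(shared) / len(union)
print(f"{len(shared)} of {len(union)} indicators shared -> {overlap_pct:.1f}% overlap")
# 1 of 8 indicators shared -> 12.5% overlap
```

Run the same calculation across real commercial and open-source feeds and, per the Verizon study, the shared fraction collapses to a few percent.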
You need to know what that data is and how to apply it, and if you can apply it at scale, you're lucky; you don't need to keep it forever, right? And then of course, application at scale is hard. Every customer I've met has a problem with this. How do you take in these massive feeds? I might spend all this money on a feed, but how do I actually search for these indicators? It's really, really hard to do at scale, and there aren't many technologies that can do it right now. The other really difficult question is who you share your
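As a toy illustration of the indicator-matching problem (the indicators, log lines, and tokenizer here are all made up): the per-event check itself is cheap with a hash set; what's hard at enterprise scale is running it continuously across every endpoint against constantly changing feeds.

```python
# Hypothetical indicator feed and log events, for illustration only.
indicators = {"203.0.113.9", "evil.example.com"}

log_lines = [
    "2017-02-13T08:01:02 conn 10.0.0.5 -> 203.0.113.9:443",
    "2017-02-13T08:01:03 dns query benign.example.com",
    "2017-02-13T08:01:04 dns query evil.example.com",
]

def iocs_in(line):
    # Crude tokenizer: split on whitespace and strip any :port suffix.
    # Each token is then an O(1) set-membership check against the feed.
    tokens = {tok.split(":")[0] for tok in line.split()}
    return tokens & indicators

hits = [line for line in log_lines if iocs_in(line)]
for h in hits:
    print("IOC hit:", h)
```

The lookup is trivial; the operational problem is shipping fresh indicator sets to thousands of hosts and retiring them before they go stale.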
data with. For the last two political administrations, sharing data has been the answer to cybersecurity, and there are a lot of problems with this. Who do you share your data with? Do you share it with the FBI? Back then, I loved when customers or victims shared data with me at the FBI. The issue is, if you share too much data and the FBI goes and arrests your employees because they have child porn on their laptops, that's a problem. You know what companies hate more than anything? The FBI arresting their employees. But the FBI has an obligation: if you give them a hard drive and they find other criminal activity on it, they have an obligation to investigate it, okay? So what
happened a couple of times when I was working cases is that we found data on a machine that was either child porn or part of some other investigation, and we had to pursue it, as we should. But that same victim company never called us again for anything. And they were getting hit by nation states all the time, and they had valuable data that we needed to conduct our investigations. The other question is, do you share it with your regulators? Well, the regulators ask you to share data willingly. And where does that end up? Usually in fines. Companies get fined by their regulators over the very data they were willingly
sharing. I mean, that's wrong. Do you share it with your neighbors? DHS has done a really good job of putting the ISACs together, and that's been working really well. But you also have organizations that believe that by sharing their threat data, they're giving an economic advantage to a neighbor. I don't believe that, but they genuinely believe it internally. So who do you share your data with? Unfortunately, I've actually seen a decline in data sharing over the last couple of years. It's a very hard problem; no one seems to have a full solution, though I think DHS is doing a good job in that space. The second-to-last piece here is that you need to strive for data asymmetry. What
I mean by that is, when I was in government, the government had massive data lakes of all sorts of data, and I don't think they were getting the value out of it. They had data scientists combing through that stuff at different classification levels, and the amount of data they'd collected was daunting. I'd say a lot of organizations are doing the same: they collect a ton of data, but what are they doing with it? Are you getting as much output as you're putting in? Probably not. So you need to strive for that data asymmetry: collect only the data you really need to get you that output. And the last piece is machine learning,
right? Machine learning is really domain-specific, and in the security space I think it's still in its infancy. A lot of what ML has solved so far has been mostly academic. I just read about a machine learning and AI system that beat a retired Air Force colonel in some flight simulations, and it all ran on a single Raspberry Pi. That's pretty cool, beating a retired Air Force colonel who's flown combat multiple times. So machine learning is there, it's real, it's happening, but it still hasn't really translated into the security space just yet. That'll be more a part of our future.
The last of those questions was, does attribution matter? Here are all the actors you could be worried about: hacktivists, criminal organizations, advanced criminal organizations, the insider threat, espionage, terrorism, and warfare components; those last three are more of the APT-type actors. Who should you be worried about? If you look at this graph from the Verizon data breach report, the majority of activity is actually still financial crime. So when you're deciding what to protect in your organization, worry about what's actually coming at you, which is mostly financially motivated crime, okay?
A couple more things quickly: compliance and best practices. This is always fun. The illusions here are that compliance saves money, that following these compliance and best-practice lists will keep us safe, and that my regulators are my Snapchat friends, right? These are, again, exaggerated, but I've heard all of them out of people's mouths. The reality is, and I'm actually a huge fan of compliance and best practices, but individually, generally speaking, most compliance metrics impose an undue cost and have minimal effect in most environments. You can ask anyone who has to go through these kinds of compliance audits. But if
you combine them, they are very useful, and I think you're seeing a lot more people following suit in combining these compliance metrics. Now, there's a lot of overlap. There's a reason the SANS Top 20, the NIST Cybersecurity Framework, and the ACSC (formerly ASD) Top 35 overlap so much and stay so basic: people are doing them wrong. You look at those compliance lists and think, okay, that's very, very simple and basic, but it's because these large organizations can't do it. They really are struggling with it. And so they're
going to remain basic until we can actually solve those problems. That's why I always see these big compliance matrices when I go to a customer: they have a massive spreadsheet that you couldn't even fit on this projector, and it just shows where their compliance sits. It's really daunting. The last piece is, who do you report to? A lot of people have come to me, in government and in private industry, saying, hey, I have this problem. I found this actor, this intrusion, this issue, but I don't know who in the government to report it to. And some of them will report it to one government agency thinking that the other government agencies are talking to each
other. A good example: I had a banking institution report something to one of their regulators when it was actually more relevant to an FBI investigation. They said, well, we gave it to that other organization, and I said, I don't even know who that is or what that organization does. So they were confused too, and I was in government; it's very confusing. I get a lot of people asking me where to take their data when they actually find something. All right, last thing; we're going to wrap this up. Hopefully you learned a few of the illusions and realities that I've experienced. A lot of what I've touched on very briefly is covered in more depth throughout the next couple of days here, so if you were interested in any of those topics, there are deeper, probably more technical,
presentations. One thing I wanted to talk about is the future: what it looks like for all of us, and some predictions I have. I definitely see a move toward increased application-layer attacks. I think we all kind of know this; if you read the Verizon data breach report every year, and some of the Cisco reports, most vulnerabilities target applications and the application layer. Until we actually figure out a good way to code things securely, we're going to have nothing but application attack problems, all right? OWASP is a great organization that has been fighting this for years, just trying to make you better coders. So if you're a developer and not part of that organization, please
start visiting your local OWASP chapter. We're going to see a lot more cloud host and hypervisor attacks. I haven't seen much evidence of it yet, but it's probably already happening, and it's going to increase. As people move to the cloud, there's going to be more and more reason for hackers to attack those hosts and hypervisors. We're of course going to see attacks on what I call the Internet of Corruptible Things, and that's going to continue. Look at the Mirai botnet that took down OVH, Krebs's website, and several others. There aren't many ISPs that can handle a terabit-per-second attack. We're getting to the point where even the top-tier ISPs are struggling to handle
that amount of traffic, and it's only going to get worse. I predict that we're going to have massive internet outages unlike anything we've seen, as long as we keep putting out unprotected IoT devices.
I think there's going to be a lot more automation around detection, investigation, and remediation within incident response. You're going to see a lot more machine learning techniques doing really good triage of events without a human having to intervene. One of the other areas that's very interesting to me is bio data compromise. After my identity and my fingerprints were stolen from OPM, I started thinking about it. What happens if my fingerprints get stolen? They were basically just a flat bitmap file with my fingerprint data in it, by the way. What if my retinal scan gets stolen, which is probably
stored in some flat file as well, unencrypted? What happens if my DNA gets stolen? If you guys use 23andMe... I think it's a government conspiracy to take everyone's DNA. That's just me being paranoid. Once all that bio data is stolen, how do you authenticate yourself anymore? How are we going to authenticate ourselves in the future? It's going to be very, very difficult to authenticate ourselves to a banking institution that doesn't have brick and mortar anymore. Mobile versus desktop: we all know this. Desktops have pretty much been tapering off while mobile explodes. I don't see the majority of threats coming from mobile devices yet, other than major DDoS attacks, but you can bet that
as financial transactions move more toward mobile, and as more data is stored on mobile, that's where the attacks are going to occur. And the last one, for the end of the presentation, is drone swarms, right? Who's not scared of a drone swarm? You guys saw the Super Bowl. So, I am a prepper.
And I was thinking about this the other day, so I was looking online at how to build my own EMP devices, and there are some really good articles about it. I'm super glad that our battery technology isn't great yet, because those things fall out of the sky after about 20 minutes. But when we have good battery technology and we have drone swarms, that's going to be a scary day, right? Oh, sound? Yeah, I like that, a sound wave. See? Another prepper in the room. Good, we've got a couple in here. And this last slide is the future: it's basically showing ransomware on every single IoT device out there. You've probably already seen it, but basically,
you know, you're going to have to pay $5 to use your toaster. That's our future. I know my mom's going to be calling me in about a year. And that's it. I really appreciate the time, guys. I'll be around the rest of the day, so come talk to me.
Any questions, or is that it? We'll do all questions on Peerlyst, so anything on that? On behalf of B-Sides, and on behalf of Fitbit, one of our sponsors, we want to give you a token of our appreciation. All right, thanks, guys. Thanks, guys, I'll be around. Thanks. So we have a short break; the schedule's been updated, and then we'll get started with the next talk. Thanks, everybody.