
Are we ready to kick this off? Yes. All right. So guys, first off, thank you. Thank you. Thank you for coming to B-Sides this year. As you can tell, we are not professional conference planners, so bear with us as we work through our stuff. But we're really, really excited to be able to bring this back in person again this year, because it's been, what, three years since we've actually had an in-person one, with COVID and everything just driving us crazy. So my name is Brad Bowers. I'm one of the organizers. Chris and I are just going to give you a little bit of an introduction, a little bit of logistics, and talk to you a little bit about
what we've got going on today. So first off, logistics. You are standing in track one right now. Track two is right on the other side, in that other section behind the other curtain set. If you look on the website, you can see which talks are in which track. We'll also put up a banner with information about which talks are going where, but checking online is by far going to be the easiest way. If you need anything, such as if you have a question, look for somebody with a red shirt or a red jacket. There are tons of us here; there are 60 volunteers helping out today. So they'll be able to
guide you and make sure that you have everything that you need. All right. So if you're facing the elevators, the ladies' room is to the left. If you follow that corridor down and around, there's a water fill station there. We also have water bottles and some other stuff coming in that will help out if you need something. To the right of the elevators is the men's room. Now, if you go all the way down to the right, you go into our other section. The first thing you're going to see when you walk into that room is our water rabbit, which Chris will
talk a little bit about in detail, but I think you're going to find it really, really interesting. As you go into that room, there's a whole bunch of stuff going on in there. The sponsor tables: please visit the sponsors as you get a chance. They make this possible. Seriously. The only way B-Sides happens in Philly is through the generosity of sponsors like Netrality, which has given us this facility to support it. So please treat it kindly. And all of our other sponsors have really helped allow us to pull this together. In that room, you'll also find a chill-out section. You'll find all of the villages. Spend some time, walk around, and get a chance to meet not only like-minded people, but see some of the stuff that they're talking about. Chris? All right. And everyone, thanks so much. I'm blown away by how many people we have here. I mean, just within the last three days, we've sold so many tickets. And that's thanks to Space Rogue, of course, who's our keynote. Applause for that, please.
We also owe a big, big thank you to Netrality. They allow us to have this fifth floor, 30,000 square feet that we carved up for you, so you can enjoy it and be with like-minded individuals. Some other logistics we left out: we're going to have an after party. We're going to do announcements at closing around 5:00 p.m. So if everyone's interested, please feel free. You don't have to drink. You can just come and enjoy and hang out with us. We've got two hours. We have a Def Con DJ who's going to be doing some DJ sets with us. And with that, we also have a little bit of uniqueness this year. We are going to have a music village starting at noon. This is going to teach you how to do synthesizer work and modular components, as well as how to make music the way you want, either with programming or with physical hardware. That's going to be run by our people. To find that information, go to Hacker Tracker. That is the official app. If you've been to Def Con or Black Hat, it is that same app. We have B-Sides Philly in there with all of our conference details as well as our map. We also have, right now, inside the village area, Syntax, who is our DJ. So if you just want to chill out
and listen to some music, we're good. One final thought about our music: because there's a lot of echo in our tracks, we're making it a silent disco. So after the keynote, you can go right behind you and there are going to be headphones. You basically show your badge, the green badge you have, check that in, and you get some headphones. This track one is the blue track. On the headphones themselves, there will be three channels: blue, red, and green. Blue is track one. Red is track two. Green is the disco; just chill out and hang out. Take that, enjoy it, and remember to return it when you're done, because not only is there a cost issue, but we also want to be able to share with everyone. We have a limited amount; we didn't expect this many beautiful people here, so this is wonderful. And I guess that's about it for me, so I want to do an introduction. Go ahead, Garrett. Here's Garrett, who also handled all of our badges, so big round of applause for that. Real quick: if you were an early registrant, you may not have received the big green PCB badge. You were entitled to one as an attendee. If you did not receive one, come find one of us in a red sweatshirt (we're the old heads here) and we'll get you sorted
out. All right, so I'm going to introduce our keynote speaker. First off, some of you will probably recognize him just because his face has been all around. Others of you will recognize the name, and if you don't, you need to go look him up online. So we're fortunate enough today to have someone who's kind of a legend in the cybersecurity and hacking community. And I won't say how far back that goes, because I don't want to age him, or me, or anyone else, but I grew up and learned about this awesome industry that we all work in, learned about how technology played together, learned about hardware hacking and how the internet was changing things for not only citizens but businesses and everything else. So let me introduce you to Space Rogue, Cris, who's going to be our keynote today. Again, an absolute legend. I think you're going to be really entertained, because he's also very dynamic in the way he speaks. So please help me in thanking and welcoming Cris, Space Rogue, to the stage. All right. How's everybody doing? Wow. Big room. Okay, my name is Cris Thomas, also known as Space Rogue. I've been in this industry, as he said, for about 30 years. I'm currently the global lead of policy and special initiatives at IBM's X-Force. Long, crazy road to get to IBM. Anyway, that's a rat hole already. I'm going to share
some highlights today with you from what we call the 2023 Cybersecurity State of the Union. The data I'm going to present today mostly comes from the IBM X-Force Threat Intelligence Index report, which is derived from billions of data points from network and endpoint devices, incident response engagements, vulnerability and exploit databases, and other sources, and from our Cost of a Data Breach report released earlier this year, which is built using data from 553 breaches in 16 different countries. If you want a copy of either report, go to ibm.com/services/security, or just Google IBM Threat Intelligence Index or IBM Cost of a Data Breach report and it should be the first link that pops up. All right, a little bit about X-Force. I'm not here to sell you anything today, but I do want to tell you we are a threat-centric team of hackers, responders, researchers, and analysts. Our portfolio includes offensive and defensive products and services fueled by the most comprehensive threat intelligence in the industry. We do everything from standard pen testing to adversary simulation. We do cyber range exercises and dark web research. We are hacker-driven offense, research-driven defense, and intel-driven protection. All right. As I share some of the highlights from our Threat Intelligence Index and Cost of a Data Breach reports, I will also note some updates to how we've conducted our research this year and how we're reporting our findings, to improve our reports and make them more actionable for our clients and for you. All right, so we split the report up this year into the actions attackers
took and what impact those actions had. Extortion was the most common impact we saw, showing up in 27% of incident response cases, and 30% of those were in manufacturing. Now, the types of extortion we've seen have evolved over the last 10 years or so, building from simple data encryption of single endpoints, through ransom-based DDoS attacks on network-wide infrastructure, to organizations suffering multiple extortion attempts with combined threats of DDoS, data leaks, and encryption. Now, phishing, of course, remains the leading infection vector, identified in 41% of incidents, followed by exploitation of public-facing applications. Macros have dropped out of favor now that Microsoft has turned off macros by default. Finally, thank you. Looking forward, we expect threat actors to
begin attempting to extort downstream victims. So basically, if you have a business partner that gets compromised and that partner has your data or some of your data, you may also get a ransomware demand even if your own systems were still secure. Now we see this really as the logical next step for threat actors to increase pressure on ransomware victims so that they can monetize their actions. Now we did a little refining this year in how we track attacks and we took a closer look at the specific actions attackers took on victim networks known in the report as actions on objectives. With almost a quarter of all incidents we reported last year, the deployment of backdoors
was the top action on objective at 21%, followed by ransomware and business email compromise; then malicious documents and spam campaigns; and installation of various tools came in at the bottom, at 5% of cases. Now, one contributing factor we found was a spike in Emotet. Emotet is a Trojan-type malware, usually spread via spam, and it's been around for 10 years or so in one form or another. It uses sandbox detection to evade researchers and has a remote command and control infrastructure. It originally targeted bank account information, but now it's commonly used as a method to install other malware families, basically as a dropper. This spike in Emotet deployments really underscores the enduring threat that ransomware poses, which was a consistent thread in our incident response cases last year. Asia Pacific remained
the top spot as the most attacked region, with 31% of all incidents that we responded to, up 5% over the previous year. Asia Pacific, specifically Japan, was the epicenter of where we saw the most Emotet deployments last year, which probably accounts for that region's increase in the number of attacks recorded. Now, on the vulnerability side, 26% of the vulnerabilities that X-Force tracked in 2022 had known exploits. According to our database, which was started way back in the early 90s by the original ISS X-Force and lists over a quarter million vulnerabilities, the proportion of vulnerabilities with known exploits has been dropping in recent years, currently standing at about 78,000 vulnerabilities, or 34%, with known exploits. Only 3% had zero days. This means that attackers are continuing to use older vulnerabilities and exploits and not relying on zero days as much, which really shows the benefit of a well-maintained patch management process. This is the second year we've had data in the report analyzing phishing kits, and we found a significant drop in attackers' interest in credit card data, down 52% this year. The most sought-after information was PII, personally identifiable information: names, addresses, email addresses. That can be used by threat actors for additional phishing operations and other malicious activities, or sold on the dark web for profit. They're still looking to monetize that information; they've just moved from credit cards to PII. Another change we made
to our data collection was adopting the MITRE ATT&CK initial access techniques to track how threat actors gain access to victim systems. This aligns our data a lot more closely with broader industry reporting and helps put our findings into context with other threat intelligence reports and sources. Now, we find phishing still remains the number one infection vector, no big surprise, identified in 41% of incidents, followed by exploitation of public-facing applications at 26%. Due to our shift in tracking, we're also able to see the breakdown of phishing types: weaponized attachment, malicious link, or spear phishing via a service, with attachments being the top type, used in 62% of cases. Thread hijacking, where a threat actor inserts themselves into an existing email thread, is a long-standing tactic. We've seen it for years and years, but it's getting more effective at enticing victims to engage. We saw a marked rise in it last year, a trend we theorize is driven in large part by Emotet spamming. Emotet, Qakbot, and IcedID, three malware families, all made heavy use of thread hijacking last year. All right, you may have noticed on a previous slide that I mentioned that ransomware attacks were actually down 4%, from 21% to 17%, which by no means indicates that the ransomware threat is letting up. Our data really shows that the time from gaining initial access, to engaging in an interactive session, to carrying out the actual ransomware attack decreased significantly. Ransomware operators have become much more efficient at gaining privileged access to Active Directory and deploying their ransomware.
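The initial-access tracking described above, mapping each incident to a MITRE ATT&CK initial-access technique and then ranking techniques by frequency, can be sketched in a few lines. The technique IDs below are real ATT&CK identifiers, but the incident records are made up purely for illustration:

```python
from collections import Counter

# Real MITRE ATT&CK initial-access technique IDs (names abbreviated).
TECHNIQUES = {
    "T1566": "Phishing",
    "T1190": "Exploit Public-Facing Application",
    "T1078": "Valid Accounts",
}

# Hypothetical incident records, each tagged with its initial-access technique.
incidents = [
    {"id": 101, "initial_access": "T1566"},
    {"id": 102, "initial_access": "T1190"},
    {"id": 103, "initial_access": "T1566"},
    {"id": 104, "initial_access": "T1078"},
]

# Tally and rank, the way a report would surface "phishing at 41%".
counts = Counter(i["initial_access"] for i in incidents)
for tid, n in counts.most_common():
    print(f"{tid} ({TECHNIQUES[tid]}): {100 * n / len(incidents):.0f}%")
```

Tagging with shared technique IDs like this is what lets one team's numbers be compared against other threat intelligence sources, which is the point the speaker makes about aligning with industry reporting.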
Last year we ran into 19 different ransomware variants, up from 16 the year before. LockBit was about 17%, up from 7%, followed by Phobos and WannaCry. I should mention that we saw several cases of LockBit and WannaCry that were leftovers from previous infections that happened several months or even years prior and had never been properly cleaned up. Which means that when you engage an incident response team to clean up your infection, be extra diligent when they're conducting their remediation operations. Make sure that your vendor is very thorough when they're doing the cleanup. All right, so what does all this mean? Let's take a look. Here's some data direct from our
Cost of a Data Breach report. The trend, unfortunately, is that a data compromise continues to get more and more expensive, increasing 15% in the last three years and reaching an all-time high of $4.45 million this year, which comes out to about $165 per record. I should note that these numbers don't really include the cost of the ransom itself. If there was a ransom in the incident and it was paid, then those numbers only represent the cost of lost business and the remediation efforts. Thankfully, the mean time to identify a breach, which we call MTTI, and the mean time to contain the breach, which we call MTTC, both stayed roughly the same this year versus last. We're down to 204 days on average to identify a breach, then another 70 days to contain it. Which basically means that if your organization is compromised, it's going to stay that way for about nine months or so. Oh, there's a joke there that I totally missed, isn't there? Anyway, I've given this talk a couple of times, and that's the first time that's popped into my head. The largest percentage of compromises, 39%, involved data stored across multiple environments, followed by 27% involving data stored in the public cloud. The number of breaches occurring across multiple environments surpassed the combined 34% of breaches occurring in only private cloud or on-premises environments. All right, on this slide, if it's listed in red, the costs increase. If it's blue,
then the industry's costs actually decrease. Healthcare has dominated for 13 years in a row, this year jumping up $800,000. The top three industries remained the same as last year: healthcare, finance, and pharmaceuticals. Energy and industrial breach costs increased this year, pushing them higher up the list. Transportation had the second largest increase year over year, with an average breach cost of about $600. Here we have rankings by country. The United States is number one. The US, the Middle East, and Canada all maintained their top spots. Latin America saw the sharpest increase, with an average breach cost increase of $890,000, and the UK had the sharpest decrease, from $5.05 million to $4.2 million. Phishing and compromised credentials were responsible for about 31% of breaches combined, with phishing moving into the lead spot by a small margin over stolen credentials, which was our most common vector the previous year. Cloud misconfiguration was identified as the initial vector for 11% of attacks, followed by business email compromise at 9%. Now, this year for the first time, our report examined both zero-day, or unknown, vulnerabilities as well as known, unpatched vulnerabilities as the source of the data breach, and found that more than 5% of the breaches studied originated from known vulnerabilities that had not yet had their patches applied. So again, an effective vulnerability management program can really assist your organization here. Now, despite all the hype we see in the media about malicious insider attacks, they're fairly uncommon. They only occurred in about 6% of attacks.
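As a quick sanity check on a couple of the headline figures quoted earlier (the $4.45 million average breach cost at roughly $165 per record, and the 204-day identify plus 70-day contain lifecycle), the arithmetic works out like this:

```python
# Figures quoted from the Cost of a Data Breach discussion above.
avg_breach_cost = 4_450_000  # average total cost of a breach, USD
cost_per_record = 165        # average cost per compromised record, USD

# Implied number of records in an average breach:
records = avg_breach_cost / cost_per_record
print(round(records))  # about 26,970 records

# Breach lifecycle: mean time to identify (MTTI) plus mean time to contain (MTTC).
mtti_days, mttc_days = 204, 70
lifecycle_days = mtti_days + mttc_days
print(lifecycle_days, "days, roughly", round(lifecycle_days / 30.4), "months")
```

That 274-day total is where the "compromised for about nine months" remark comes from.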
Unfortunately, malicious insider attacks were also the costliest, at an average of $4.9 million, which is 9.6% higher than the global average cost of $4.45 million for a data breach. Phishing was the second most expensive at $4.76 million, and breaches attributed to system error were the least costly, at an average of $3.96 million, and the least common, at 5%. So I think it's a good time to talk about zero trust. Now, you can't go to your local security vendor, or back into the vendor hall, and ask anyone to sell you zero trust. Well, actually, I guess you can, because you'll probably find somebody to take your money. Zero trust isn't really a product. It's a philosophy. It's a mindset. It's a strategy, a framework, an approach. It basically means treating any network as already compromised and assuming that threats, both external and internal, are always there. This is how you make things difficult for both external and internal actors. Now, even as the global cost of a data breach has increased, our survey participants reported divided perspectives on increasing security investments after an incident. 51% of respondents indicated they planned additional security spending after the breach, but 49% did not. Those that are planning to spend more are going to target planning and employee training as where most of the money goes. Now, I'm going to personally disagree with this a little bit here. I'm a big proponent of training. Awareness is extremely important, but you're
never going to get your numbers down to 0% with training alone. Someone is going to make a mistake and accidentally click on the wrong thing or open the wrong attachment or something. Personally, I think the money is better spent on testing and detection. Testing is great, if you actually go back and fix the problems that were found. All too often, when we do our engagements and run our offensive security tests, we hand the company back a list of issues, and then they don't do anything about it. In which case, why did they even bother doing the testing? If you're not going to fix anything, just do the training. But if you're committed to fixing the issues that are found, then in my view the money is better spent on testing and detection. All right, let's take a look at what the data says we should do about all these issues. Security AI, the big buzzword that everybody's talking about, and automation are very important investments for reducing costs and minimizing time to identify and contain breaches. Organizations that used AI and automation extensively within their security plans experienced, on average, a 108-day shorter time to identify and contain a breach, which, if you remember from our earlier slide, is about half the total average time. These organizations also reported lower data breach costs, to the tune of a $1.7 million
reduction. I'll talk more about AI in a minute. Let's take a look at some more cost savings. Organizations with an incident response team and a regularly tested IR plan also saved over those organizations with no team or no up-to-date plan, about $1.5 million. They also identified breaches 50 days faster than those organizations with no team and no plan. Use of DevSecOps also saw large cost savings. Organizations with highly integrated security testing in the software development process saved $1.6 million compared to those organizations that had low adoption of DevSecOps. Compared to other cost-mitigating factors, DevSecOps demonstrated the largest cost savings. So if you don't currently have a DevSecOps program, you may want to put one together. How am I doing on time? Oh, I'm fast. Okay. I'll just try to slow down. I hope I'm not speaking too fast for you all. I'm kind of burning through this, trying to get you all back on schedule. All right: 40% of breaches were identified by a benign third party or outsider, whereas 33% were identified by internal teams and tools, and over a quarter, 27%, of breaches were disclosed by the attacker as part of the ransomware attack. Attacks disclosed by attackers also cost significantly more. Attacker-disclosed breaches had an average cost of $5.23 million, which was about 20% more than the average cost of breaches identified by internal security teams. Additionally, breaches disclosed by attackers cost about 16% more than the average of all breaches for the
year. Now, compromises identified by an organization's own security teams and tools were significantly less expensive, costing nearly a million dollars less than incidents disclosed by the attacker. With only 33% of compromises being identified by a company's own internal security teams, I have to wonder why. You have people dedicated to finding breaches. Are they just that bad? Or are they really just stretched too thin, without adequate funding to conduct proper training and threat hunting exercises? Considering the cost savings and time savings of having a breach identified by your own internal team versus a third party, or worse, an attacker, it would seem to me that moving money around in your budget so that testing and detection get a bit more funding would make a little bit of financial sense. All right, so what happens if you call the cops? Normally, in most situations, don't call the cops. But in the event of a data breach, it will cost you less and be remediated quicker if you call law enforcement. Only 63% of respondents said that they called law enforcement, and those that said they didn't call paid almost 10% more to clean things up, and it added an extra 33 days to their breach life cycle. I don't have a slide on it, but I wanted to mention that there are minimal cost savings if you pay the ransom. Now, paying the ransom can be a questionable and risky strategy. In most cases, you usually get your data or systems back unencrypted, but there's no guarantee
that the criminals will cooperate or that they haven't also made a copy of your data. And there's no way of knowing where the money goes. Sure, it may just line the pockets of the criminals, but it may also go to help fund repressive regimes or terrorism or worse. So: organizations that paid the ransom saved a small amount in total cost, $5.06 million compared to $5.17 million, a cost difference of only $110,000, just 2.2%. However, that savings does not include the cost of the ransom itself. Given the high cost of most ransomware demands, organizations that paid the ransom likely ended up spending more overall than those that didn't pay. Our data shows that paying a ransom has become increasingly less advantageous overall.
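The "small amount" of savings from paying can be checked directly from the two averages just quoted, and it's worth remembering the ransom itself is excluded from both:

```python
# Average total breach costs quoted above, USD (ransom payment itself excluded).
cost_if_paid = 5_060_000      # organizations that paid the ransom
cost_if_not_paid = 5_170_000  # organizations that did not pay

savings = cost_if_not_paid - cost_if_paid
pct_savings = 100 * savings / cost_if_paid
print(savings)               # 110000
print(round(pct_savings, 1)) # about 2.2 percent, before adding the ransom itself
```

So any ransom demand larger than about $110,000, which most are, wipes out the difference entirely, which is the speaker's point.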
There was a time, and I don't like to admit it, when the FBI actually recommended paying the ransom, because it was quick, you got your data back, all done. They don't recommend it anymore. It's pretty much up to each individual organization. All right, let's talk a little bit about the use of AI in cybersecurity, both in defense and in offense. Give me a second. I'm yelling too much here. Y'all can hear me in the back, right? Louder. Okay, I can do louder. I'll get out my drill sergeant voice. There's been a lot of chatter about how AI is going to help transform cybersecurity, especially for the attacker. One such possibility is the ability of criminals to use AI to help them develop polymorphic malware. Now, polymorphic malware combines a mutation engine with self-propagating code to change its appearance continuously and rapidly morph its code. That makes it very difficult for signature-based anti-malware to detect, because the signature is always changing. Take a look at Black Mamba, a proof-of-concept malware that was created by security researchers using ChatGPT. The malware can receive new, unique code from the ChatGPT service rather than making a connection to an attacker-owned command and control system. Sounds pretty bad. However, this highlights a number of ways in which the threat from generative AI can miss the mark when it comes to actual security impact. The first thing to consider here is that polymorphic malware is not new, and the Black Mamba proof of concept
does provide detection opportunities for defenders, in the form of code chunks being passed across the network to the malware. Also, some sources claim that the malware would be harder to detect because it executes in memory and calls out to a high-reputation C2 site that most enterprises won't block by default. The reality is that the security community has dealt with these types of threats for years. We've seen malware call out to trusted sites like Microsoft, Azure, AWS, and other popular hosting sites. And we've dealt with a lot of malware that only resides in memory without writing to disk, so-called fileless malware, very popular a couple of years ago. These concerns are not new, and defenders have capabilities to address them. We have EDR solutions that can monitor execution of malware in memory. We have OS event logging that can provide traceability for what's occurred on a host. Combine endpoint logs with, say, Sigma rules, and we can detect suspicious execution activity even when AV and EDR miss a detection. This is the core of defense in depth: not depending on just one strategy, like file signature detection. The fear here is that publicly available machine learning and AI products are lowering the barrier to entry for malware development. Previously, an offensive malware creator had to be knowledgeable in a well-suited development language, evasive coding techniques, reverse engineering, and debugging, and have strong research abilities, among a host of other skills. Now threat actors are able to leverage AI prompts to assist with these functions. We're going to
see an increase of custom malware in the wild and a reduction in development time from idea to proof of concept. In addition, AI will be used to rewrite existing malware to make it more difficult to detect and to evade existing signatures and detections. AI can also be used to port code from higher-level languages to lower-level ones, again making it easier for the malware to evade existing detections. But there are a number of caveats to this. First, there are guardrails in the form of content filters that disallow tools such as ChatGPT from responding to overly malicious prompts. In the example here, researchers were able to eventually get the tool to create malware for them, but not before being denied the outright request to write code for shellcode injection into a running process. The researchers did have to negotiate their way around what the tool would not accept and craft their requests accordingly. And as attackers get more experienced at this, the generative AI maintainers are going to get more experienced at defending against them. They're going to make it more and more difficult for attackers to use their tools for malicious purposes. Relatedly, the existence of these tools does not mean that no coding or technical experience is required. The most successful interactions with generative AI tools for this purpose will still be based on an understanding of how the malware needs to function. And finally, just because a less skilled actor is now empowered to create malware does not mean that they are empowered to
carry out the whole attack chain. Depending on the exact nature of the attack, many more steps are required to successfully have an impact. So really, AI is just one more tool. It's not a magical be-all, end-all for cybersecurity. Five minutes. All right. I think I can do it. That's good. I think I'm right on schedule. We're also tracking an uptick in security researchers leveraging generative AI. Oh, I just did that, didn't I? Oh, yeah. To develop social engineering prompts and phishing lures. Now, less than 10 years ago, it wasn't uncommon to see criminals seeking out content creators and editors in languages other than their own, or those of their target population. Certain underground actors would specialize in making realistic templates.
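To illustrate the signature-evasion point from the polymorphic malware discussion a moment ago: two byte strings that decode to the same "logic" under different single-byte XOR keys produce completely different hash-based signatures, which is exactly why a static file signature keeps missing a mutating payload. The payloads and keys below are made up, and the "signature" is just a plain SHA-256 hash standing in for what naive AV might store:

```python
import hashlib

def signature(payload: bytes) -> str:
    """A naive hash-based 'signature' of the kind static AV might store."""
    return hashlib.sha256(payload).hexdigest()

def xor_encode(data: bytes, key: int) -> bytes:
    """Trivial stand-in for a polymorphic engine's per-sample encoding."""
    return bytes(b ^ key for b in data)

logic = b"identical harmless stand-in logic"
variant_a = xor_encode(logic, 0x41)  # sample 1, encoded with key 0x41
variant_b = xor_encode(logic, 0x42)  # sample 2, encoded with key 0x42

# Same behavior once decoded, but the on-disk signatures never match:
assert xor_encode(variant_a, 0x41) == xor_encode(variant_b, 0x42) == logic
print(signature(variant_a) == signature(variant_b))  # False
```

This is why the speaker's defense-in-depth point matters: behavior-based detection (EDR, event logs, Sigma rules) keys on what the code does rather than what its bytes hash to.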
One of the very simple benefits of generative AI for attackers is the ability to create clean lure material, and to do so at scale. Now, at X-Force we have Snow, Stephanie Carruthers. She's known as our Chief People Hacker. She's in charge of our social engineering efforts. She tested generative AI versus human-generated phishing lures in a series of tests to determine which development technique was the most successful when phishing clients for security testing. Now, humans emerged victorious, but only by the narrowest of margins, and AI-generated phishing lures were reported as suspicious at a higher rate. Humans may still have the upper hand, for the moment, when it comes to emotional
manipulation and crafting persuasive emails. The emergence of AI in phishing signals a pivotal moment in social engineering attacks. I'm going to give you some key recommendations for businesses and consumers to stay prepared. When in doubt, call the sender. If you're questioning whether an email is legitimate, pick up the phone and call the sender if possible. It may not always be practical, but if the email is internal and asking the receiver to transfer a large sum of money right away, a phone call might be worth the effort. Consider choosing a safe word, I know that might sound risky, with your executives or even family members, that you can use in the case of a phishing or AI-generated phone scam. Abandon the grammar stereotype. Dispel the myth that phishing emails are riddled with bad grammar and spelling errors. AI-driven phishing attempts are increasingly sophisticated, often demonstrating grammatical correctness. That's why it's imperative to re-educate our employees and emphasize that grammatical errors are no longer the primary red flag. Revamp your social engineering program. That includes bringing techniques like vishing into the training program. It's simple to execute and often highly effective. At X-Force Red we released a report, I forget the name, but we found that targeted phishing campaigns that added phone calls were three times more effective than those that didn't. Strengthen your identity and access management controls. Employee training, again, is only going to take you so far.
You have to constantly adapt and innovate. The rapid evolution of AI means that cybercriminals will continue to refine their tactics. And I'm actually going to skip this slide. What does it say? It talks about AI and ML being no silver bullet for defense. I'd rather get to this slide: what should you take away from my talk today? First, attackers are going to attack. That's what they do. They're going to find your weaknesses, and no matter how much security awareness training you put your employees through, they will find a way in. They will probably come in through email via a phishing attempt, and once they get in, the goal is to make money one way or the other, most
likely via extortion or harvesting PII, and they will likely use some form of ransomware. Now, while we're getting better at identifying and remediating breaches, it is coming with increased costs. An efficient vulnerability management program can help reduce your attack surface from both known vulnerabilities and zero days. Implementing a defense in depth strategy along with a zero trust infrastructure can help limit the damage from external and internal threats, as well as reduce the total time needed to identify, contain, and remediate the breach. The numbers from our data breach report prove that having a dedicated security team, having a well-tested IR plan, integrating DevSecOps, and leveraging AI can actually save you money in the event of a data breach. And while it may not be mentioned in the report and backed
up by hard data, these things will also greatly enhance your security posture, reducing your risk of a breach in the first place. All right, last slide I have today. I just want to talk about my book for one second: Space Rogue: How the Hackers Known as L0pht Changed the World, available at Iffy Books, one of the sponsors here today of B-Sides Philadelphia. And that's all I have. Thank you very much. Enjoy your day here.