
Good morning, everybody. Welcome back to BSides Las Vegas, day two, ground floor. This talk is titled "Thinking Outside the SOC: Structured Analytical Techniques for the Overloaded Cyber Analyst," and your speakers are Alina Tai and Haley Beam. A few announcements before we begin. We'd like to thank our sponsors, especially diamond sponsors Adobe and Aikido Security and our gold sponsors Formal and Profit. It's their support, along with our other sponsors, donors, and volunteers, that makes this event possible. These talks are being streamed live, and as a courtesy to our speakers and audience, we ask that you check to make sure your cell phones are set to silent. If you have a question, you'll be using
the audience microphone that I'm holding right here in my hand. I'll bring it around so that YouTube can also hear you. As a reminder, the BSides photo policy prohibits taking pictures without explicit permission. These talks are all being recorded and will be available on YouTube in the future. With that, let's get started. Please welcome your speakers. [applause] All right, good morning everyone. Thank you for being here. My name is Haley Beam. This is Alina Tai. We're both from Washington, DC. We work in incident response and cyber threat intelligence. We met in grad school, where we studied intelligence studies. So today we're excited to share with you one of our favorite techniques that we learned
together: structured analytical techniques, and how we can utilize them in the SOC. So to start off, any SOC analysts in the audience? Cool, we have a couple. Well, today we're going to cover what structured analytical techniques are, also known as SATs; we'll probably abbreviate that. Then we'll discuss how we can use these to make better decisions, see how we can apply them to current CTI frameworks, and then walk through some scenarios in real-world situations. So to get started, let's put ourselves in the situation. It's 2 AM. The SOC phone is ringing and we have an alert: there's PowerShell executing a suspicious
DLL. Then broader investigation reveals credential dumping, lateral movement, and possible data exfiltration to an external IP. It's time to wake up the rest of the team and broaden the investigation, and everyone's gathering for a kickoff incident call. So we have our intrusion overview, we have our team of analysts, and we also have leadership on the phone asking: is the threat contained? What gaps exist in our alerting? What was the timeline of the attack? And who was behind it? So we have our threat intel analyst, our DFIR analyst, and our threat hunter, all with their own analytical burdens. They're all acting under pressure and with limited amounts of data. So the
intel analyst has to deliver that fast attribution, the DFIR analyst is overwhelmed by incomplete artifacts trying to piece together a timeline, and our threat hunter is unclear which way to go, where to look for more data, and how to find the gaps in the alerts. So I want to talk about cognitive biases. Does anyone know cognitive biases? What are they? Okay, I see a few hands. So cognitive biases are mental shortcuts and systematic errors in our thinking that may affect our decisions or our judgment. There are around 151 cognitive biases, but we're just going to list a few right now; there are like nine on this slide. So why
do we care about them? In cybersecurity, the issue is that those same shortcuts can lead us to the wrong conclusions when accuracy and objectivity matter but we're in a rush. Biases are dangerous because they are unconscious and automatic, so they can affect our analytical work without us noticing. I'm going to go over a few examples. Take confirmation bias: when we're doing analysis, we may be seeking evidence that supports something we already believe. Or anchoring bias: we're over-relying on the first piece of information we found, and we keep going back to it. Optimism bias is when you're overestimating
the positive outcomes. Or groupthink: when a bunch of people with the same background have done the same work for years, we tend to form the same type of thinking. We end up eliminating scenarios we're forgetting about just because we're all wired to think one way. So let's go back to our three analysts at the 2 AM incident. Say our CTI analyst gets an early report with some IOCs and looks at it as, oh, this is overlapping with APT29. Boom. This is anchoring bias. So they're building
their entire attribution assessment based on this assumption, unconsciously dismissing evidence that may not fit the APT29 narrative. So maybe this intel analyst is exhibiting anchoring bias and confirmation bias, maybe with a little halo effect thrown in: it looks like APT29 in this one area, so it must be APT29 everywhere. Our threat hunter is also dealing with biases. Say they're hunting for techniques they saw in last month's campaign, so they're overlooking new patterns, or they're focusing on some high-value systems and potentially missing the broader compromise. Some of the biases that the threat hunter
might be dealing with in this scenario are the availability heuristic, groupthink, or maybe the identifiable victim bias. What about our DFIR analyst? They're also dealing with biases; I'll pick another example. Say they're dealing with optimism bias and illusory correlation. When the C2 traffic goes quiet, maybe they're assuming the threat is contained, or when they don't find the browser artifacts, they're assuming no exfiltration has occurred. They're seeing patterns where none exist, and maybe they declare the incident resolved prematurely. With biases, everybody's dealing with this; nobody
is excluded. We're not talking about incompetent analysts; we're talking about smart people, experienced professionals with years of experience. But because we have experience doing all this analytical work, our brains are wired to think fast; we have other tasks to do. Going back to basics: we are all human, and we all have biases. That's why we're talking about biases today: so we can understand these biases and our thinking patterns, and build processes to counteract them. All right. So this is where we bring in structured analytical techniques. These are systematic, repeatable methods used to reduce those cognitive biases and then
ways to improve our analytic rigor in intelligence analysis. They challenge our assumptions, make sure we have evidence, promote critical thinking, and explore alternative outcomes. These come from the formal practice of intelligence analysis. They started way back with the grandfather of intelligence analysis, Sherman Kent, who called for structured reasoning and objectivity in intelligence analysis, ways to reduce bias and provide clarity around estimative language. Later on, folks like Richards Heuer and others formalized these in the early 2000s into the techniques that we'll walk through today. They really gained prominence after intelligence failures like 9/11 or the Iraqi weapons of mass
destruction assessment. A lot of cognitive biases, like groupthink, were at play in those failures, and these structured analytical techniques were put in place to help counteract that. So there are a ton of techniques; we could spend the next two days workshopping them. But today we're just going to walk through four: the key assumptions check, indicators and warnings, alternative futures analysis, and then my favorite, the analysis of competing hypotheses. Okay, so we're going to start with the easiest one out there, the key assumptions check. So pretty much, you have an assumption and you check it. When you're building threat intelligence analysis, how much time do you spend examining what you're assuming versus
what you actually know? Probably, like most analysts, not much; we're looking for a quick answer. So the solution here is the key assumptions check. Say we're assuming our adversaries are behaving rationally, we're assuming our tools are accurately identifying threats, or maybe we're assuming infrastructure overlaps indicate the same threat actors. The goal here is to identify and challenge the underlying beliefs that influence our analysis or decisions. This involves explicitly stating assumptions, then questioning their validity and impact. The key assumptions check mainly helps with confirmation bias, but also with other cognitive biases. The
process is pretty straightforward; there are only three steps. First, you start with your assumptions: you put out a list of them. What might be true about your analysis? What's correct? You're assuming your adversary has certain capabilities, certain motivations, or that they like doing things a certain way; that's an assumption. Next, you examine each assumption. How confident are you about this assumption? Do you have evidence to support it? How critical is it to your analysis? Without this assumption, is your analysis still valid? Lastly, you test your assumptions. What evidence could
invalidate them? Are there other similar assumptions out there? Maybe you can bring somebody else in. So it's only a three-step process, and once you do it often enough, it becomes a mental habit. I'm just going to go over a simple example. In the first step, we list our assumptions; then we examine them, seek evidence to support them, and test them. Say we have four assumptions. First: the tool overlap means this is the same threat actor. Second: APT29 doesn't share the same tools with other
groups. Another: our tool identification is accurate and it's not a false positive. You take each of these assumptions, look at how critical it is to your overall analysis, and then assign your confidence: low, medium, or high. And based on all these assumptions, you test each of them and see if your confidence is actually as accurate as you first predicted. The next one I'm going to talk about is indicators and warnings. As the name says, I feel like it's pretty straightforward, but in traditional intelligence, indicators and warnings was the process of detecting and reporting time-sensitive
information about foreign developments that may signal hostile actions or intentions. Pretty much, it's providing advance notice of potential threats, suggesting a potential enemy capability or intention. Indicators and warnings help a lot with risk assessment, and they shouldn't be used for definitive predictions. So in cyber, how are we going to use this? We already collect and analyze information from a broad set of sources. To develop our indicators, everybody knows about IOCs and IOBs; these serve as signals, offering insight into potential or active cyber threats that may be targeting, say, a certain industry, for which we want to create
detections. So the goal of this SAT is to facilitate the prediction, early detection, and warning of cyber incidents, tailored to your environment or your scenario. An example: you're researching the dark web for threats. You don't want to look everywhere; you have certain goals. You want to look for threats that are relevant to your company, the assets you're watching, or certain vulnerabilities. The goal, for me, is pretty much to waste less of my time; I have other stuff to do. And the process, again, is only four steps. First, you define the event: what type of threat scenario are you
monitoring for? Second, you identify the indicators. There are three kinds: direct, indirect, and environmental. Direct indicators are activities directly related to the threat, indirect indicators are supporting activities or preconditions, and environmental indicators are things that may trigger the threat. The third step is to establish detection methods: where would you observe this indicator, and what data sources or collection methods are needed? And the last one is to set warning thresholds: what combination of indicator triggers raises concern, and how would you escalate based on indicator severity? All right. So we'll also talk through alternative futures analysis. If you like overthinking what could happen
next, this is the technique for you. There are also four steps here, a systematic method to explore plausible scenarios that could happen in the future. Step one is defining our focus: what's the question we're trying to figure out, what's our uncertainty, and what's the time frame around it? Step two is identifying key drivers: what are the critical variables that could shape the outcome, and what factors have high impact but uncertain direction? In step three, we're generating the scenarios: coming up with three to four plausible scenarios, making sure they're mutually exclusive but collectively exhaustive. Each scenario should also be internally
consistent and as detailed as possible. In step four, we're analyzing the implications. We're determining the consequences of each scenario and asking how our response changes in each one. This isn't going to tell us which scenario is going to play out, but it tells us: if these three things happen, this might be the scenario playing out, and since we've thought it through, we understand how to react. All right, the last one is my favorite, the analysis of competing hypotheses. So going back to our intrusion: we have PowerShell executing a suspicious DLL. Are we saying that's
suspicious because of our anchoring bias, or could it be legitimate system admin activity or legacy software? We can put together all these hypotheses and then pull together evidence to help determine which one is most consistent and which is least consistent. So first, we generate all of our hypotheses, making sure we have some obvious ones and some unlikely ones too, even a null hypothesis thrown in just for comparison. Then we collect evidence, gathering everything that relates to the scenario. We put that on a matrix, and then we score each piece, basically from strongly supports to strongly inconsistent, with a neutral
option too. Then you can start to see which of your hypotheses are least consistent and which of your pieces of evidence are the most valuable. This is also really important because it exposes where you might have evidence gaps, and that can help target your investigation of where to focus next. Okay, in part two we're going to walk through how we can integrate these with current CTI frameworks: the intel cycle, MITRE ATT&CK, and the diamond model. >> Awesome. So let's talk about my favorite framework now, the intelligence cycle. Let's look at how the SATs enhance each phase of the intelligence cycle. For those who
are not in intel, I'll give a brief definition of what the intel cycle is. It's a structured process used to transform raw data into actionable intelligence. It involves a series of interconnected steps, and it's not a one-time process but a repeating loop where each phase informs the next. It usually has six steps; as you can see on the screen, there are only five here. The last step is feedback, but for the sake of this presentation, we're sticking to five right now. The first step is planning and direction. In this stage, on the intel side, we're developing our priority intelligence requirements.
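As a rough illustration, the key assumptions check we walked through earlier can be tracked in a few lines of Python. The assumptions, ratings, and flagging rule below are hypothetical, just to show the shape of the exercise:

```python
# Minimal sketch of a key assumptions check tracker
# (hypothetical assumptions and ratings, for illustration only).

assumptions = [
    # (assumption, criticality, confidence, has_supporting_evidence)
    ("Tool overlap means the same threat actor", "high", "low", False),
    ("APT29 does not share tooling with other groups", "high", "medium", False),
    ("Our tool identification is not a false positive", "medium", "high", True),
]

# Flag the dangerous combination: an assumption the analysis leans
# on heavily, held with low confidence or without evidence.
flags = [
    text
    for text, criticality, confidence, has_evidence in assumptions
    if criticality == "high" and (confidence == "low" or not has_evidence)
]

for text in flags:
    print(f"RE-TEST: {text}")
```

The value isn't in the tooling; it's that every assumption is forced to carry an explicit criticality and confidence, so the weakly supported ones surface before they drive the analysis.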
So we're establishing our requirements: what intel should we collect? For this, we can use the key assumptions check. We check the assumptions driving our priorities and test them: is this requirement or assumption still valid? Is it relevant to our goals or our company? This helps prevent misdirection in collection efforts. The next step in the intelligence cycle is collection. In collection we're gathering a lot of things, but to help with time and focus, we can use the indicators and warnings SAT to help
with resource focus. Instead of collecting everything, this SAT directs you to monitor certain channels that are more relevant to your company or the scenarios you're planning for. The third step is processing and exploitation. In this step, we all know what red teaming is, right? Red teaming challenges the initial findings we get in our collection stage. So instead of accepting, hey, this looks like C2 traffic, we ask: what if this is legitimate encrypted traffic? The fourth step of the cycle is analysis and production. This step is pretty much the beef of it. Let's say you're using
ACH, the analysis of competing hypotheses. This SAT helps evaluate multiple explanations and prevents attribution errors from anchoring on the first hypothesis that came to mind. The fifth and last step is dissemination. We all know about the threat/vulnerability/risk matrix when we're delivering intelligence reports to the relevant stakeholders. Here we add the high-impact/low-probability SAT. This helps communicate the uncertainty range, because in intel you can't just say, hey, this is going to happen; you have to use estimative language
to convey likelihood. An example here: this ransomware scenario is most likely, and this is the impact it would have. All right, so we all know and love the MITRE ATT&CK framework, utilizing it to map threat actor TTPs over time. The way we can pair it with SATs is by throwing in ACH. We all struggle with attribution; it's always going to be a challenge. Threat actors love to overlap, they love to use identical TTPs, and just because techniques overlap doesn't always mean it's the same threat actor. So in this
case, we can use ACH to build out attribution matrices. We can lay out competing threat actor hypotheses, score them using both the technique base and what you're seeing in your intrusions as evidence, and then reach a stronger conclusion about why you might be excluding certain threat actors or including them going forward. For a third one, we also have the diamond model, which we can pair with various SATs. Alternative futures analysis can help look at the evolution of TTPs over time. We can apply indicators and warnings to really hone in on the infrastructure and victim elements of the model, and look at shifts in
targeting patterns or tooling changes. And lastly, I love ACH because you can throw it on everything, but again, it comes back to attribution and strengthening your analytical conclusions. So, jumping into the last part of the presentation, we're going to walk through three different scenarios, apply the SATs, and see how the SOC can operate with them. We're going to go through an insider threat, a geopolitical cyber response, and some cybercriminal activity. All right, starting off with our first scenario: DPRK IT worker threat, or legitimate user? If you're not familiar with the DPRK IT worker threat, I highly suggest going to
research that, because it's really interesting, fascinating, and probably in all of our environments. So, scenario overview: we have a machine with 24/7 activity using stay-awake software, and there's evidence of remote management and monitoring tools. There are periods of high activity, and multiple collaboration tools are in use. We've also found information that laptops are being shipped to similar addresses under various names. And then we have, of course, our suspected DPRK IT worker insider threat investigation launched. So what we'll do is perform a key assumptions check to determine whether this is an actual risk to our environment, and then use indicators and warnings to help broaden
our threat hunt and our intel assessments. Right now we're asking: what assumptions do we have, and what indicators can we pivot on? Diving into the key assumptions check, our first assumption is that our background check process is sufficient. In reality, DPRK IT workers are bypassing background check processes by using stolen US identities, and we see reports of this across multiple companies. The impact to us is high: the fundamental hiring process is failing; they're able to bypass it. The second assumption is that our geographic verification process will prevent foreign actors from accessing our
environment. In reality, DPRK is utilizing laptop farms and proxy operations, so they're able to get into our environment and look like they're US-based. We're seeing this through the FBI seizures and operations taking down these laptop farms. And this has critical impact to us: not only might we be in violation of sanctions by hiring these workers, we also have that insider threat and data theft. Our third assumption is that remote work patterns indicate legitimacy: hey, this person's been working from home for a while, they're probably legit. In reality, these actors are maintaining persistence in environments; some are employed for almost up to half a year.
CrowdStrike is tracking this persistence across multiple environments, and so this also has a high impact to us: long-term compromise risk. Lastly, AI. We know threat actors are using it, but maybe we assume it hasn't significantly changed this threat. In this scheme, for example, the actors really lean on AI: they use AI-enhanced photos and use AI in their coding and communication practices, flying under the radar while delivering a lot of output. And of course, this has high impact to us: our traditional verification methods and ways of monitoring employees are no longer working. So the result is that the IT workers from DPRK have
evolved; they still present a sophisticated threat to our environment. Moving into indicators and warnings, we can now apply this to help our threat hunt and intel teams. We can look at tactical indicators like requests from new hires to change their shipping address or payment location. We can look at activity during DPRK working hours. We can also look for high periods of activity followed by low periods, simulating multiple people coming onto the same machine to get stuff done. We can look at technical variables and known IOCs, like alerting on Astrill VPN usage or reuse of the same voice-over-IP or email
addresses. And in our hiring practices, we can make sure we're looking for those over-enhanced photos that might be AI-generated. Moving to the intel assessments, we can look at this from an operational perspective: how are they targeting different types of companies? Is there a surge of applications targeting crypto or AI companies? Is the infrastructure changing, with new KVM devices being deployed? How is the payment routing changing? How are shell companies coming into play? And how are the social engineering tactics evolving over time? From an intel perspective, we also want to take a step back and look at it from a
strategic lens. What policy changes and sanctions are in place that are driving the DPRK to take this approach and move forward with this operation? What about technical evolution: as AI grows, how will these actors continue using it to disguise their identities and move through our processes? What about scale expansion: globally, will they go to other locations, other companies, target our foreign partners? And integration: how are they coordinating with other DPRK operations? Now that we have an idea of all these indicators, we can look at the warnings, set thresholds around them, and
inform our threat hunt teams what to look for from a behavior perspective. We can look pre-hire at those tactical observations: the email addresses, the laptop shipping, the general behavior. We can look post-hire for the VPNs, remote management tools, and KVMs, and then build detections utilizing hiring data from Workday. We can also put detections out through EDR and have these strategic indicators to monitor from an intel perspective to inform those hunts. >> Awesome. So for the next scenario, we're going to talk about the Iran cyber response. As we all know, there's a current conflict between Iran and Israel, and we thought this might be
a good example. In this scenario, cyber attacks are happening in both countries: nuclear facility strikes, bank data destruction, missile attacks on US bases, and US counterattacks. As you can see, there's a timeline on the screen. On June 10, there were cyber attacks against Iran. On the 13th, IDF strikes hit nuclear facilities, and following that, Predatory Sparrow destroyed Iranian bank data. Iran fired missiles back at the US military base in Qatar, and then we hit their nuclear sites. In this scenario, let's say we're running a tabletop exercise or something like
that, and leadership is asking a question: what's the likelihood that Iran is going to hit US critical infrastructure in, say, the next six months? For this we picked the alternative futures analysis SAT. The idea is to develop a set of scenarios for the question we've been asked. For this example, we're developing four scenarios for the Iranian cyber response against US critical infrastructure over the next six months. So instead of dealing with massive uncertainty by trying to predict the one outcome, we're trying to map all the scenarios of how this situation could unfold. We have four scenarios, as you
can see on the screen, and we're using this SAT to estimate the likelihood of each scenario happening over the next six months. The first scenario, let's say the most likely at 35%, is that the conflict stays contained within Israel and Iran. Hacktivists slow down over time, US critical infrastructure stays out of it, and not much happens. As indicators, we'd see reduced attack frequency or US diplomatic engagement. The implication here is a return to baseline cyber threat levels. The second scenario we develop is: what if the cyber war escalates, and the Iranians start hitting the US power grid or the water system
directly? This is a pretty bad scenario; it's definitely going to trigger national emergency protocols and a military cyber response. Based on our indicators, we'll give it a 25% probability. We're looking at things like messaging, infrastructure reconnaissance, and capability, and the implication, like I said, is that the US military cyber response gets involved. The third scenario (these are in no particular order; they're just scenarios you develop through brainstorming): let's say we have proxy campaigns, at say 30%. Iran plays the long game through proxies, keeping hacktivist groups active through
certain campaigns or website defacements. Their goal is to keep enough distance to avoid direct retaliation. For indicators, we can look at Telegram, recruitment, or their funding. The implication here is long-term defensive resource allocation: how long would Iran have funds to support all these groups, and when will they run out? And finally, we also have to include something that's out there and probably not going to happen, but we should add it anyway. Let's say we take the negotiation route: diplomatic negotiations happen, and Iran is just not
giving up, but they're limiting their cyber operations. Everybody steps back through back-channel negotiations, everything goes quiet, and the whole thing is kind of over. This is possible, but historically things have not de-escalated that cleanly. So the goal here: instead of betting on one prediction, leadership can prepare for multiple realities and watch for early indicators of which direction we might actually be heading. All right, our last scenario is the cybercriminal attribution case. We have a ransomware intrusion at a luxury hotel chain. It started through a phishing attack targeting help desk staff. One employee was
specifically victimized: they had their credentials reset through MFA fatigue, which enabled access for our threat actors. Through investigation of our Okta logs, we're seeing admin role abuse. We're also seeing lateral movement to systems containing financial and legal data. The files are then encrypted with a custom Black Basta variant, and a $15 million ransom is demanded. We have a ransom note written in English threatening to leak executive misconduct as well, and attackers are using live-chat extortion, citing past ransomware hits to build credibility. So, of course, leadership asks, "Is this Scattered Spider?" While this might be leadership's anchoring bias showing, since Scattered Spider is highly in the
news right now, we're not sure we want to make that conclusion. So we're going to utilize the analysis of competing hypotheses to determine attribution. What we're going to do is develop four different hypotheses: that this was done by Scattered Spider; that this was done by a Black Basta-affiliated group; that this was done by a new threat group we don't know yet; and then we'll throw in a null hypothesis, that we don't have enough evidence to determine attribution. We're going to take the pieces of evidence from our investigation and put them down the table here, with our hypotheses across the top. We have
the help desk social engineering. We have the abuse of Okta and MFA. We'll highlight that custom Black Basta ransomware variant. Something we uncovered in the investigation is C2 infrastructure overlapping with Black Basta IPs, so that's definitely something we want to call out, but we also want to call out the TTP similarity with Scattered Spider, not ignoring the previous attacks in the news. We'll highlight that the target sector was hospitality, and call out the extortion aspect, with the data leak threat and the personal misconduct. So, we're going to score our hypotheses: putting a plus sign for anything that supports them,
a double plus sign if it strongly supports it, zeros if it's neutral, and minus signs if it's inconsistent. So, while we see a lot of consistency with the Scattered Spider hypothesis, what we can hone in on is that hypothesis 2 is the strongest one, because of that Black Basta variant. We can tell our leadership team that there is some overlap between these groups, but because we went through the ACH process, we can assess that this is a Black Basta affiliate with moderate to high confidence. So, with all of these SATs, we can come back to our intrusion and our team of analysts here. Our intel analysts can truly benefit from a key assumptions check, making sure they're not making rushed judgments based on APT reports they previously read. They can also benefit from an ACH to determine attribution. Then we can look at our DFIR analysts: they could also benefit from a key assumptions check, maybe throwing in some alternative futures analysis depending on how different attack scenarios could play out, plus red team analysis. I know we didn't dive too deep into that, but I think we're all familiar with blue team versus red team analysis; it's pretty much the same practice. And for our threat hunters, let's do some structured
brainstorming, but then utilize indicators and warnings to drive those behavior-based hunts. So, SATs: there are a ton of them, and sometimes they take a lot of time, but throwing one in here and there can actually save time, especially when you're dealing with a lot of pressure and uncertainty and rushed decisions. I've used them in the past when we've had different hypotheses about what could be happening, or about how different pieces of evidence should be weighed, and it's been really beneficial in the SOC. These aren't going to eliminate that uncertainty, and they aren't going to eliminate our time pressures, but they are going to help us make stronger conclusions. So, the next time you're facing that 2 a.m. call or an under-pressure decision, consider using one of these and help improve your defenses overall. Thank you all so much for joining our talk. I know we ended a little bit early. Appreciate you being here. [applause] >> We have time for a couple questions.
>> Fantastic talk, thank you so much. Regarding your Israel-Iran scenario, specifically the alternative futures analysis slide, you had some percentages of likelihood. Can you walk through how you determine and calculate those likelihood percentages? >> So you can mix the alternative futures analysis however you want, but we just took 100% to divide up. For this example we used four scenarios, but you could develop twenty of them and weigh which ones you think are most probable. That's why we had 35, 25, 30, and whatever the rest was. So pretty much you're trying to find evidence that supports each alternative scenario. No matter what type of evidence, you put it in there, and once you get everything listed out, you can adjust those scenarios. These scenarios are usually good for leadership, because they're not going to ask you for twenty scenarios; they want to see the top three that are most likely to happen. And you want to make sure you're covering every single angle. Say this is a geopolitical conflict: you want to look at how other geopolitical conflicts escalated before, or what happened, so you look at
historical context. That's why I added the negotiation scenario: maybe we actually come to a diplomatic negotiation. Not likely, but it has happened. You put a small scenario out there just to cover yourself, so you can say you covered every single scenario that might happen, and you list out the defenses you're supposed to have. For the scenarios that are more likely to happen, you can spend more time on them. >> And then, yeah, just pulling from old evidence. Iran is pretty easy to predict, especially with how they respond with cyber and their tactics. Looking at the actions that drove this, in this case it was a military action, so the idea that they would respond in cyber against US critical infrastructure is likely. The idea that they would expand into private sector businesses is unlikely, because the action taken by the US didn't translate to economic or business operations. So yeah, thank you for the question. Any other questions?
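The percentage approach described in that answer can be illustrated in a few lines: assign rough relative weights to each scenario, then normalize them so they sum to 100%. The scenario names and the exact split here are illustrative assumptions (the talk mentioned 35, 25, 30, and a remainder, without tying them to specific scenarios).

```python
# Sketch: normalizing rough plausibility weights for alternative-futures
# scenarios into slide-ready percentages. Names and weights are illustrative.

scenarios = {
    "Cyber retaliation against US critical infrastructure": 35,
    "Status quo / limited cyber operations": 30,
    "Back-channel diplomatic de-escalation": 25,
    "Expansion into private-sector businesses": 10,
}

total = sum(scenarios.values())
likelihoods = {name: round(100 * weight / total)
               for name, weight in scenarios.items()}

# Most likely first, since leadership wants to see the top scenarios.
for name, pct in sorted(likelihoods.items(), key=lambda kv: -kv[1]):
    print(f"{pct:>3}%  {name}")
```

The low-probability entries stay on the list for exactly the reason given in the answer: they cover the angles, even if most of the analytic effort goes into the top scenarios.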
>> Thank you so much for the talk. The one thing I was thinking about the whole time is that it's really fascinating how you can use this to adjust and validate, but how much time does it take to compile something like this during a real-time event, for it to still be relevant? >> That's a good question. If you ask formal intel analysts, they'd say days to months. We've also done them in coursework, and I've done them in investigations tailored down to an hour or two. One of my co-workers is in the audience, and I know one time we had an indicator that we were unsure of: was this malicious or not? So we threw an ACH on it, and in two hours we were able to bring that to leadership and say, this is why we're excluding it from our analysis. That's been really helpful. I mean, indicators and warnings could be its own intel assessment taking however long it needs; it could be its own threat hunt. But putting a few hours together with the right team members and executing it will give you the evidence to say, this is why we did this, instead of that gut-based call. >> And who do you have doing this analysis, in terms of: is it composed of multiple different groups, including the people actually executing it, or is it a separate team? >> It's usually the analysts involved in the actual investigation, so multiple IR analysts, whoever is working on the analysis itself. If you have your team of intel analysts, it's obviously challenging when you might only have one person working on that, so then maybe you pull in people from that same team, or in a different case, like a key assumptions check, you pull in people from various teams in the SOC to give that different perspective and diversify the answers. Yeah, thank you. I think we have one more question.
>> What kind of tools have you found effective at communicating these analyses? You mentioned leadership as a primary consumer of them. Do you package these up as slideware? What's been effective for you there? >> In my case, I'm lucky that my leadership has a background in intel, so they're familiar with these types of techniques, so it's just a matter of walking through it. I think the ACH is a great example of that output: having the matrix there, so you can really show your evidence and then show your arguments with it. There are other outputs too, like a short write-up of the indicators and warnings, for example, maybe turning that into an intel product that you can pass to your threat hunt teams with the actual IOCs and things they need to go hunt on, while providing that strategic lens too. All right. Well, thank you everyone. Have a great rest of your day. [applause]