
Good morning, everybody. Welcome back to BSides Las Vegas, day two, ground floor. This talk is titled "Thinking Outside the SOC: Structured Analytical Techniques for the Overloaded Cyber Analyst," and your speakers are Alina Tai and Haley Beam. A few announcements before we begin: we'd like to thank our sponsors, especially our diamond sponsors Adobe and Aikido Security and our gold sponsors Formal and Prophet. It's their support, along with our other sponsors, donors, and volunteers, that makes this event possible. These talks are being streamed live, and as a courtesy to our speakers and audience, we ask that you check that your cell phones are set to silent. If you have a question, you'll be using the audience microphone that I'm holding right here in my hand; I'll bring it around so that YouTube can also hear you. As a reminder, the BSides LV photo policy prohibits taking pictures without explicit permission. These talks are all being recorded and will be available on YouTube in the future. With that, let's get started. Please welcome your speakers. [Applause]
>> All right, good morning everyone. Thank you for being here. My name is Haley Beam, and this is Alina Tai. We're both from Washington, DC, and we work in incident response and cyber threat intelligence. We met in grad school, where we studied intelligence studies. So today we're excited to share with you one of our favorite techniques that we learned
together: structured analytical techniques, and how we can apply them in the SOC. So to start off, any SOC analysts in the audience? Cool, we have a couple. Today we're going to cover what structured analytical techniques, also known as SATs, are (we'll probably abbreviate that), discuss how we can use them to make better decisions, see how we can apply them to current CTI frameworks, and then walk through some real-world scenarios.
So to get started, let's put ourselves in the situation: it's 2 a.m., the SOC phone is ringing, and we have an alert. There's PowerShell executing a suspicious DLL, and broader investigation reveals credential dumping, lateral movement, and possible data exfiltration to an external IP. It's time to wake up the rest of the team, broaden the investigation, and everyone's gathering for a kickoff incident call. So we have our intrusion overview, we have our team of analysts, and we also have leadership on the phone asking: is the threat contained, what gaps exist in our alerting, what was the timeline of the attack, and who was behind it? We have our threat intel analyst, our DFIR analyst, and our threat hunter, all with their own analytical burdens. They're all acting under pressure and with limited amounts of data. So the intel
analyst has to deliver fast attribution, the DFIR analyst is overwhelmed by incomplete artifacts while trying to piece together a timeline, and our threat hunter is unclear which way to go, where to look for more data, and how to find the gaps in the alerts.
>> So I want to talk about cognitive biases. Does anyone know what cognitive biases are? Okay, I see a few hands. Cognitive biases are mental shortcuts and systematic errors in our thinking that may affect our decisions or judgment. There are around 151 documented cognitive biases, but we're just going to list a few right now; there are about nine on the slide. So why do we care? In cybersecurity, the issue is that these same shortcuts can lead us to the wrong conclusions when accuracy and objectivity matter and we're in a rush. Biases are dangerous because they are unconscious and automatic, so they can affect our analytical work. Let me go over a few examples. Confirmation bias: when we're doing analysis, we may seek evidence that supports something we already believe. Anchoring bias: we over-rely on the first piece of information we found and keep going back to it. Optimism bias is overestimating the
positive outcomes. And groupthink: when a bunch of people with the same background have done the same work for years, we tend to form the same type of thinking. We end up eliminating scenarios we shouldn't, just because we're all wired to think one way.
So let's go back to our three analysts at the 2 a.m. incident. Say our CTI analyst gets an early report with some IOCs and looks at it as, "Oh, this overlaps with APT29." Boom: anchoring bias. They're building their entire attribution assessment on that assumption, unconsciously dismissing evidence that may not fit the APT29 narrative. So maybe this intel analyst is exhibiting anchoring bias and confirmation bias, with a little halo effect thrown in: "It looks like APT29 on this side, so it must be APT29 everywhere." Our threat hunter is also dealing with biases. Say they're hunting for techniques they saw in last month's campaign; they may be overlooking new patterns, or focusing on some high-value system and potentially missing a broader network compromise. So some of the biases the threat hunter
might be dealing with in this scenario are the availability heuristic, groupthink, or maybe the identifiable victim bias. What about our DFIR analyst? They're also dealing with biases; I'll pick another example. Say they're dealing with optimism bias and illusory correlation. When the C2 traffic goes quiet, maybe they assume the threat is contained; when they don't find the browser artifacts, they assume no exfiltration has occurred. They see patterns where none exist, and maybe they declare the incident resolved prematurely. With biases, everybody is dealing with this; nobody is excluded. We're not talking about incompetent analysts; we're talking about smart, experienced professionals with years of experience. But because we have experience doing all this analytical work, our brains are wired to think fast, and we have other tasks to do. Going back to basics: we are all human, and we all have biases. That's why we're talking about them today: once we understand these biases and our thinking patterns, we can build processes to counteract them.
All right, so this is where we bring in structured analytical techniques. These are systematic, repeatable methods used to reduce those cognitive biases and
improve our analytic rigor in intelligence analysis. They challenge our assumptions, make sure we have evidence, force critical thinking, and explore alternative outcomes. These come from the formal practice of intelligence analysis. They go way back to the grandfather of intelligence analysis, Sherman Kent, who called for structured reasoning and objectivity in intelligence analysis: ways to reduce bias and provide clarity around estimative language. Later on, folks like Richards Heuer and others formalized these in the early 2000s into the techniques we'll walk through today. Attention to them was really heightened by intelligence failures like 9/11 and the weapons of mass destruction assessments in Iraq, where cognitive biases like groupthink were at play. These structured analytical techniques were put in place to help counteract that.
So there's a ton of techniques; we could spend the next two days workshopping them. But today we're just going to walk through four: the key assumptions check, indicators and warnings, alternative futures analysis, and my favorite, analysis of competing hypotheses.
Okay, so we'll start with the easiest one out there: the key assumptions check. Pretty much, you have an assumption and you check it. When you're building threat intelligence analysis, how much time do you spend examining what you're assuming versus
what you actually know? Probably, like most analysts, not much; we're just looking for a quick solution. The solution here is to use the key assumptions check to verify our assumptions. Say we're assuming our adversary behaves rationally, assuming our tools accurately identify threats, or assuming infrastructure overlaps indicate the same threat actor. The goal is to identify and challenge the underlying beliefs that influence our analysis or decisions. This involves explicitly stating assumptions, then questioning their validity and impact. The key assumptions check mainly helps with confirmation bias, but also other cognitive biases.
The process is pretty straightforward; there are only three steps. First, you start with your assumptions: you put out a list of them. What must be true for your analysis to be correct? You're assuming your adversary has certain capabilities or motivations, or likes doing things a certain way; those are assumptions. Next, you examine each assumption. How confident are you about it? Do you have evidence to support it? How critical is it to your analysis; without this assumption, is your analysis still valid? Lastly, you test your assumptions. What evidence could invalidate them? Are there
other, similar assumptions out there? Maybe you can bring somebody else in. So this is only a three-step process, and once you do it often enough, it becomes second nature. Let's go over a simple example. In the first step we list our assumptions; then we examine them, seek evidence to support them, and test them. Say we have four assumptions. The first: the tool overlap means this is the same threat actor. The second: APT29 doesn't share the same tools with other groups. Another: our tool identification is accurate and not a false positive. You take each of these assumptions, look at how critical it is to your overall analysis, and then assign your confidence: low, medium, or high, that's the bar. Based on all of this, you test each assumption and see if your confidence is actually as accurate as you first predicted.
The next one is indicators and warnings. As the name says, it's pretty straightforward. In traditional intelligence, indicators and warnings was the process of detecting and reporting time-sensitive information about foreign developments
that could signal hostile actions or intentions. Pretty much, it provides advance notice of potential threats and suggests a potential enemy capability or intention. Indicators and warnings help a lot with risk assessment, and they should not be used for definitive predictions. So how do we use this in cyber? We already collect and analyze information from a broad set of sources, and to develop our indicators, everybody knows about IOCs and IOAs. These serve as signals, offering insight into potential or active cyber threats that may be targeting, say, a certain industry, and we can build detections from them. The goal of this SAT is to facilitate the prediction, early detection, and warning of cyber incidents, tailored to your environment or scenario. An example: you're researching the dark web for threats. You don't want to look everywhere; you have certain goals. You want to look for threats relevant to your company, the assets you care about, or certain vulnerabilities. The goal, for me, is to waste less of my time; I have other stuff to do.
The process, again, is only four steps. The first is defining the event: what type of threat scenario are you monitoring for? The second one is
identifying the indicators. There are three kinds: direct, indirect, and environmental. Direct indicators are activities directly related to the threat, indirect indicators are supporting activities or preconditions, and environmental indicators are conditions that may trigger the threat. The third step is establishing detection methods: where would you observe these indicators, and what data sources or collection methods are needed? The last step is setting warning thresholds: what combination of indicator triggers raises concern, and how would you escalate based on indicator severity?
All right, so we'll also talk through alternative futures analysis. If you like overthinking what could happen next, this is the technique for you.
There are also four steps to this: a systematic method to explore plausible scenarios that could happen in the future. Step one is defining the focus: what question are we trying to answer, what is our uncertainty, and what is the time frame? Step two is identifying key drivers: what critical variables could shape the outcome, and what factors have high impact but uncertain direction? In step three, we generate the scenarios, coming up with three to four plausible scenarios and making sure they're mutually exclusive but collectively exhaustive. Each scenario should also be internally consistent and as detailed as possible. In step four, we analyze the implications: we determine the consequences of each scenario and ask how our response changes in each one. This isn't going to tell us which scenario will play out, but it tells us that if these three things happen, this might be the scenario playing out, and since we've thought it through, we know how to react.
All right, the last one is my favorite: the analysis of competing hypotheses. Maybe going back to our intrusion, we say, "Okay, this is PowerShell executing a suspicious DLL." Are we calling that suspicious because of our
anchoring bias, or could it be legitimate sysadmin activity or legacy software? We can put together all these hypotheses and then pull together evidence to see which one is most or least consistent. First, we generate all of our hypotheses, making sure we have some obvious ones and some unlikely ones too, with even a null hypothesis thrown in just for comparison. Then we collect evidence, gathering everything that relates to the scenario. We put that on a matrix and score each cell from strongly supports to strongly inconsistent, with a neutral option too. Then you can start to see which of your hypotheses are least consistent and which pieces of evidence are the most valuable. This is also really important because it exposes where you have evidence gaps, which can help target where your investigation should focus next.
Okay, part two: we're going to walk through how we can integrate these with current CTI frameworks: the intel cycle, MITRE ATT&CK, and the diamond model.
>> Awesome. So let's talk about my favorite framework now: the intelligence cycle. Let's look at how the SATs enhance each phase of the intelligence cycle. For those who are not in intel, I'm just going to
give a brief definition of what the intel cycle is. It's a structured process used to transform raw data into actionable intelligence. It involves a series of interconnected steps, and it's not a one-time process but a repeating loop where each phase informs the next. It usually has six steps; as you can see on the screen, there are only five here. The last step is feedback, but for the sake of this presentation we're sticking to five.
The first step is planning and direction. In this stage we're prioritizing intelligence requirements: establishing what intel we should collect. Here we can use the key assumptions check: checking the assumptions that drive our priorities, and testing them to see whether each requirement or assumption is still valid and relevant to our goals or our company. This helps prevent misdirection in the collection effort. The next step in the intelligence cycle is collection. In collection we're gathering a lot of things, but to help with time and focus, we can use the indicators and warnings SAT to focus our resources. Instead of collecting everything, this SAT directs you to
monitor certain channels that may be more relevant to your company or the scenarios you're planning for. The third step is processing and exploitation. We all know red teaming, right? Red teaming challenges the initial findings we get in our collection stage. So instead of accepting "this looks like C2 traffic," we ask: what if this is legitimate encrypted traffic? The fourth step of the cycle is analysis and production; this step is the beef of it. Say you're using ACH, the analysis of competing hypotheses. This SAT helps with evaluating multiple explanations and prevents attribution errors from anchoring on the first hypothesis that came to mind. The last, fifth step is dissemination. We all know the threat/vulnerability/risk matrix for delivering intelligence reports to the relevant stakeholders. Here we can add the probability and impact SAT (sometimes framed as low-probability/high-impact analysis). This helps communicate the uncertainty range, because in intel you can't just say "this is going to happen"; you have to give stakeholders estimative language. An example: I'd say this ransomware is most likely
but most likely in a certain scenario, and this is the impact it would have in that scenario.
All right. So we all know and love the MITRE ATT&CK framework, utilizing it to map threat actor TTPs over time. The way we can pair it with SATs is by throwing in ACH. We all struggle with attribution; it's always going to be a challenge. Threat actors love to overlap, and they love to use identical TTPs, but just because techniques overlap doesn't always mean it's the same threat actor. In this case we can build out attribution matrices with competing threat-actor hypotheses, and use ATT&CK to score them: both technique-based scoring and what you're seeing in your intrusions as evidence. Then you have a stronger conclusion about why you're excluding certain threat actors, or why you're including them going forward. For the third one, we also have the diamond model, which we can pair with various SATs. Alternative futures analysis can help look at the evolution of TTPs over time; we can throw indicators and warnings on there to hone in on the infrastructure and victim elements of the model, looking at shifts in targeting patterns or tooling changes. And lastly, I love ACH because you can throw it on everything, but again, it
comes back to attribution and strengthening your analytical conclusions.
So, jumping into the last part of the presentation, we're going to walk through three different scenarios, apply the SATs, and show how the SOC can operate with them. We'll go through an insider threat, a geopolitical cyber response, and some cybercriminal activity.
All right, starting off with our first scenario: DPRK IT worker threat, or legitimate user? If you're not familiar with the DPRK IT worker threat, I highly suggest researching it, because it's really interesting, fascinating, and probably in all of our environments. Scenario overview: we have a machine with 24/7 activity utilizing keep-awake software, and there's evidence of remote management and monitoring tools. There are periods of high activity, and multiple collaboration tools are in use. We've also found information that laptops are being shipped to similar addresses under various names. And then, of course, our suspected DPRK IT worker insider threat investigation is launched. So what we'll do is perform a key assumptions check to determine whether this is an actual risk to our environment, and then use indicators and warnings to broaden our threat hunt and intel assessments. Right now we're asking: what assumptions do we have, and what indicators can we
pivot on?
Diving into the key assumptions check, our first assumption is that our background check process is sufficient. In reality, DPRK IT workers are bypassing background checks by using stolen US identities, and we see evidence of this in reports across multiple companies. The impact to us is high: the fundamental hiring process is failing; they're able to bypass it. The second assumption is that our geographic verification process will prevent foreign actors from accessing our environment. In reality, DPRK is utilizing laptop farms and proxy operations, so they're able to get into our environment and look like they're US-based. We're seeing this through the FBI seizures and operations taking down these laptop farms. This has critical impact to us: not only might we be in violation of sanctions by hiring these workers, we also have the insider threat and data theft. Our third assumption is that remote work patterns indicate legitimacy: hey, this person's been working from home for a while, so they're probably legit. In reality, these actors maintain persistence in environments; some are employed for up to half a year, and CrowdStrike is tracking this persistence across multiple environments. So this also has high impact: long-term compromise risk. Lastly,
AI: we know threat actors are using it, but maybe we assume it hasn't significantly changed this threat. In this scheme, the actors have really honed in on AI: they use AI-enhanced photos and apply AI in their coding and communication practices, flying under the radar while delivering a lot of output. And of course, this has high impact to us: our traditional verification methods and the ways we monitor employees are no longer working. The result is that the IT workers from DPRK have evolved; they're sophisticated, and they still present a sophisticated threat to our environment.
Moving into indicators and warnings, we can now apply this to help our threat hunting and intel teams. We can look at tactical indicators like new hires requesting to change their shipping address or payment location. We can look for activity during DPRK working hours, or for high periods of activity followed by low periods, simulating multiple people rotating onto the same machine to get work done. We can look at tactical variables and known IOCs, like alerting on Astrill VPN usage or reuse of the same VoIP numbers or email addresses. And in our hiring practices, we can make sure we're looking for those over-enhanced photos that might be AI-generated.
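The indicators-and-warnings process described here (direct, indirect, and environmental indicators combined into warning thresholds) can be sketched as a tiny scoring routine. This is a hypothetical illustration only: the indicator names, weights, and escalation thresholds are assumptions made up for the example, not values from any real detection product or advisory.

```python
# Hypothetical indicators-and-warnings scoring for the DPRK IT worker
# scenario. Names, weights, and thresholds are illustrative assumptions.

DIRECT = {          # activities directly related to the threat
    "shipping_address_change": 3,
    "astrill_vpn_usage": 3,
    "kvm_device_detected": 3,
}
INDIRECT = {        # supporting activities or preconditions
    "activity_during_dprk_hours": 2,
    "burst_then_idle_activity": 2,
}
ENVIRONMENTAL = {   # surrounding conditions that may accompany the threat
    "reused_voip_number": 1,
    "ai_enhanced_profile_photo": 1,
}

WEIGHTS = {**DIRECT, **INDIRECT, **ENVIRONMENTAL}

def warning_level(observed: set[str]) -> str:
    """Sum the weights of observed indicators and map to an escalation tier."""
    score = sum(WEIGHTS.get(name, 0) for name in observed)
    if score >= 6:
        return "escalate"   # open an insider-threat investigation
    if score >= 3:
        return "review"     # queue for analyst review
    return "monitor"

# One direct plus two indirect indicators: 3 + 2 + 2 = 7 -> "escalate"
print(warning_level({"astrill_vpn_usage",
                     "activity_during_dprk_hours",
                     "burst_then_idle_activity"}))
```

The point of the sketch is the structure, not the numbers: a single indicator only raises the monitoring posture, while a combination of indicators crosses the warning threshold the SAT asks you to define up front.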
Moving to the intel assessments, we can look at this from an operational perspective: how are they targeting different types of companies? Is there a surge of applications targeting crypto or AI companies? Is the infrastructure changing, with new KVM devices being deployed? How is payment routing changing? How are shell companies coming into play? And how are the social engineering tactics evolving over time? From an intel perspective, we also want to take a step back and look through a strategic lens: what policy changes and sanctions are in place that are driving the DPRK to take this approach and move forward with this operation? What about technical evolution: as AI grows, how will these actors continue using it to disguise their identities and move through our processes? What about scale expansion: globally, will they go to other locations and companies, or target our foreign partners? And integration: how are they coordinating with other DPRK operations? Now that we have an idea of all these indicators, we can set warning thresholds around them and inform our threat hunt teams what to look for from a behavioral perspective. Pre-hire, we can look at the tactical
observations: the email, the laptop shipping, the general behavior. Post-hire, we can look for that VPN usage, remote management tools, and KVMs, and we can build detections utilizing hiring data from Workday. We can also put detections out through EDR, and then have the strategic indicators monitored from an intel perspective to inform those hunts.
>> Awesome. So for the next scenario, we're going to talk about the Iran cyber response. As we all know, there's a current conflict between Iran and Israel, and we thought this might be a good example. In this scenario, cyber attacks are happening in both countries: nuclear facility strikes, bank data destruction, missile
attacks on US bases, and US counterattacks. As you can see, there's a timeline on the screen. On June 10, there were APT cyber attacks against Iran. On the 13th came IDF strikes on nuclear facilities, and following that, Predatory Sparrow destroyed Iranian bank data. Iran fired missiles back at the US military base in Qatar, and then the US hit Iranian nuclear sites. In this scenario, say we're doing a tabletop exercise or something like that, and leadership asks a question: what's the likelihood that Iran hits US critical infrastructure in, say, the next six
months? For this we picked the alternative futures analysis SAT, which develops a set of scenarios for the question we've been asked. For this example, we're developing four scenarios for the Iranian cyber response against US critical infrastructure over the next six months. Instead of dealing with massive uncertainty by trying to predict one outcome, we're mapping all the scenarios of how this situation could unfold. We have four scenarios, as you can see on the screen, and we use this SAT to estimate the likelihood of each one happening in the next six months. The first scenario
is, let's say, the most likely one at 35%: the conflict stays within Israel and Iran. Hacktivist activity slows down over time, US critical infrastructure stays out of it, and there's not much happening. As indicators, we'd see reduced attack frequency or US diplomatic engagement, and the implication is a return to baseline cyber threat levels. The second scenario we might develop: what if the cyber war escalates, and the Iranians start hitting the US power grid or the water systems directly? This is a pretty bad scenario; it would definitely trigger national emergency protocols and a military
cyber response. Based on our indicators, we're giving it 25% probability, looking at things like Telegram messaging, infrastructure reconnaissance, and capability; the implication, like I said, is that US military cyber gets involved. The third scenario (these are in no particular order; they're just scenarios you develop through brainstorming): let's say, at 30%, Iran plays the long game through proxies. They keep hacktivist groups active through campaigns or website defacements, with the goal of keeping enough distance to avoid direct retaliation. For indicators, we
can again look at Telegram, recruitment, or their funding. The implication is long-term defensive resource allocation: how long would Iran have funds for all these other groups, and when would that funding run out? Lastly, we also have to include something that's out there but probably not going to happen: the negotiation route. Everything moves to diplomatic negotiations; Iran doesn't give up, but limits its cyber operations, and everybody steps back through back-channel negotiation.
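The four-scenario matrix being built here can be captured as a small data structure, which keeps the probabilities and indicator lists reviewable and makes the "mutually exclusive but collectively exhaustive" requirement checkable. Note the assumptions: the 35/25/30 percent figures come from the scenarios as described, while the 10 percent assigned to the negotiation scenario is an assumed remainder, not a stated figure, and the scenario names are made up for the sketch.

```python
# Sketch of an alternative futures matrix for the Iranian cyber response.
# The 10% for negotiated de-escalation is an assumed remainder so the
# scenario set sums to 1; all names and indicator lists are illustrative.

scenarios = [
    {"name": "conflict_stays_contained", "p": 0.35,
     "indicators": ["reduced attack frequency", "US diplomatic engagement"]},
    {"name": "escalation_to_us_infrastructure", "p": 0.25,
     "indicators": ["Telegram messaging", "infrastructure reconnaissance"]},
    {"name": "proxy_campaigns", "p": 0.30,
     "indicators": ["hacktivist recruitment", "proxy funding flows"]},
    {"name": "negotiated_deescalation", "p": 0.10,
     "indicators": ["back-channel talks", "reduced cyber operations"]},
]

# Collectively exhaustive: the probabilities must sum to 1.
assert abs(sum(s["p"] for s in scenarios) - 1.0) < 1e-9

def ranked(scens):
    """Return scenario names ordered from most to least likely."""
    return [s["name"] for s in sorted(scens, key=lambda s: s["p"], reverse=True)]

print(ranked(scenarios)[0])  # conflict_stays_contained
```

In practice the indicator lists, not the point probabilities, do the work: the watch team monitors each scenario's indicators and re-ranks the set as observations come in.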
Everything goes quiet, and the whole thing is basically over. This is possible, but historically this has not de-escalated that cleanly. So the goal here: instead of betting on one prediction, leadership can prepare for multiple realities and watch for early indicators of which direction we're actually heading.
All right, our last scenario is the cybercriminal attribution case. We have a ransomware intrusion at a luxury hotel chain. It started through a phishing attack targeting help desk staff. One employee was specifically victimized: they had their credentials reset through MFA fatigue, which enabled access for our threat actors. Through investigation
of our Okta logs, we're seeing admin role abuse. We're also seeing lateral movement to systems that contain financial and legal data. The files are then encrypted with a custom Black Basta variant, and a $15 million ransom is demanded. We have a ransom note written in English, threatening to leak executive misconduct as well, and the attackers are using live-chat extortion, citing past ransomware hits to build credibility. So, of course, leadership asks, "Is this Scattered Spider?" While this might be leadership's anchoring bias showing, since Scattered Spider is highly in the news right now, we're not sure we want to make that conclusion. So we're going to
utilize the analysis of competing hypothesis to determine that attribution. So what we're going to do is develop four different hypothesis that this was done by scattered spider that this was done by a black basta affiliated group. This was done by a new threat group that we're not sure of yet. And then we'll throw in a null hypothesis is that we don't have enough evidence to determine attribution. So we're going to take the pieces of evidence that we saw in our investigation and put them down uh the table here and put our hypothesis across. We're going to say that we have the help desk engineering. We're going to talk about the abuse of octa and MFA.
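A matrix like the one being set up here can be sketched in code. This is an illustration only: the evidence rows and scores below are stand-ins for the slide being described (some evidence items come up a moment later in the talk), and summing weighted scores is a simplification of full ACH, which puts more weight on disqualifying inconsistencies. But it shows the mechanics of the technique.

```typescript
// Illustrative ACH matrix sketch. Evidence rows and scores are stand-ins
// for the slide, not the speakers' exact matrix.
type Score = "++" | "+" | "0" | "-";
const weight: Record<Score, number> = { "++": 2, "+": 1, "0": 0, "-": -1 };

const hypotheses = [
  "H1: Scattered Spider",
  "H2: Black Basta affiliate",
  "H3: New threat group",
  "H4: Null (insufficient evidence)",
];

// One row per piece of evidence, one score per hypothesis.
const matrix: Record<string, Score[]> = {
  "Help desk social engineering":    ["++", "+", "+", "0"],
  "Okta abuse via MFA fatigue":      ["++", "+", "+", "0"],
  "Custom Black Basta variant":      ["-", "++", "0", "0"],
  "C2 overlap with Black Basta IPs": ["-", "++", "-", "0"],
};

// Sum the weighted scores per hypothesis (a simplification of full ACH,
// which focuses on eliminating hypotheses the evidence is inconsistent with).
function scoreHypotheses(m: Record<string, Score[]>): number[] {
  const totals = hypotheses.map(() => 0);
  for (const row of Object.values(m)) {
    row.forEach((s, i) => { totals[i] += weight[s]; });
  }
  return totals;
}

const sums = scoreHypotheses(matrix);
const best = sums.indexOf(Math.max(...sums));
console.log(hypotheses[best]); // with these illustrative scores, H2 comes out strongest
```

The value of writing it down this way is the same as on the slide: the discriminating evidence (the custom variant and the C2 overlap) is what separates otherwise similar hypotheses.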
We're going to highlight that custom Black Basta ransomware variant. Something we uncovered in the investigation is C2 infrastructure overlapping with Black Basta IPs, so that's definitely something we want to call out, but we also want to call out the TTP similarity with Scattered Spider, not ignoring the previous attacks in the news. We're going to highlight that the target sector was hospitality, and then call out the extortion aspect, with the data leak threat and the personal misconduct. So, our hypotheses: we're going to score them, putting a plus sign for anything the evidence supports, a double plus if it strongly supports it, zeros if it's neutral and doesn't really
support it, and minus signs if it's inconsistent. So, while we are seeing a lot of consistency with the Scattered Spider hypothesis, what we can hone in on is that hypothesis two is the strongest one, because of that Black Basta variant. We can tell our leadership team that we do have some overlap between these groups, but because we went through this process, we're able to conclude that it is a Black Basta affiliate with moderate to high confidence. So, with all of these SATs, what we can do is come back to our intrusion and our team of analysts here. Our intel analysts can truly benefit from a key assumptions check, making sure that
they're not making rushed judgments based off of reports they previously read. They can also benefit from an ACH to determine attribution. Then we can look at our DFIR analysts: they can also benefit from a key assumptions check, maybe throwing in some alternative futures analysis, depending on how different attack scenarios could play out. Red team analysis: I know we didn't dive too deep into that, but I think we're all familiar with blue team versus red team analysis, and it's pretty much the same practice we do here. And then for our threat hunters: do some structured brainstorming, but then utilize indicators and warnings to help drive those behavior-based hunts. So, SATs:
there are a ton of them. Sometimes they take a lot of time, but just throwing one in here and there can help save time, especially when you're dealing with a lot of pressure, a lot of uncertainty, and a rush to make decisions. I know I've used them in the past when we've had different hypotheses of what could be happening, or questions about what different pieces of evidence could be used, and it's been really beneficial in the SOC. These aren't going to eliminate that uncertainty. These aren't going to eliminate our time pressures. But they are going to help us make stronger conclusions. So, the next time you're facing that 2 a.m. call or an under-pressure decision, consider
using one of these to help improve your defenses overall. Thank you all so much for joining our talk. I know we ended a little bit early. Appreciate you being here. >> We have time for a couple of questions.
>> Fantastic talk, thank you so much. Regarding your Israel-Iran scenario, specifically the alternative futures analysis slide: you had some percentages of likelihood. Can you walk through how you determine and calculate those likelihood percentages? >> So you can mix the alternative futures analysis however you want, but we're just going to take 100%. For this example we just used four scenarios, but you could develop twenty of them, and you just try to rank which one you think is the most probable. That's why we had 35, 25, 30, and whatever the rest was. So pretty
much, you're trying to find evidence that supports each alternative scenario. No matter what type of evidence, you just put it in there, and once you get everything listed out, you can adjust those scenarios. Those scenarios are usually good for leadership, because they're not going to ask you, "Hey, give me twenty scenarios." They want to see the top three scenarios that are most likely to happen. And you want to make sure that you're covering every single angle. Let's say this is a geopolitical conflict: you want to look at how other geopolitical conflicts escalated before, and what happened. So you look at
historical context too. So yeah, I just added the negotiation part: hey, maybe we actually come to a diplomatic negotiation, something like that. Not likely, but it has happened. You put a small scenario out there just to cover yourself, so you can say, hey, I actually covered every single scenario that might happen, and you list out the defenses you're supposed to have. And for the scenarios that are more likely to happen, you can spend more time on them. >> And then, yeah, just pulling back to old evidence: Iran is pretty easy to predict, especially with how they respond in cyber and their tactics. So,
looking at the actions that drove this: in this case it was a military action. So the idea that they would respond in cyber against US critical infrastructure is likely. The idea that they would expand into private sector businesses is unlikely, because the action taken by the US wasn't translated into economic or business operations. So yeah, thank you for the question. Any other questions?
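The percentage bookkeeping described in this answer can be sketched as a quick sanity check. The scenario names and the final remainder value below are assumptions for illustration (the talk gave roughly 35, 25, and 30 plus a remainder, without mapping them to specific scenarios); the point is that the likelihoods must sum to 100% and the top few scenarios get surfaced to leadership.

```typescript
// Illustrative alternative-futures bookkeeping. Scenario names and the
// remainder value are assumptions; 35/25/30 echo the answer above.
const scenarios: Record<string, number> = {
  "Retaliation against US critical infrastructure": 35,
  "Proxy-group escalation": 30,
  "Spillover into private-sector businesses": 25,
  "Quiet de-escalation through back channels": 10, // the low-probability "cover" scenario
};

// The likelihoods are shares of a single 100%, so they must add up.
const total = Object.values(scenarios).reduce((a, b) => a + b, 0);
if (total !== 100) throw new Error(`Likelihoods sum to ${total}%, not 100%`);

// Leadership usually wants the top three, most probable first.
const topThree = Object.entries(scenarios)
  .sort(([, a], [, b]) => b - a)
  .slice(0, 3)
  .map(([name]) => name);
console.log(topThree);
```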
>> Thank you so much for the talk. The one thing I was thinking about the whole time is that it's really fascinating how you can use this to adjust and validate, but how much time does it take to compile something like this during a real-time event, in order for it to be relevant? >> That's a good question. I think if you asked formal intel analysts, they would say days to months. We've also done them in coursework, and I've done them in investigations, tailored down to an hour or two. One of my co-workers is in the audience, and I know one time we had an indicator that
we were unsure of: was this malicious, was this not? And so we threw an ACH on it in two hours, and we were able to bring that to leadership to say, this is why we're excluding it from our analysis. So that's been really helpful. I mean, indicators and warnings could be their own intel assessment taking however long they need, or their own threat hunt, but I think putting a few hours together, putting the right team members on it, and then executing will give you the evidence to say, this is why we did this, instead of that gut-based hunt. >> And who do you have doing this analysis,
in terms of: is it composed of multiple different groups, from the people actually executing it, or is it a separate team? >> It's usually the analysts involved in the actual investigation. So multiple IR analysts, whoever is working on the analysis itself. If you have a team of intel analysts, it's challenging when you might only have one person working on it, obviously. So then maybe you pull in people from that same team, or in a different case, like a key assumptions check, you pull in people from various teams in the SOC, to give that different perspective and diversify the answers. Yeah. I think we have one more question.
>> What kind of tools have you found effective at communicating these analyses? You mentioned leadership as a primary consumer of them. Do you package these up as slideware? What's been effective for you there? >> In my case, I'm lucky that my leadership has a background in intel, so they're familiar with these types of techniques. So it's about walking through the output: I think the ACH is a great example of that, because having the matrix there lets you really show your evidence and then show your arguments with it. I'd also point to outputs like a short write-up of the indicators and warnings, for example,
maybe turning that into an intel product that you can pass to your threat hunt teams, with the actual IOCs and the things they need to go hunt on, but providing that strategic lens too. All right. Well, thank you, everyone. Have a great rest of your day.
[Music] Good morning, everybody. Welcome back to BSides Las Vegas ground floor. This talk is "Securing Frontends at Scale: Paving Our Way to a Post-XSS World," and our speaker is Aaron Shim, right there. >> Wow. Thank you. >> A few announcements before we begin. We'd like to thank our sponsors, especially our diamond sponsors Adobe and Aikido Security, and our gold sponsors Dropzone AI and runZero. It's their support, along with our other sponsors, donors, and volunteers, that makes this event possible. These talks are being streamed live, and as a courtesy to our speakers and audience, we ask that you check to make sure your phones are set to silent. If you have a question, you'll be
using the audience microphone that's in my hand. I'll be handing it around so that YouTube can hear you. As a reminder, the BSides Las Vegas photo policy prohibits taking pictures without explicit permission, so please do not take pictures, even of the screen. These talks are all being recorded and will be available on YouTube in the future. With that, let's get started. Please welcome your speaker. [Applause] Hello. Hello, can you all hear me? Great. Well, thank you for that warm introduction. So, welcome to this talk. Today we will be talking about securing frontends at scale, and how the lessons we learned by doing this with all of the different web apps at Google have influenced how we imagine the post-
XSS world. So, as you may have heard, cross-site scripting, or XSS, is a really hard problem to solve, but this will be the tale of a journey through tools and runtime mitigations, to show how we can get to a vision of a post-XSS world. Before we start: how many of you are security professionals? Great. Developers, do you consider yourselves developers? Great, we have a few. And how many of you have web apps at your organization that you have to secure? Great, that's what we like to see. So hopefully this is a message that you can take back to your developers, to use these tools and philosophies to make some of your devs'
lives easier, and to ship the best security we can to our customers. As a note, this is a shorter version of this talk, so we're going to move quite fast; let's buckle in. And as a disclaimer, a lot of this work is what we do on our team, but the opinions are my own, not the company's and not the team's. Great. So, to introduce myself: my name is Aaron. I work at Google in the New York City office; I've been there almost eight years. My team primarily focuses on deploying web security mitigations across the entire fleet of Google's web apps, across
hundreds of web apps and billions of users. Before security, I worked on some product-side teams, so I try to approach a lot of these security mindsets with a focus on empathy for the developer. At the end of the day, we are all on the same team: we want to ship the most secure product to our users. So how can we work together and make life as easy as possible for the developers who are our partners in this journey? A quick agenda of what we will talk about today: we're going to do a quick intro to cross-site scripting, so that we're all on the same page. We are going to talk
about our strategies for how to write more secure code at scale. We'll talk about our runtime, browser-side mitigations, so that we can complement the safe coding approach with runtime mitigations and make sure we really caught everything. And then we'll talk about an extension of these mechanisms, and how that leads to our post-XSS world. Great. So, XSS is something you've all probably heard about before, but let's recap and talk about why this is still relevant in 2025. Google runs a VRP, a bug bounty program, and this is a visual of the payouts from about five or so years ago. As you can see, half of these are web issues. Makes sense: Google has
one or two web apps; you may have used them before. And you can see in the diagram just how many of these individual vulnerabilities were some form of cross-site scripting. So this is really a massive problem for us, and we were inspired to develop techniques to remediate these at scale. Before we get into the mitigations, let's briefly recap what XSS really is. The web is dynamic by design: you want your websites to be interactive and to react to your users. What is dangerous is that, amongst all these moving pieces, data and code can get mixed up, and with more powerful APIs it's really not clear sometimes what is
data and what is code. This confusion leads to injection attacks, which can give a user more control over the site than they really should have. In this particular example, there is a bit of string interpolation we want to do in the markup, and while we assume it's a safe bit of text, it could very well be a bit of malicious markup that executes JavaScript. And if this is user-controlled, we have no guarantees that only safe inputs will make it through to the interpolation. But why is this kind of injection dangerous? Well, whenever there's an XSS, an attacker can essentially run
code in the context of a user's session. This could be anything from stealing credentials to installing malicious loggers. We really call this the client-side, browser version of RCE, remote code execution, because they share a similar concept: anything the victim was authorized to do, the attacker now can do as well. Usually when we do a proof of concept for this, we pop an alert, and you'll see an alert happen; this is going to show up in some of the code examples as we go along. So, now that we've talked about the theory, what does this look like in a bit
of code? This is an example of DOM XSS. In the code snippet, you see that we take the variable foo right out of the address bar, and then later on in the code, we insert it into the DOM. And because the user can type whatever they want into the address bar, it could inject whatever the attacker wants. Now, we might say no one really codes like this anymore, right? Especially with modern frameworks that isolate components and give you nicer APIs for refreshing the DOM. But obviously, assumptions get broken: just because a framework can abstract away the complex DOM operations doesn't mean it can prevent you from
using unsafe DOM APIs directly. And these issues are really easy to introduce: it takes a developer one bad day, or one reviewer missing a line in a review. Once introduced, they're really hard to remove systematically from your codebase. So, do we stand a chance? We think we do. Let's talk about the philosophies of removing entire classes of web injection vulnerabilities at scale. As a quick plug before we begin: a lot of these concepts are more than we have time for today. There's a blog post written by someone else on our team, on the Google security engineering blog, so feel free to check out this
article, and many others like it, on that blog. Another way to think about the protections that we deploy as part of the solution is that they are really pillars that support our complete defense-in-depth mechanism. They work together to uphold security in each aspect of the development life cycle, and we will now talk about each of these individual parts and give examples of how we add them to our engineering process. So, on to our first idea: frameworks that give you security superpowers. The big idea is that the basis of our at-scale defense against cross-site scripting comes down to relying on frameworks that do context-aware templating, have built-in
compatibility with runtime enforcement mechanisms like CSP, and are secure by default. We'll show an example of context-aware templating in a few slides, but very quickly: it means the templating system knows whether a piece of string is being interpolated as HTML, as part of an attribute, or as a script, and does the right escaping and sanitizing to make sure that context is not broken. This is really part of the safe coding philosophy at Google, and that's a term we'll use a lot throughout this section: giving developers tools that take
away the uncertainty about how to use and configure these security features, and that make these features hard to misconfigure in an insecure way. We really want security here to not get in the way of shipping great features to our end users quickly. So we want to work with frameworks that do not work against the developer as they try to write code that is vibrant and user-friendly. Hence, a framework built with these security features in mind, with compatibility with mechanisms like CSP, but also with a focus on API design and an emphasis on developer experience, is really key. And you might
say this is possible at Google because we have a lot of control over our internal frameworks: we give guidance to our in-house developers on what the best practices are, and we can control that culture tightly. But what if you aren't building web apps inside Google? How can you get some of these benefits? Thankfully, our colleagues have worked hard to ensure compatibility with these powerful security features in some of our other frameworks as well. For instance, all the work that we did in-house for Angular and Lit is also available open source, so that the open source ecosystem can benefit from the work that we've done there.
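The context-aware templating idea described above can be sketched with a minimal auto-escaping tagged template. This is an illustration only, not Angular's or Lit's actual implementation, and it only handles the HTML-text context; real frameworks also handle attribute and script contexts, each with its own escaping rules.

```typescript
// Minimal sketch of auto-escaping templating: illustration only, not
// Angular's or Lit's real implementation. Handles the HTML-text context only.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;") // ampersand first, so we don't double-escape
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// A tagged template that escapes every interpolated value, so user input
// stays data instead of becoming markup.
function html(strings: TemplateStringsArray, ...values: string[]): string {
  return strings.reduce(
    (out, chunk, i) => out + chunk + (i < values.length ? escapeHtml(values[i]) : ""),
    "",
  );
}

const userInput = '<img src=x onerror="alert(1)">';
const rendered = html`<div>Hello, ${userInput}!</div>`;
console.log(rendered); // the payload comes out escaped, not as live markup
```

Because the escaping lives in the templating machinery rather than at every call site, the secure path is the default, which is the framework property the talk keeps coming back to.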
But our contributions aren't only related to Google-sponsored frameworks. Our friends at Meta have also worked hard to make React compliant with their web security strategy, so it has a lot of goodness baked in, and we've also submitted PRs to some other React-based frameworks, like Next.js, to work well with some of these philosophies. Here is a concrete example of some secure-by-default features at work in one of the frameworks we just talked about. The code example here is from OWASP Juice Shop, an intentionally vulnerable web app designed for training and education, written using a modern web stack and modern frameworks. We'll just note that
we are using Angular here because of the availability of the code snippet, but the philosophies you see here are not framework- or company-specific; we really hope to see more of these propagate throughout the ecosystem. So, this particular component is part of the Angular template for the piece of the page that is going to pop the XSS, and here we see that there is something bound to the innerHTML attribute inside that node. It is a property, searchValue, that we're going to poke around and find in the controller, to see where it comes from. And let's keep in mind
that whatever is bound to innerHTML here will be treated as markup: it'll be inserted into the page and run as code. In the controller, we can see that the searchValue property is being set, and it depends on a query parameter, which depends on the URL of the page. So maybe there is a bit of user-controlled string being interpolated into the DOM through innerHTML here. But do you also notice this lengthy function called bypassSecurityTrustHtml? This really unwieldy name should give us a clue that something out of the ordinary is happening, and this is
the core of how Angular's API design works. It's telling us that we are bypassing security with an input that is clearly user-controlled. This is supposed to make you feel a bit uncomfortable, and it's supposed to make you check this bit of code a little more carefully. By contrast, what if we didn't have that bulky function in the way? What if we just tried to set this user-controlled string directly on the property? Then Angular's auto-sanitizing behavior kicks in: since we know from the template that this is being bound to the innerHTML attribute, it needs to
be treated as a bit of markup, so Angular will apply the default HTML sanitizer to the value before putting it in the page. The XSS will not happen if the code is written this way. And this is really the core of the secure-by-default framework strategy: doing the secure thing is easy; making the mistake is hard and uncomfortable. As you can see here, you can read more about the best practices in the official Angular documentation, which goes into a bit more depth than what we're covering here. And we want to stress again that, while we use Angular as an example, since this was the code example we had, none of these
philosophies are locked to one framework, and we'd really like to see more frameworks adopt similar approaches in the future. Great. So now we move on to our next idea. We have the building blocks, the frameworks, secured; now we want to shift even further left and think about the application code being written by our developers in-house, and how to enforce some best practices there. At Google, we have a monorepo where all of these different web apps live, and in this monorepo, all of our JavaScript passes through what we call a compilation pipeline. It's
like a giant bundler and linter, and it allows us to insert ourselves there for static analysis, linting, and checking for best practices. At its core, what we call conformance is the idea that we want to automate, with a technical control, the checking of the things a security engineer might call out in code review. At Google, it extends to more than just security features: it covers other concerns that maybe aren't top of mind for most developers, like performance, accessibility, code health, and so on. It's really a way to make best-practice enforcement sustainable at scale. Now, there isn't a perfect analogy for conformance
outside of the Google monorepo, but you can get the best of its effects by combining tools that are readily available: TypeScript, which does type checking and already corrects a lot of the really silly type mistakes we used to have in JavaScript, and ESLint, to enforce best practices, coding standards, and certain code patterns with more AST-based analysis. You can insert any of these into your CI/CD, like presubmit hooks, to make sure you don't push code without running these checks first. And we'll also talk about some of the tools that we've written on our
own, which offer some of these benefits through open source. Here's one thing you can use today: we took the most effective checks from our internal linter and static analysis tooling and built them on top of ESLint, so you can run it as part of CI/CD, your IDE setup, and so on. The project is called safety-web, and it's under active development. As you can see in the example here, it'll detect unsafe or risky APIs, like the innerHTML assignment in the code, and call them out. If you have it as part of
your IDE setup or CI/CD pipeline, it'll yell at you as you're writing the code, so that while you still have the context fresh in your mind, you can go refactor and fix it. But this raises a question: once these tools warn you about dangerous code patterns, how do we refactor? That's our next big idea: how can you shift even further left than conformance? Can we make good security decisions an effortless part of the development process itself? Our potential vulnerabilities start with code being written by developers, so we should be able to shift even closer to the moment developers are writing the code, right? And I think
it's easier to show this philosophy in action with a code example, so let's walk through this motivating example. Here's an example application with three different sections. I know they're all on the same page, but you can imagine they're in very disparate parts of the codebase, maybe work done by three different teams. In this application, let's say we accept user input and then try to format it and display it back to the user. So we have three layers: a storage layer, where the piece of HTML that you don't trust enters the system and gets stored; a formatting layer, which tries to do the formatting and
create new markup; and the actual browser layer, which runs on the client side for your users and actually executes the code. Here are the three locations the user input passes through. It's obvious here, because it's all on the same page, that user input is passing through; it's much less obvious if these three lines are very far apart from each other in the codebase. And obviously, with an error message like that, it's going to pop an XSS. So let's think about how we can fix these three lines. Should we be fixing it before we store it in the storage layer? Should we try to fix it as we
format it? Should we try to make sure that nothing dangerous gets inserted into the DOM at the browser runtime? Should we be doing all of them? And if these three places are work done by different teams, how do we ensure that at least one of them has treated the input? Here we rely on the TypeScript type-checking system to offload some of that mental work for us, by creating a wrapper type called SafeHtml. We want the TypeScript type checker to verify that we are essentially only passing safe input through. Let's see how to actually correct the compiler errors this will create, because now
we've changed some of the strings to SafeHtmls. But before we do that, let's talk a little bit about this wrapper type. The important thing here is that there's really no equivalence between a string and a SafeHtml. As you can see in this example, if you want to make some markup by interpolation, there is no guarantee that interpolating a random variable like name is going to end up safe, because it could just as easily be this, and we'll pop another XSS. So, of these constructors, which ones might guarantee a safe construction of new markup? If you said the first two: those are not safe. That's because the first one
is trying to equate the string type with the SafeHtml type, and that's not good, right? We want to create a new type so that the TypeScript type checker can check for it. The second one is a little more nuanced, but it's also not great, because it's just a wrapper: we want to make sure that some transformation happens to the string, to make it safer markup, before we turn it into a SafeHtml. And that is what these last two are doing. Great. So, back to our code, now with the correct TypeScript type annotations: will it compile, and how do we fix it? Well, we could fix
it there, but that might not work, because you want to sanitize rather than escape: if you just escape, you might break the markup you are making. And as you can see, we also have to use a special wrapper on the browser side, so that the code there doesn't use the raw API but asserts that it is receiving trusted markup before inserting it into the page. Great. So now we have one more pillar to talk about, the runtime enforcement, but we are running out of time, so we'll just do a very quick review of CSP. I'm sure enough of you have heard about CSP before that we can move a little
faster through it. The idea of runtime enforcement is that, while these safer-coding methodologies help us write safer code, at runtime you also want to block anything that may have gotten through, because JavaScript is infamously dynamic. I'm going to fast-forward through a bunch of the slides so we can get to the conclusion, but essentially, we recommend that when you set up a CSP, you use nonce-based CSPs: rather than URL-based allowlists, this is a lot more granular and less prone to bypass. We also use 'strict-dynamic' to propagate trust. Another cousin
of the nonce-only CSP is the hash-only CSP, which is great if you don't want to do the back-end and front-end coordination to pass the nonces around. It's also great for static single-page applications, because during your bundling process you can transform the page into an auto-hashed one, with meta tags containing the hash-based CSP. We even have libraries that do this for you. We also have Trusted Types, which is a formalization of the idea we showed before with the safe coding example and the SafeHtml type, but in the browser, to make sure that you don't have assignments
that turn strings into markup on the page, because there are a lot of different DOM APIs that can essentially do that. Great. So I'm going to skip a couple of slides to the end, where we can talk a little bit about this and wrap up. We saw how these steps interact to form a web of protections, but instead of pillars, we can also think of it as a pipeline, where code flows from the devs to the environment running in the users' browsers, and at each step of the way we have a
different protection to catch what hasn't been caught in the step before. The real insight here is in the runtime enforcement, CSP and Trusted Types: what if we lock it down even more? What if we lock down the runtime enforcement so that, instead of just being the last layer to catch whatever is left over, we have a guarantee that dangerous APIs are completely turned off? And then, what if we use that as a basis to ship new APIs in the web platform that feed back into the first part of this pipeline? This is the future we hope to get to, where we can safely lock down
access to all of these dangerous DOM APIs, along with a suite of new platform APIs to replace those operations in a safe-by-default manner. We call this approach "perfect types," and we see it as an evolution of Trusted Types and the safe coding approach. Well, thank you. If you're interested in this topic, please join the conversation in the W3C SWAG group, where we discuss how to further the topic of web security amongst developers. Thank you for coming to this talk. >> Thank you, Aaron. Thank you so much. We're done, guys. If you want, you can come up and ask
him questions or something. >> Can I just unplug? >> Oh, can I unplug? >> Yes.
Give it 10 more seconds and then I will start. Well, until you give me the sign. That's good. I can start? Test. Yes. Okay. Good.
Welcome to my session on eliminating bug classes using browser security features. I hope you are as excited as I am about this topic. The purpose of this talk is for you to get new ideas, maybe learn about concepts that you have not seen before, and think about how you can apply them in your own workplace, especially at scale: deploying such browser security features across a big landscape of applications, across multiple products or multiple services that you own. So let's kick this off. We are going to talk about common web security flaws. I will show you the latest version of the OWASP Proactive Controls. I will show you a case study from Google: I found this when I saw their research paper about security signals, or web signals. I was so impressed that I wanted to bring this to my own organization and apply the same concept, and that's why I want to show you how they implemented it. Then finally, we will talk about some of those new, modern browser security features for defense in depth. Like I said, I also have a four-hour workshop version of this, so it's very difficult to put everything into a 20-minute presentation, but I tried, and I hope you are going to like it. Of course, there's much more to consider, especially rolling it out at scale. The reason we're having this talk is mainly that I thought these topics haven't surfaced that much yet, like moving to a strict Content Security Policy, but also other very modern browser security features which are still in development and not yet supported by all major browsers. It's very worthwhile to look at this development and think about how you can already deploy them, because they already provide great value. I wanted to bring more attention to these types of features, and generally to moving from a reactive mindset to a proactive one.
A few words about myself. My name is Yavan. I'm based in Germany. I work in the application security team at Sage. On the side, I lecture on secure coding. I'm passionate about web security, Raspberry Pi projects, and home automation. I used to do software development, web and mobile, then became a pentester in a consulting type of role, and I joined Sage five years ago in the application security team, where I'm now focusing again on scaling and building things. There was some change in our mindset about how we approach application security as well.
So before we start, let's take a look at the common web security flaws that we have. There are many top-10 lists you can use, but let's take one: the HackerOne top 10 of the most reported and most rewarded vulnerabilities on their platform. You can see number one is still cross-site scripting, which is, yeah, a little bit insane, because it's a very old vulnerability, and we still don't really manage to fix it properly; it still surfaces. I also circled some other types of vulnerabilities which are covered by some of the controls we are looking at during this talk. So let's take a look at why cross-site scripting is still one of the most rewarded and most found vulnerabilities. First, I think many organizations still struggle to implement a strict Content Security Policy, and also find it challenging to locate all the vulnerable spots where they need input validation and output encoding. Second, attackers still find very innovative ways to bypass those defenses. For me, this is proof that we need layered defenses, multiple defenses, and we will look at the browser security features we can use for that. So even if you forgot to do proper output encoding, you still have a strict Content Security Policy which then blocks that cross-site scripting attack.
So why do those vulnerabilities still persist? First, we have very complex systems: lots of connected APIs, many applications. (I don't know, do I need to just continue talking? Okay, that's good.) Second, we have a very reactive mindset about our vulnerabilities: we fix occurrences, like a new issue that pops in from a bug bounty report or a pentest report, and we only patch those symptoms. And maybe third, we have limited automation, so it's hard to scale those security practices across a landscape of many applications or services. I compare this to a whack-a-mole game that we are playing: we have this constant emergence of new issues, we are reactive rather than proactive, and with that we have a never-ending cycle. It becomes very stressful and resource-intensive, and of course you then don't have time to properly look at the root causes, and no time to implement proactive controls.
So then I wanted to show you the latest version of the OWASP Proactive Controls, which is a top-10 list of very good practices. Of course, numbers one and two are all about input validation and output encoding, the usual practices that we already know and apply every time a vulnerability comes in from a pentest report or some static analysis tool. But I wanted to show you number eight, which got added in the last version, last year: leverage browser security features. So we now see there's indeed something happening there which we should take a deeper look at during this talk. For me, this also means we need defense in depth, and this is basically how we scale vulnerability elimination, because we won't be able to patch all the input validation or output encoding gaps; maybe we just won't be able to, and that's why they still surface. So we need this defense in depth, and here are a few examples of how you can start. The first one is session hardening: you add an HttpOnly flag to your cookie, so that if you have a cross-site scripting vulnerability, JavaScript cannot access that token or key from the cookie. There are multiple defense mechanisms, and the last one is browser-enforced policies, which is about having a secure, strict Content Security Policy; just a few examples of these defense-in-depth mechanisms.
Now I wanted to show you the research I was talking about initially, from Google. You should really look it up, download the paper, and read it; it's very, very good. It's called "Security Signals," and I'm going to show you what they actually did in this paper, because I use it as a blueprint, a template, to work similarly to how they approached it. I mean, it's from Google; they kind of own Chrome, they worked on these standards, they worked on Content Security Policy version three, and they worked on those new security headers which we will talk about, like the Sec-Fetch metadata headers. So of course they also have a very nice approach of not only adopting these at a certain scale, but also measuring the adoption at a certain scale, and that is what I found interesting if you work in a large enterprise.
So let's take a look at the challenge. The challenge is, of course, adopting those security features in a very large-scale web ecosystem. How they approach this, the solution, is quite easy, because most of the time you already have a reverse proxy like nginx in front of everything, or a commercial offering like Cloudflare; you basically already have your man in the middle. You can inspect traffic there and put up rules: not only setting those security headers, but also checking whether they exist, and you can apply that in one place for everything. That's basically where this works: at the proxy level. They use what they call synthetic signals; it's about making things measurable. For each service they check, for example: is a Content Security Policy present? Is CSRF protection present? Are the Sec-Fetch metadata headers being checked? (I will explain how those work a little later.) But it's also interesting to ask: do I have a legacy application, or a modern framework? Maybe the modern framework is already doing output encoding by default. It gives us very good indications which we can inspect at the proxy level. Finally, this is a screenshot from that paper, but I found it very useful to show you how you can make it measurable: this is a scorecard which you would have for every service, and here they show, okay, we have a Trusted Types header set, we have some other headers defined, and this is just the outcome of those metrics. Unfortunately, there's no commercial product, and this is not open source; we have to build this ourselves right now. So vendors, please listen. But it's a very good, innovative idea.
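To make the scorecard idea concrete, here is a minimal sketch of a per-service header check as a reverse proxy could run it. This is our own illustration, not Google's actual tooling: the header names are real, but the signal list, scoring scheme, and function names are invented for this example.

```javascript
// Score a service by which security headers its responses carry,
// the way a reverse proxy (nginx, a Cloudflare worker, ...) could.
// The signal list is illustrative, not the paper's exact metric set.
const SIGNALS = [
  { header: "content-security-policy", label: "CSP present" },
  { header: "strict-transport-security", label: "HSTS present" },
  { header: "x-frame-options", label: "Framing restricted" },
];

function scorecard(responseHeaders) {
  // Normalize names, since HTTP header names are case-insensitive.
  const lower = Object.fromEntries(
    Object.entries(responseHeaders).map(([k, v]) => [k.toLowerCase(), v])
  );
  const checks = SIGNALS.map(({ header, label }) => ({
    label,
    present: header in lower,
  }));
  const score = checks.filter((c) => c.present).length / checks.length;
  return { checks, score };
}
```

Running such a check across every service in the fleet is what turns adoption into something measurable, which is the point the speaker highlights.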
So let's talk about those browser security features which I just mentioned. But before I do, I wanted to show you a cross-site request forgery vulnerability. For those who don't know it: imagine you're browsing the web, and suddenly a request is made to another website where you're already logged in, and an action is performed. You can see that image tag; this is an example of how something like that happens, because browsers tend to send cookies automatically with each request if you don't have the SameSite attribute set on those cookies. An attack can look like this code snippet as well. But now there's something new: the Sec-Fetch metadata headers. They were introduced to all major browsers in 2021, so you can start using them. It's another defense in depth against cross-site request forgery attacks, because now you can start validating, server-side or at your nginx proxy level, where a request came from. To make this clearer, here is an example: this is what a curl request to example.com looks like; you just have this small bunch of headers added to the request. Now compare that with how the request looks if you open example.com in your browser with the DevTools network inspector open: you can see those Sec-Fetch headers. They give you very detailed, very good information about where the request came from, the target origin and the current origin; basically, is it a cross-site request? It essentially already tells you, okay, this is a cross-site request, so all we need to do is verify that header and look at its value as a defense in depth, and this is very easy to deploy. You can just go into your nginx config and write that rule; I think this is how the rule can look. Very powerful. Has anyone already noticed these headers when looking at requests in Burp? And did you know what they do? Yeah, so that's the good thing. I always skip something; which slide are we on? Okay. Now, talking again about how they work: imagine you have site.example, and some JavaScript sends a fetch request to site.example. You can see the header telling you same-origin. But if evil.example makes a request to site.example, it tells you cross-site, and that's how you can then block that cross-site request forgery. This works really well at scale. Now, let's talk about content security
policy. You can see here an example of a content security policy. Who has a content security policy deployed for their applications? Ah, I see a lot. Who thinks they have a secure one? You think so? That's the difference, isn't it? Moving to a strict content security policy really is going to eliminate a lot of cross-site scripting vulnerabilities. I want to take the example of Google again here, because they have a strict content security policy for Gmail, and they increased the bug bounty for Gmail to $50,000 just for a cross-site scripting vulnerability, because they hadn't seen a single cross-site scripting report for over a year. That shows you how effective this can be if you move to a strict content security policy. But it really is a difficult balance to deploy; as a product security engineer, that is often the biggest challenge. How can I deploy the CSP? How can I strike the right balance between marketing and all the other content hosted on my website, without breaking functionality? The key is to start with a report-only content security policy, monitor the violations you get, and then move iteratively to a more secure policy. Here I created five steps for how you can do this. Like I said, you start with a report-only policy; it can be a very strict one. You generate some violations, which you monitor, and then you address those violations, or you refactor some code. With the latest versions of Content Security Policy you also have the strict-dynamic directive, which makes it easier to deal with all those marketing scripts and third-party frameworks which you unfortunately have to embed on some websites. So it has gotten easier, and you need this mix of reporting and iterating while moving toward a strict content security policy. Of course, there will be some automation you need, and this really depends on what application you're dealing with: is it a legacy app, or a modern front end, an SPA? Because you could also generate some of the directives or rules in your content security policy automatically during the build, for example calculating hashes or adding nonces to the scripts you have on the page.
Then, finally, another modern browser security feature I wanted to show you is Trusted Types. Has anyone heard about Trusted Types? Ah, good. It's part of Content Security Policy; you can see here the rule for how you enable it. It's actually very powerful, for example against DOM-based cross-site scripting vulnerabilities. This is very helpful if you work with legacy applications where you maybe have lots of DOM-based cross-site scripting vulnerabilities and you don't want to refactor everything, because Trusted Types works like a client-side sanitizer, or you can use it as one. Unfortunately, it's not yet supported in Firefox, but they're working on it right now, and you can already use a polyfill, so it would work there too. To show you the concept: you have those risky APIs like innerHTML, outerHTML, eval; those are most of the time the source of a DOM-based cross-site scripting vulnerability. But now you can start writing a wrapper around them: you introduce a policy that uses a secure API like DOMPurify, which removes all malicious content from the markup, and that way you can also eliminate DOM-based cross-site scripting, and it's easier to fix your legacy stuff by just using that. Of course, maybe you don't need it if you're already on a very modern framework, but most of us aren't, and this is about deploying it everywhere. Here is an example: before, in the vulnerable version, you assign to innerHTML directly; but now you introduce that policy, which is the wrapper that uses the DOMPurify library, and then you have a safe function. That way it's easier to refactor once you set up some policies, and this can also be used in other situations. So it's interesting that there's now a client-side sanitizer happening in the browser.
I wanted to finish this presentation with a quote from Freddy B from Mozilla. He was saying: web security is an increasingly opt-in approach, leaving developers with both the opportunity and the responsibility to protect their applications. Everything I just showed you is about opting in, because we can't break the web; they can't ship something that changes the standards out from under existing sites. And thinking about cross-site scripting, one of the oldest vulnerabilities we have on the web: maybe it's a design flaw from the beginning, but we can't change it; we can't break the web.
So, some takeaways. Let's shift from reactive patching to proactive security. Think about how you can implement these in your organization, because it can be a real game changer, especially if you have one application with one cross-site scripting vulnerability; in my experience, depending on the time and effort you put in, you will be finding more cross-site scripting vulnerabilities in the same application, so having a content security policy is great. Adopt those secure-by-default principles and headers, recognize the power of those browser security features, and with automation you can also scale; you saw the example of how Google did this at the reverse proxy. Let's start doing this; commit to bug class elimination. Yeah. Thank you for listening. I hope you enjoyed it.
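A minimal sketch of the DOMPurify-backed Trusted Types wrapper described in the talk. The policy name and the injected createPolicy/sanitize parameters are our own illustration to keep the sketch self-contained; in a real page, createPolicy would be window.trustedTypes.createPolicy and the sanitizer would be DOMPurify.sanitize, as the talk describes.

```javascript
// Wrap the risky innerHTML sink behind a single Trusted Types policy
// backed by a sanitizer, instead of assigning raw strings directly.
function makeSafeSetter(createPolicy, sanitize) {
  const policy = createPolicy("app-sanitizer", {
    // Every string destined for an HTML sink goes through the sanitizer.
    createHTML: (dirty) => sanitize(dirty),
  });
  // Drop-in replacement for `el.innerHTML = dirty` call sites.
  return function setSafeHTML(el, dirty) {
    el.innerHTML = policy.createHTML(dirty);
  };
}
```

With `require-trusted-types-for 'script'` in the CSP, direct raw-string assignments to innerHTML then throw, so legacy call sites are forced through this one auditable chokepoint.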
Good. Are there any questions? It's fine; I'm also around afterwards. Good.
>> Do you have tools to craft CSPs? Because they are hard to craft.
>> Yeah, good question. I can repeat it: do I have tools to craft CSPs? Yes, it's very difficult; it's a challenge, and it's also not a big topic for a lot of people. There is an add-on from April King called Laboratory; it's only for Firefox, but it helps you generate a CSP. As for crafting a CSP with automation, you actually need your own scripts in your CI/CD that calculate hashes or add those nonces, so I only have custom-built stuff for that approach. There is some tooling out there, but it's difficult. Another very good tool that will help you is the Google CSP Evaluator, which also gives you feedback about the safety of your CSP. Okay.
>> Yes.
>> You wrote it.
>> Ah, you wrote that. Amazing. It's good.
>> Yeah, you wrote it? Okay, amazing. It's good.
>> Good. I will also be in the hallway if you have any questions. So, good. Thank you. Yeah.
Hello. Hi everyone. Thanks for coming out. I appreciate that you made it to our talk even though it's after lunch and it's day two and you're here. So, it feels like you're already exceeding my requirements. Um, so we're here to talk about Vibe check, the dark side of vibe coding. My name is Megan. This is Chloe. Uh I'm going to walk through a quick agenda um of what we're going to talk about today and then we'll jump into it. So first we're going to talk about what is vibe coding in the sense that I think there are a lot of tooling, there's a lot of language and there's a lot of different definitions for how we talk
about these tools. And so we want to start with sort of our framework and our definitions for how we define vibe coding versus other AI enabled or AI assisted coding. So we're all speaking the same language. um and also clarify who we're talking about in these situations given that I think there are a lot of different uses for the different people who are using these tools. Um then we're going to do what we're calling our vibe check which are what are the risks of these tools? Um what are the concerns that we have with them? What are we seeing in the environment? What does this look like? Um and then where are we going? Um which
is sort of our vibe shift. We're going to try to stay on brand today. So thanks for coming along and uh acknowledging that we're trying to be funny here. Um, so my name is Megan. As I said, I worked in cyber security for a while. I started out my career in threat intelligence. Uh, then did some product security and security architecture and now I work in cyber security for financial services. And I'll pass it over to Khloe to talk a little bit about her experience. Hi everyone. My name is Khloe Pottsland. I'm based in New York and I'm a researcher at Reach Security. Uh, we're sponsoring uh a karaoke tonight, so I hope to see all of you at
8:00 PM. Uh, but Reach Security is a platform that help you find and address hidden risk. Um, a bunch of my colleagues are all over Vegas this week, so I'm sure you'll see our name. Uh, prior to Reach, I was working at a major media company doing endpoint and architecture security, and I started out my career at Deote doing cyber risk consulting. All right, so let's let's talk about vibe code. How do we define it? What does the current tooling landscape look like? and who is vibe coding? Uh, so I'm sure at this point everyone has seen this tweet uh back from February from Karpathy. I know it felt like it was an overnight it was an overnight shift
where all of a sudden we had a term for something we've all kind of been doing, or experiencing, or seen people talk about. What I really like about the way he expresses it is this idea that you forget the code even exists. I know there are a bunch of different definitions of vibe coding, but for the purposes of this talk, we're going to define it as no-code coding: you're using an LLM to generate code for an app and you're not reviewing that code at all. Vibe coding is kind of on a spectrum of how much you understand the code; the more you understand, the less you're vibing. It's really this idea of fully trusting AI to generate code without manual oversight. And that's the big thing we want to separate here: vibe coding versus AI-enabled or AI-assisted coding. When we talk about vibe coding, we mean you are not reviewing the code. Regardless of whether you're able to review the code, or have the technical expertise to, you're not reviewing it. It throws an error, you toss it back into the LLM, and you hope it fixes your problem. AI-enabled or AI-assisted coding we think of more as pair programming with the LLM: I'm still testing, I'm still debugging, I'm still using the tools to do so, but I am actually reviewing the code at the end of the day. So just to be clear about the way we talk about it, that's the big difference for us. And a couple of analogies that I've encountered and really like when it comes to vibe coding: think of a chainsaw. Amazing technology in the hands of someone who knows what they're doing; it lets you chop wood five times faster than an axe. But in untrained hands, it's responsible for up to 30,000 injuries in the US every year. And also when we were
talking about vibe coding: you use AI to produce something, you get an error, and you fix it with AI itself again. It's this idea of using a credit card to pay off another credit card. Okay, so let's zoom out for a second. Vibe coding really didn't come out of nowhere. I would argue it's part of a long arc of abstraction in how we build and interact with software. We started with physical hardware, but quickly realized we needed more user-friendly ways to interact with it. So we developed compilers, interpreters, and text editors; then came IDEs and GUIs; more recently we got web frameworks and low-code; and now, finally, vibe coding, which is this idea of using natural language to interact with code. Each wave abstracts more complexity away from developers, making software creation more accessible but also sometimes removing guardrails. Vibe coding is just the latest step in that direction. We also wanted to classify the different platforms and tools we've been seeing. For instance, we have the vibe coding platforms that have gotten really popular over the past couple of months, like Lovable, Base44, and Replit; some databases folks have been using, like Supabase and Firebase; and other tools besides. But what we also want to stress is that this is just a snapshot of a moment in time. A lot of these categories might change, new tools might come up, and the ecosystem is really changing. We also want to talk about who is even vibe coding. I'd imagine everyone in this room has experimented to a degree; I know I have. It spans from non-technical folks who just want to experiment with this technology, to senior software engineers augmenting their work, to security researchers like ourselves messing around and trying to figure out what output and prompt injection risks could look like. We have founders who want to validate ideas really quickly, and executives who think this tool in the
hands of their employees could get them to work even faster. We also have students and junior devs learning through iteration. But of course, because we're at a security conference, we have to call out that bad actors are equally messing around with these tools. Now I'll hand it back over to Megan to give us a vibe check. So what are the risks, and how do we see this? We're going to categorize these in three ways. First, industry-wide shifts. Thank you for coming to the philosophy portion of this talk, which is basically: how do we think these things are changing the industry, both cybersecurity specifically and tech more generally? Whether or not we can do anything about the ways in which it's changing, talking about and acknowledging the things that are changing is important. Then we'll talk about inherent flaws in AI tooling. I like to think of this section as how things can go wrong when you're using the tools correctly and there is no malicious intent anywhere; that's everything from model hallucinations to vulnerable code being generated. And then I'll hand it back to Khloe to talk about bad actors and the kinds of attacks we've seen on LLMs, both attacks on the tools themselves and attacks that leverage them. So, the first thing I want to talk about
is industry-wide shifts. The first is the most obvious. I like to call it the blockchain test, because there was a hot minute a couple of years ago where everyone was like, "I put the blockchain in my thing," and there was not a lot of conversation around "did I need to put the blockchain in my thing?" Every new tool wanted to say it, and 90% of the time I think we could have used a database and it would have been better and cheaper. We've sort of seen that go away. I don't think AI is in the same category, and I don't want to come across as too anti-AI, because I do think these tools are here to stay. I think they're fundamentally shifting the way we think about technology and the way we think about security. But I also think we're almost too excited as an industry about this, and sometimes we're using these tools in ways that don't make sense. So we should start asking: I did use AI for this. Can I use AI for this? Does this make sense? I was talking to a vendor a couple of days ago that I won't mention, but they were like, "We have this cool AI-enabled tooling." And I was like, "Awesome. How are you using AI?" And they were like, "Well, our analysts are using it to write the emails we send to customers." And I was like, "Well, that's not exactly the same thing." I don't know that I would argue that's the best or most productive use of AI. So I think it's worth reckoning with how we talk about these tools, how we talk about machine learning versus LLMs versus true artificial intelligence, and how we reckon with these things as an industry. That leads me to the next point: how effective are these tools, actually? That's an interesting question we haven't really reckoned with yet. I think the
way we have traditionally measured developer output has often been based on the amount of code someone generates, or the number of pull requests, things like that. We're going to have to start shifting the way we measure productivity, because these tools are really, really good at generating a ton of code. That doesn't mean all of it is great quality code, but they're good at generating it. So it's interesting that we're starting to see studies that try to measure how effectively people are using these tools. There was one by METR that came out a couple of weeks ago where they had a bunch of folks use AI tools for a bunch of different tasks, and at the end they asked them how much faster they felt they had been. The average answer was roughly "I'm 20% faster." Then they measured what the devs actually produced against a control group and found they were about 20% slower than they thought they were, and slower than they would have been without the tools. Part of the reasoning I found interesting: when they talked to the devs, the devs said, basically, "I feel like I'm moving faster because I type in the prompt and then I'm scrolling TikTok while I wait for it to give me an answer." There's a lot happening, and they don't feel like they're doing as much focused work; whereas if they'd just done focused work without breaks, it might have felt longer. So even though the task as a whole takes longer, they feel more effective. And I think that's an interesting mental shift, because it might mean that for a lot of us, we become sort of overseers watching over the AI, like a bunch of
assistants, or your worst summer intern. And I think that's an interesting shift to start thinking about. The other interesting thing is that a different study found generative AI did increase developer speed, but less so as the complexity of the task increased. This at least anecdotally makes sense to me and to a lot of the people I work with: AI is generally very good at the low-level, well-bounded, well-defined task. I needed to write a script: great, can you dedupe this list in Python using a set? Excellent, go. And it does it. But then I say, hey, can you fix this big complex system? All of a sudden it gets much harder. What's also really interesting, which brings me to my next point about junior devs, is that this held up for more senior engineers, but for junior devs, the same study found that using AI tools actually made them seven to ten percent slower. And again, all of this, as Khloe said and we'll continue to say, is very much a point-in-time analysis of where we're at
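As a quick aside, the "dedupe this list with a set" task just mentioned really is the kind of small, well-bounded problem these tools reliably nail. For reference, a plain hand-written version (our own sketch, not LLM output) is only a few lines:

```python
def dedupe(items):
    """Remove duplicates from a list while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:  # set membership is O(1) on average
            seen.add(item)
            result.append(item)
    return result
```

For example, `dedupe([3, 1, 3, 2, 1])` returns `[3, 1, 2]`. Tasks this self-contained have a known right answer, which is exactly why LLMs do well on them.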
with these tools. The talk we would have given six months ago would have been very different from the talk we're giving now, and the talk we give six months from now will be very different again, because these tools are growing and changing very quickly. But where we are now is that these tools are very good at solving things in a vacuum, and bad at understanding large contextual decisions. In many ways this is particularly relevant for people who are junior or who are students: the types of problems you solve for interviews, the LeetCode problems, even the problems you solve in school, are the kinds of problems AI is really good at solving. They're problems we already have the answers to. They're very well bounded. And they're also, in many ways, very unlike the problems you solve at work, where I walk into a system and there are 50 things connected, it's in a bunch of different languages, and there's this one piece of code over here that was written by Scott 20 years ago and everyone says, "Don't touch that piece of code. We don't know what it does, we don't know who wrote it, and we can't fix it if it breaks. So please leave it alone." And I think those
are the kinds of problems AI struggles with, because it often lacks contextual understanding of these systems, so it's harder for it to fit in and figure out how to solve them. And that's interesting because, the way we've set up the system, we incentivize junior devs and students to use AI tools to replace critical thinking skills. In school it makes sense: it's easier to get the answer, and the answer is going to be right. But I think we lose something in doing that: you then land in a complex environment and struggle to break down and solve a problem, because you skipped the step where you learned how to do that. When you're in school, or learning to code, or however you learn, you're not really trying to solve the problems. You're trying to figure out how to think about problems in a way you can break down, so you can solve much bigger problems later. It's why we all have to learn to implement algorithms that have been around forever: you do it once so you understand how we got there, and then you can use them effectively. And
so I do think that's an interesting thing. I think we're incentivizing students to use these tools in the wrong ways, and that varies by person: some people are using these tools to help learn, and some are using them to replace critical thinking, and the second one scares me. The best example of this is how we think about mainframes. So many financial services firms are still using mainframes, because in many ways they're still the most effective tool for a very specific set of circumstances. They aren't going away anytime soon, but we've stopped talking about them and we don't train people on how to use them anymore. When I've worked at firms with mainframes, one, I had to literally read the [ __ ] manual to learn how it worked. But the other thing is that we always had someone on call, a consultant, usually close to retirement age, and we would pay Bob to come in once a week, and Bob would touch the mainframe, and we would hand Bob a really big check and beg Bob to come back next week, because nobody else knew how to write that language anymore. And I think that's potentially something that could happen here: if we transition to a world in which we're writing all of this code via AI and not understanding it to the same degree, we end up having to employ a generation of Bobs to come in and ensure these things continue to run, because maintainability is a really interesting piece. The problem now is that we're getting to the point where the tools are good enough that they're starting to replace developers at firms that can't afford to hire developers. Small businesses, or, where I really worry, infrastructure: your water utility, your power utility, all of these places that
were not paying high dev salaries to begin with, see this as a great opportunity to cut costs, but are putting some really important systems in the hands of AI tools. Maybe someday we'll be at a point where I feel great about this; I don't think we're there yet, for a lot of reasons. And that's a scary outcome, because when something goes wrong, and when it goes wrong it usually goes catastrophically wrong, you're going to have devs with no knowledge of the system coming in to fix something they didn't write, and that, in fact, no human wrote. All right, so all of this sounds very doom and gloom. I don't think this is entirely a bad thing. As Khloe just walked us through, I do think to some extent this is just the next iteration. We used to write machine language; we don't do that anymore. We have compilers and interpreters and GUIs and IDEs, and all of these things are very useful, and I'm sure there were people who were very doom and gloom every time one of those things came out: "Well, if you're using a compiler, you're farther away from bare metal, and what are we doing?" So I do think to some extent this is inevitable, and it's the right next step. It opens up a lot of accessibility and makes this more usable, and I think that's ultimately a good thing, even if it does insert another piece into our stack, which then has the opportunity to be hacked: we're adding another piece of software, and it is abstracting us one further layer away from the bare metal. All right. Now I want to briefly talk about three things, none of which we can do anything about, but which I think we should be talking about, or at least acknowledging. The
first is regulation, which I think is unlikely to come up in the States at a federal level. If it's going to come up, it's probably going to be in Europe, or potentially in a state like California. That's because, one, we have a federal government that is not a big fan of government regulation right now, and, two, we have a lot of lawmakers who don't seem particularly interested in learning how these tools work, while the lobbyists are mostly employed by the tech firms pushing the envelope. So I don't think this is likely to be a large industry shift in the next couple of years, but eventually it will be, maybe within five to ten years. As for energy output, it's a hard one to measure, because most of the data we're getting on energy use comes from the same companies putting out these tools, and they have incentives not to give us that information. But there's a study from the French firm Mistral that came out a couple of weeks ago, and they found that basically one query was equivalent to roughly ten seconds of streaming time in the US; in France, it's 55 seconds. Don't think too much about our energy usage. I think this is interesting because it shows the average query isn't significantly adding to our environmental output; what is, is the fact that the vast majority of the energy goes into training these models. And as we're in this AI arms race, everyone trying to come out with the newest hot model, we're training a lot, and using a lot of energy and a lot of water and a lot of materials. Thinking about what the cost of these tools is, is worth
doing, and I also think it's worth thinking about what that means for the price of these tools, which brings me to cost. My theory is that we're living in a bit of a venture-capital-backed bubble. In the mid-2010s we had this fun time where venture capital was backing all of these ride-share apps and food delivery apps, and all of those services were very cheap because they were trying to build market share and brand loyalty. Now we're seeing things like DoorDash and Uber become much more expensive, because they have to give those investors a return on their money, and we're all already bought into the ecosystem. I think the same thing is happening with these tools. Anthropic came out with a statement last week that was essentially "we're going to start charging people a lot more to use these tools," and I think that's going to continue. Right now all these tools have a free or cheap version; we're going to start seeing those go away. We're already seeing groups like OpenAI partner with elementary schools and come out with education-specific models; they're starting to partner with universities, and eventually with enterprises. I think it's likely we'll see the same kind of arms race where they're looking to build brand loyalty very young and then bring you along, so you're stuck on the platform as you move forward. So I think these tools are going to stop being so cheap. One of the reasons this will impact us: at least when I use these tools, and anecdotally what I see a lot of other people doing, we're either using these tools in a stack, where maybe I'm using Claude for this and ChatGPT for that and a third tool for something else, or I'm creating code with one tool and having another tool run security scans on it, because I appreciate the balance of different tools. I think that's going to become a lot more difficult as these tools get more expensive. It's going to force us to specialize in one of them, which will leave us more subject to the flaws of that specific tool, and we'll get less of this balanced approach. All right, done with my doom and gloom;
we're done with the philosophy portion of this, thank you for staying with me. Now let's talk about the inherent flaws in AI tooling as a whole. The first is bad training data. We know that a lot of the code on sites like Stack Overflow is insecure. It used to be the case that seven of the top ten CS programs in the country did not require any kind of security class to graduate with a CS degree; as far as I'm aware, that's still the case. For the most part, as we train devs, we don't ask them to think about coding securely; a lot of times that's something you learn afterwards, if you learn it at all. So, on balance, a lot of the code online, a lot of the code we're training these models on, is insecure, and then we act surprised when they produce insecure code. And as vibe coding continues to increase, we generate more code with these AI tools, dump that code online, and the models train on that data, which reinforces the problem and becomes this feedback loop where it's really hard to get them to generate secure code. Now, I do think this is getting better, because sites like Lovable and Base44 are promising to do more. They're doing security scans now: they scan the code and promise they will produce secure apps, which I think makes sense to people, especially those who don't know a lot about security or coding. And I do think these are good things, and they're doing a lot of good as far as low-level security scans go. Where these start to fall apart is that they're not considering architectural decisions or contextual understanding. They're not looking at your app and asking: what kind of data do you want to store in this database? Is this regulated
data? Is this health data? Maybe we should put some authentication on this. They're not asking the kinds of questions you would if you were developing the app yourself. They're really saying "this is a secure app because we scanned the code," without thinking about the architectural decisions that people who aren't security practitioners, or aren't tech people, will miss. On top of that, rate limiting becomes a problem. This is a tweet from a couple of weeks ago from someone who vibe coded an app and did not account for rate limiting, because they didn't know they needed to, and I think we're going to see more of this. Then the other piece that's really important is the big hot topic: hallucinations and failures. I'm not going to get too far into this, because Khloe is going to talk about how they can be hijacked by bad actors, but it's worth noting that even without malicious intervention, hallucinations can cause a whole host of problems, from giving someone wrong information to causing cascading failures. I do think there are protections against some of this; part of it is us writing better prompts, and figuring out how to tune and set up these tools so they produce better output from the get-go. And hallucination is not a problem limited to AI. How many times do newspapers print corrections the next day? We all make mistakes; even as humans, we "hallucinate" information sometimes, or misremember things. So it's not an AI-specific problem, but it's worth thinking about in the context of AI, because these tools are right so much of the time that it becomes really easy to trust them implicitly, and I think that's a little dangerous. That's where we run into
problems. There are two incidents I want to talk about quickly. In one, someone using Replit found that it had deleted an entire production database and then incorrectly reported that it couldn't roll back the changes. It turns out the changes could be rolled back, and they recovered their database; but it did all of this during a code freeze, when it was explicitly told not to make any code changes. This kind of thing is scary because it's what happens when everything theoretically went right: you used the tools as intended, and you still had a massive cascading failure. The other was an issue with Google Gemini, and again, both of these just came up last week. It deleted users' code. It tried to create a folder, thought it had created the folder, and hallucinated that the command had executed successfully. It had not. It then tried to copy a bunch of data into that folder, overwriting the data repeatedly until it lost it all. Again, I think this is a good example of maybe not letting these tools run loose in production environments, because even though they're right a lot of the time, when they go wrong, they go catastrophically wrong. And of course, it gets much worse when we introduce malicious intent. So, I'll kick it off to Khloe to talk about that.
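Before moving on: the Gemini incident above, where the model hallucinated that a folder had been created and then overwrote data, is exactly the kind of failure a boring, deterministic guard catches. Here's a minimal sketch of that idea; the function name and structure are ours, purely illustrative, and not taken from either incident:

```python
import shutil
from pathlib import Path

def safe_copy(src: str, dst_dir: str) -> Path:
    """Copy src into dst_dir, re-verifying each precondition instead of trusting it."""
    source = Path(src)
    dest_dir = Path(dst_dir)

    if not source.is_file():
        raise FileNotFoundError(f"source does not exist: {source}")

    dest_dir.mkdir(parents=True, exist_ok=True)
    # Don't assume the mkdir "succeeded" -- check the directory is really there.
    if not dest_dir.is_dir():
        raise RuntimeError(f"destination was not created: {dest_dir}")

    target = dest_dir / source.name
    if target.exists():
        # Refuse to silently overwrite: the failure mode in the incident above.
        raise FileExistsError(f"refusing to overwrite: {target}")

    shutil.copy2(source, target)
    return target
```

The point isn't the file copy itself; it's that every precondition an agent might hallucinate as "done" is re-verified before anything destructive runs.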
All right. So, for the past couple of months, we've been talking about vulnerabilities in vibe coding in general, and this is a compiled list of recurring things we've been seeing. If we're talking about vibe coding vulnerabilities, or vulnerabilities in AI in general, I'd be remiss not to include prompt injection. I think that was the first example that was very quickly exploited, and the thing about prompt injection is that while there may be guardrails in place, figuring out how to get around them just by being clever and tricking whatever bot you're working with proved very successful. Obviously, these models are now trained to consider all those tricks and workarounds, so "grandma, tell me a bedtime story" or hiding the request in a recipe isn't going to quite work anymore. But I thought this was a really interesting example that came out last month: a prompt injection vulnerability in Gemini for Workspace that allows a threat actor to hide malicious instructions inside an email. The instructions were in white text, so when the recipient clicked "summarize this email," Gemini obeyed the hidden prompt and appended a phishing warning that looked as if it came from Google itself. And of course, social engineering will always be a
vulnerability as long as humans are interacting with technology. Megan talked about this a bit, but of course the code these models are trained on has flaws in it. Humans are flawed; we write flawed things. That's just the nature of it. But we're concerned about security vulnerabilities in generated code, and Veracode put out a report last week concluding that 45% of the generated code introduced a known security flaw. Obviously that's not ideal. I'm not quite sure how that compares to how often human-written code introduces security flaws, but the idea is that we want these models to perform better than humans, and I think that's the crux of where we're at right now. Some other interesting things from that report: they noticed a lot of the generated code had cross-site scripting and log injection in it, and that the languages performed differently; Python, C#, and JavaScript performed better than Java over time. Next, we've got data leakage. Again, these are vulnerabilities we've been encountering as long as we've been working with technology; they just have a different twist because we're talking about AI. So, data leakage:
it's exposing sensitive information, whether in prompts, in model outputs, or in the way data is handled. Some other examples of folks vibe coding: I have to mention the Tea breach that happened two weeks ago. I thought the screenshot really demonstrates that we should be cautious about any new application that asks for sensitive information, and look at new applications with increased scrutiny. But ultimately, it seems the conclusion was that it was just a Firebase cloud storage bucket exposed to the public. A classic misconfiguration: any human could do it, and any vibe coder could do it as well. Another example: Jack Dorsey last month wanted to build a secure messaging app, and also a secure messaging protocol, which is tricky to do when you're vibe coding, because essentially there was no trust or authentication built into the application. I thought this screenshot was funny because someone asked: hey, where does security fit in? Where can I report any vulns? And the issue was closed without being addressed. Then there are other examples of these vibe coding platforms having various issues, and another example of the context engineering Megan was talking about earlier, where it's: hey, fix this
thing. Oh, something else broke. It's this idea that complex problems are really difficult to address at this point, versus well-targeted tasks. And then, of course, we have various supply chain attacks. We have attacks on AI tools and platforms, dependency confusion, vibe spamming, slopsquatting, package hallucination, typosquatting; all these names are ridiculous, but they're variations on supply chain attacks. Dependency confusion is an attack vector that exploits internal company libraries. Vibe spamming is flooding code bases, repos, or platforms with low-quality projects; it pollutes search results and package indexes, like the AI version of SEO spam. And then we've got slopsquatting, or package hallucination. For the purposes of this talk, I'm just going to keep saying package hallucination, because I think I'll trip up if I say slopsquatting. It's what happens when a model hallucinates a fake package and an attacker registers that fake package, adds malicious code, and waits. It's a cousin of typosquatting, which is what happens when you mistype a package or library name. And one more point on slopsquatting: there had been research over the past few years that identified this as package hallucination, but I think slopsquatting as a term really caught on
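For the typosquatting side of this, one common mitigation is fuzzy-matching candidate package names against a known-good index. Here's a rough sketch using only the standard library; the allowlist is a made-up stand-in for illustration, and a real check would compare against a full package index:

```python
from difflib import SequenceMatcher
from typing import Optional

# Made-up stand-in allowlist for illustration; a real check would use a full index.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "cryptography", "urllib3"}

def suspicious_near_miss(candidate: str, threshold: float = 0.85) -> Optional[str]:
    """Return the known package a candidate name suspiciously resembles, if any."""
    name = candidate.lower()
    if name in KNOWN_PACKAGES:
        return None  # exact match: fine
    for known in KNOWN_PACKAGES:
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known  # close but not identical: possible typosquat
    return None
```

A name like "reqeusts" gets flagged as a near-miss of "requests," while an exact match passes; the threshold is a knob you'd tune against real-world false positives.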
and there was this research paper that came out earlier this year. we have a package for you. And it concluded that over 43% of hallucinated packages were recommended consistently across queries. And they used 16 popular LMS for code generation, these two unique prompt data sets, and found that up to 20% were determined to be hallucinated packages. So we got to thinking about what does that look like on a smaller scale and ended up um ended up pulling these this like cross-section of LM prompts from the Python data set from the research that um those researchers provided and we wanted to see what like vibe coding hallucinated packages looks like on a smaller scale especially in
comparison to that package there to that paper because they were working with like 16 different models over 600,000 different code snippets and yeah so we wanted to detect hallucinated libraries and used a local instance of meta's code llama model via llama because researcher those researchers identified that this model was one of the most prolific code library hallucinators at 21%. And again just like just uh captured a little video of the process. It's nothing fancy. It's just running those prompts and just also verifying if they exist on pi and socket and a lot of the case or rather we found that it was consistent to the research of that paper whereas about 21% of those packages did
not exist. I think what's interesting is that we keep talking about these models getting better over time, and we're continuing to see the same problems at similar percentages as far as hallucinations. That's what I found so interesting: we're continuing to see this even months after these papers are released, and we're not seeing significant improvement in the percentage of hallucinations over time. Right? And again, these are the same prompts I pulled out, with the corresponding hallucinated packages this time. Ultimately our results were consistent with the findings of that paper. Here's a small example of some hallucinated packages we encountered. And packages like these
could be used as a guide for threat actors to create and upload malicious packages, as a distribution channel for malware or crypto or info stealers. By validating this paper's findings, we want to emphasize the scale we're working with. Imagine an individual who's just coding on their own with no assistance. They make a typo in some package or library they want to import. They could quickly figure out: oh, there was a typo here. When we're working at scale, when we're using these tools and generating up to 21% hallucinated packages, it gets that much harder to determine what's real and what's not. And that also makes it much more difficult to mitigate
going forward. Another thing we wanted to point out from the hallucinated packages we saw, again just in our small example: we noticed four trends of hallucinated packages. Hallucinated, where they're completely false. Confused, where the model recommends a real package but it doesn't do what it's supposed to do. Typos, obviously, where the name is similar, and some of these typoed packages are actually known malicious packages. And then deprecated packages, where we can imagine the model was trained on older data and doesn't register any updates. So these are some of the categories that boiled up, and
obviously some characteristics the malicious packages include: they come from brand-new accounts, it's not really clear who contributed, there are newly introduced dependencies, or in the case of info stealers, there are URLs that reference a host by IP address, or outbound communications to non-standard ports. And of course we can't talk about all this without suggesting some mitigations, like manually verifying PyPI package names, like what I was doing in the video, and identifying the history of contributors. If you notice that the accounts are newer, the contributors are newer, that's something to be aware of. You can also maintain valid package directories; I think
in general that will be harder to do, but it's something to consider. There are also some more technical techniques, including post- and pre-generation prompt engineering, model development, and RAG, and with the rise of agentic AI, I think having a trained agent specifically looking for hallucinated packages will also be helpful. So I'll hand it back over to Megan to shift us.
>> I think we just want to close with a couple of thoughts. These tools are here to stay. They're changing the way everyone here works. And I think that's an interesting thing to say. We're really in the era of vibe coding. And
I think our big problem is really that these tools are in fact right so much of the time, people are using them so frequently, and they're being so heavily pushed by enterprises and executives, partly because they kind of hope we'll all be out of jobs soon. I think it's worth thinking about a lot of the things we've talked through: recognizing when these tools are flawed, recognizing what we can do to address that, recognizing what's coming as far as these broader shifts in the industry, and recognizing that when we're using these tools, ideally they're helping us and not outsourcing our ability to think
critically, which scares me a little bit. But ultimately it's about coming to terms with where we go from here and what this means as security professionals, which is: now we have the internet flooded with vulnerable applications and things like that. I think the other interesting piece of this is that we're on an exponentially increasing trend as far as the complexity of tasks that AI can handle. Right now we're seeing a doubling roughly every seven months in their ability to handle tasks twice as complex. And so seven months from now, we're probably
going to be looking at a very different set of tooling than we have now. We don't know exactly what that will look like. We've seen a lot of these tools get better, but we haven't necessarily seen a lot of the underlying issues go away. And as Chloe mentioned, a lot of these are not fundamentally new attacks or fundamentally new things, but we're seeing them happen faster and with higher volume than we've seen before. And I think the problem with that is that we're seeing all these attacks come in, faster and at higher volume, and now we're using the same kind of AI-enabled tools to defend
against those attacks, with the same underlying problems. And so we're stuck in this sort of self-reinforcing cycle, hoping these tools get better and trying to figure out where the lines are as far as how we protect against these things. So, it's not all doom and gloom. I don't think we've seen a lot of fundamentally new moves, but we've seen a lot of things happen faster and at more volume, which is scary. With that, any questions for us?
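The verification step described above, checking whether model-suggested package names actually exist on the registry, can be sketched roughly like this. This is our illustrative sketch, not the speakers' actual script: it assumes PyPI's public JSON API, and the package names and the stubbed registry in the demo are made up.

```python
# Sketch: flag LLM-suggested package names that no registry knows about.
import urllib.error
import urllib.request


def exists_on_pypi(name: str) -> bool:
    """True if PyPI's JSON API serves metadata for this name (HTTP 200)."""
    try:
        with urllib.request.urlopen(
            f"https://pypi.org/pypi/{name}/json", timeout=10
        ) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means the name is not registered


def flag_hallucinations(names, lookup=exists_on_pypi):
    """Return the suggested names the registry does not know about."""
    return [n for n in names if not lookup(n)]


# Offline demo with a stubbed registry so the logic runs without network
# access; swap in exists_on_pypi for a real check.
known = {"requests", "numpy"}.__contains__
print(flag_hallucinations(["requests", "totally-made-up-pkg"], lookup=known))
# ['totally-made-up-pkg']
```

Note this only catches names that do not exist at all; it does nothing against the "typo" and "confused" categories above, where a real (possibly malicious) package is recommended.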
>> Hello. Sorry, that was way louder than I was expecting. How do you think the dopamine loop of, you know, hallucination, maybe just one more prompt, one more pull at the slot machine, is impacting or encouraging people to vibe code more, or vibe debug more, or to think this is a thing they're going to solve with vibe coding?
>> I think it's really addictive. I think it's about as addictive as gambling, maybe. It feels good. I know that I get stuck and it becomes really easy to be like, "Oh, maybe I'll just fix
this one thing." And it's so easy, I think, in so many ways to outsource your thinking. Something I think about is when do I use AI tools, and I'll let Chloe speak to her experience, but I've actually leaned out a little bit. I think I'm using them a little less now, because the idea of outsourcing too much of my critical thinking scares me a little bit. So I feel like I'm treating it like a smartphone addiction, where I'm trying to limit it, because I do think there's a really fine line, because these
tools are so ever-present and they're so [ __ ], excuse me. Chloe told me I was only allowed to use the f-bomb once. They're so good at what they do that it's unavoidable. We're all going to be using them, or we're going to be replaced by people who are using them. But the thing that scares me is losing my underlying tech skills. And so I try really, really hard to use them sparingly, if that makes sense, or to recognize the dopamine loop as much as I can. I don't think this is a broad
answer to this problem. I think it'll probably be something we deal with the same way we deal with smartphone addictions. I'm sure AI will start giving us suggestions for how to deal with AI addiction in about 30 seconds.
>> Yeah, I think the dopamine of it all is a really interesting way of framing it. In all the research and investigating I've done, I haven't seen many people even calling it that, but I agree. I did encounter a lot of secure-code guidelines and frameworks, and a lot of them recommended keeping a very strict focus whenever you're vibe coding or vibe debugging. There's a certain point
where you rabbit hole to such a degree that you're not getting anything valuable out of it. So, thanks for that question. It was really good.
>> I'm going to ask the next question. That was a great use of slot machines, we're in Vegas, so, ace work. Awesome presentation. Two things I've been thinking about. One is that, maybe not exactly this year, but in a gap of five years or so, we're going to look back and start to think about when the code was produced. If it was produced before 2020, there's a 100% chance it was human. Between 2020 and 2025, or
maybe a few more years, hybrid. And after this point, mostly machine generated. I've been thinking about how that's going to change things, and I'm curious about your thoughts on how it changes what we think about code security, quality, and maintenance. That's number one. And there are parallels to this, like steel manufactured before World War II, right? You knew the demarcation; it was before or after. The second parallel is around manufacturing. If we think about code as more of a rote thing, code factories, logistics, manufacturing, in the '80s we went from mostly humans assembling things to robotics and automation. So I'm curious if you saw
any of that here. We're aware of this, and it feels like we might be losing something, but are there maybe other things we can look at where there's been this sort of step change, maybe not quite as fast, where we can go, oh, actually this is a kind of automation we've experienced before, it's fine? I don't actually believe that, but I'm curious what you think about it. Thank you.
>> Yeah, another amazing question, thank you so much. I agree, there are so many real-life parallels to what we're seeing, and I think in Megan's and my conversations, that's why we wanted to talk about abstraction of technology. This idea that this has been
happening, this has happened in every other industry. And I also think the framing of looking at code going forward as before and after is very interesting. I imagine that was part of the reason I brought up that screenshot about the Tea app, where all these applications we're interacting with are asking for sensitive information. We need to scrutinize them to a different degree than before. We need to be more cautious about the way we give out our information. And I can't say I've thought a lot about how this has impacted other industries, and whether ultimately we'll probably be fine. I would say so, but at the same time, I think everyone's
given a lot of takes in all these different directions. And I can imagine Megan has some thoughts, too.
>> I actually think the Tea app is interesting, because there was a lot of conversation when the breach came out, and everyone was like, "When was that code written?" And they were like, "It was before 2024, so it couldn't have been vibe coded." And, like, maybe. So I think we're already starting to have that conversation, which is, okay, well, maybe it was just bad human code. And I imagine there are going to be a lot of conversations in marketing and communications departments, and we're all going to have incident response plans with a sub-plan for: was this
vibe coded, and did we get hacked through vibe coding? Did we get hacked through AI, or was this human? And where I see it going is that we're probably going to have a situation in which we're paying individuals or developers almost to put their stamp on things, in a way that maybe we don't now. There is absolutely no basis for this; these are my philosophical thoughts on five years in the future. Because I think we're starting to see it in other industries, in places like music, or in places where art is being created. I
think we're seeing a rise of AI art and AI music. And I think some really smart people have talked a lot about how the way for artists to survive in an AI world is to really lean into the brand and the connection to their fans, because we're still seeing this human connection. And now I think there's a return in a lot of industries to slow-moving, in the sense of farm-to-table and hand-knit, where for the highest-end things we're often looking back and asking: was this human-made, or was it created by machine? So I can imagine
that we're going to see a big shift in the industry, and that, probably, yes, we're going to have almost entirely machine-generated code. And my guess is that we're going to have a similar thing happen, where we have custom shops that say, oh, this app was human-coded, maybe with AI assistance or something, and we start rebranding things that way. Or, you know, we have a couple of bad AI breaches, and then we start rebranding our incident response frameworks as, no, no, we have humans on the ground, kind of thing. So that's sort
of where I think it's going: this is going to be almost a branding conversation more than anything else. Functionally, I think the best parallel to the takeover of AI is maybe the introduction of the internet, in the sense that we were really right about a lot of projections and really, really wrong about a lot of other projections. And the only thing I can really say is that I have no idea how this is going to shape up in the next 5 to 10 years. But I do think it's changed the way we as security
professionals, and as developers in general, have to stay connected to what's going on. Most of the examples we pulled, we had them all set, and then things would happen and we'd be like, okay, well, this is now a newer example of the same thing we're talking about, or a better example, or a more catastrophic example. And I think that's been interesting too: the speed at which these things are happening, and the speed at which we need to react to them.
>> Time check? We're good.
>> Thank you. Yeah, just a quick question. I don't know if there are any studies on this, it's probably a little too early, but I was curious if in your research you found that companies are potentially laying off senior developers in favor of vibe coding, or vice versa, where it's harder for entry-level folks like college students to get a position because companies decide they don't need entry-level positions, in favor of having senior developers use vibe coding to increase their productivity, when clearly there are studies showing there's been, you know, a 10% decrease. So I was just curious if any of that
has come up during your research as well. Thank you.
>> I would just say, in general, as an industry we have flooded ourselves with claims that we need more people in security and in software development, which we very much do. But the issue is we now have an influx of more junior folks who learn very quickly and are struggling to find entry-level positions. So ultimately I think a lot of executives are in favor of focusing on more senior engineers and outsourcing the more junior positions, including outsourcing them to AI tooling.
>> Yeah, I think, at least from
my experience, where I've seen the biggest change thus far has been in places where they wouldn't have hired developers, or maybe they're hiring one developer instead of two. The small businesses, the places that really couldn't afford a full-time developer and instead were going to Fiverr or something. I think we've seen a lot of that be replaced by vibe coding. And we've seen it a lot in places where maybe you don't need a full-time developer, but you're a company that needs to show an MVP, and previously maybe you hired a developer who was just creating MVPs, and now maybe
you've got project managers who are vibe coding your MVP, where it's never intended to be used at scale. So I feel like that's where we've seen the most. I think the next thing will be exactly what Chloe's saying, which is that we're going to see fewer junior positions. And that worries me a little bit, because I think we're going to get a bit of what we have in security right now, which is that there's such demand for senior positions and there are often very few junior positions. What happens then is we aren't training the junior people to take on the senior roles, and then we're like, why don't we have
enough senior people? And you're like, well, it feels like we've created this problem ourselves. And I think that's what I worry about: because we're creating a situation in which it's harder for junior devs to find jobs, they don't have time to grow into senior developers, and then we're probably going to have a moment where either we lean harder into AI and hope that can replace people, or we sort of run out of developers and things like that.
>> Do we have time for one more question?
>> Yeah, probably one more.
>> Hi there. Great talk. I just kind of
wanted to get your thoughts on one thing. You said something about being most concerned about us outsourcing our critical thinking. And I feel like that is especially becoming the case as we see search engines getting polluted with AI-generated content, and things that the AI models are then regurgitating and recycling through themselves, creating more and more hallucinations. So I'm worried about something catastrophic coming out of that, and I wanted to get your thoughts on how soon, or how, you feel that may play out.
>> I think this is just a continuation of a trend we're seeing already. I think this is going to get worse. And I think
actually, Chloe, you had a really good example of this. Someone tweeted a couple of days ago that they had asked AI about some kind of natural disaster in their area and got terrible information, and then they were like, "How could this happen?" And I was like, oh, it really scares me that we're going to AI to ask about urgent natural disaster information.
>> Yes. It was the tsunami that turned out to not really be a particularly large tsunami in Hawaii, so everyone's okay. But yeah, it was really scary.
>> And I think we're already
starting to see this, and it's only going to get worse. The optimistic side of my brain says that we're going to realize this is a problem and start reinforcing things. The best-case scenario, and I say that with a huge grain of salt and a huge asterisk, is that OpenAI has just released what they're calling an education model, which is supposed to not just give you the answers. It's supposed to ask more probing questions and help students learn, and they're partnering with schools to release it. The potentially good part of this, I think, would be
that something like this works, and you actually have models that try to incentivize people to learn and that become actual, effective learning tools, as opposed to: I asked for it, you gave me the answer, and I didn't do the work.
>> I don't think it's that likely, though.
>> Yeah, I think in general we just need to continue to empower folks with training and education on the best use cases for a chatbot or a different model. And, I don't know, I grew up learning to verify all your information. I think that's kind of fallen off over the past few years, just by virtue of the proliferation of being
able to get results immediately. I think there needs to be more of a focus in education today on how you interact with technology and how you use it responsibly. Yeah, thank you for that question. [Applause]
>> Thank you. Thank you.
In the beginning was the internet, okay, the ARPANET, and it was good. And many protocols sprang forth: FTP, Gopher, Finger. But one protocol exceeded them all, HTTP. And HTTP grew and waxed strong, and Mosaic came forth, and Mosaic begat Netscape the Navigator, and it was good. But there was one among them named MCI who pondered within themselves and said, "This is fun and all, but how do we make money? For requests are all alike, stateless, and cannot be separated one from another. And frankly, you try building a shopping flow in these conditions." So a young engineer stepped forth named Lou. And Lou created the cookie. And with the cookie was born the session. And with the session came those
who desired the session of another. And that was a problem. So, welcome everyone. I am thrilled that you have come to hear me speak at five o'clock on the next-to-last day of a conference, and I'm standing between you and beer. Or I guess beer just happens, so maybe that's why you're here. I have some notes for this little portion. First off, how many people attended the talk at 2 o'clock by Raphael Felix that was also talking about device-based session credentials? I was cringing as he was going through it, because of the level of detail he went into about cookie storage on disk. If he had done that with device-based session
credentials, this would have been a 15-minute talk. But he didn't. He left me some content, and so all is good. A couple of disclaimers here. First off, pictures: more than welcome. Please take pictures of me, take pictures of the slides. There will be a QR code and a URL at the end if you would like to download the slides, so you can save the disk space on your camera roll if you prefer. I would like to say that these thoughts do not represent my employer, but I'm self-employed, so that won't hold up in court. So, who am I? I started my career monitoring
websites, red, green, it works, it doesn't, and then I got a job explaining to executives why websites were broken. Then I got into being a project manager for developing websites, and eventually, I guess, more of a product manager. And finally I decided all that was super boring, and so I got into security. That was getting on close to 15 years ago, and I've been a pen tester, primarily application pen testing, ever since. So hopefully that gives me some basis to talk about these topics today. It might also mean I'm just old and opinionated. So I have a question. I have an assumption about the audience
here, and so I would like to ask: how many of you would consider yourself a developer as your primary job role? We've got a couple. Okay. How many of you write authorization-checking, session-management code? Hey, we've still got some. Excellent. How many of you spend more of your time talking to the people who are writing that code? A lot more. Yes. Okay, this is BSides. That is expected. I've actually given this talk once before, and the focus was very different. I feel like in this case I'm sort of preaching to the choir a little bit. I don't need to convince you that adding the Secure attribute to a cookie is a good idea.
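For reference, that Secure attribute, along with its usual companions HttpOnly and SameSite, looks like this on a session cookie. This is a neutral sketch using Python's standard library rather than any particular framework, and the cookie name and token value are placeholders.

```python
# Build a Set-Cookie header line with the standard session-cookie
# defenses attached, using Python's stdlib http.cookies module.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "replace-with-a-random-token"
cookie["session"]["secure"] = True      # only ever sent over HTTPS
cookie["session"]["httponly"] = True    # invisible to JavaScript
cookie["session"]["samesite"] = "Lax"   # limits cross-site sends

# This is the attribute string a server would emit after "Set-Cookie:".
print(cookie["session"].OutputString())
```

Any web framework exposes the same three attributes under similar names; the point is that they travel with the cookie itself, so the browser enforces them.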
So instead of doing that, I hope to present you with some stories, or some logical ideas, that you can use when you're having those conversations. Hopefully they'll help in what I'm going to call the battle with, probably, the product manager over why this particular feature should go into this sprint instead of being punted out to Neverland. And besides, I'm really tired of writing the Secure-attribute finding in pentest reports. Okay, so first story. About two years ago, how many of you remember the Conti leaks? Conti was a malware group. Some presumably disgruntled member took all of their chat logs, I think it was Discord,
zipped them up, and put them out on the internet. And so a bunch of security folks such as myself, just for kicks and giggles, went through them and saw what was there. And other people who paid a lot more attention than I did realized there was a section of Telegram channels, and they said, "Hey, y'all ought to hang out here." And so I did, because you can do that on Telegram, why not? And so I was in these groups for a while and didn't really say anything. There was mainly just news: new tool released, so-and-so got arrested. Sometimes there were cheer emojis,
sometimes there were tear emojis. But it was just, obviously, a Telegram group in that community. And then one day somebody hopped in and said, "Hey, the support channel for X infostealer," I honestly don't even remember which one it was, "is now going to be on this channel." And so I thought, golly, that's where I need to be. So I joined the channel and just continued to lurk, and it took about three days before I got this chat. For those who don't speak Russian: somebody jumps in and says, "Hey, I heard this is a cookie market," and the reply was, basically, a little conversation: "Well, that's not here.
This is just a simple Telegram channel, completely innocent. But if you want to go over to our other chat, the actual market, go ahead and sell away." And I'd always heard that there was a marketplace for stolen cookies, but I always thought, that's kind of dumb, right? Cookies are short-lived. Why would you do that? But apparently it's valuable. And we just won't mention, say, Google, where the session lives for a month, which kind of drives me crazy, but not crazy enough to delete cookies, so I guess I can't complain. Anyway, story number one, the one you should use, is that cookies actually need to be
defended. So I made this lovely graphic that tries to capture the majority, maybe all, of the ways that cookies get attacked. There are bombs, which are attacks, and there are shields, which are defenses. And they all boil up into categories of how a cookie might be compromised. Today we're going to really speedrun it, but we're going to go through and address most of these columns. I'm going to skip "guess the token": keep your tokens random, good idea, there, we've covered it. All right. So, session protection on the wire. Hopefully you don't mind, but I was fascinated by the trivia, the history of some of these defenses, and so I'm going
to share it with you. It's not going to help you win an argument, but it's fun. So, the first one, RFC 2109, drafted in 1995. This is the original cookie RFC, written by Lou Montulli, the Lou who, quote unquote, invented the cookie. He realized right from the start that one of the risks to a cookie is that somebody could see it as it was transmitted over the wire. Now, when he was writing this RFC, SSL wasn't a thing, or at least not a public thing; it was being developed in parallel there at Netscape. And so in the RFC he says there should
be, and here is the Secure attribute, only transmission by, I think I actually highlighted it there, an unspecified "secure means." In practice that was always SSL, which has since been succeeded by TLS. So Secure was that very first defense. If you put the Secure attribute on the cookie, the browser knows: only send this over an encrypted connection. And if everybody would just do that, for the most part we wouldn't need any more defenses. But as we all know, people don't just do that, and so we have more layers. All right. So here's the risk we're defending against; I've got this lovely picture of an MITM attack, right?
The Secure attribute, like I said, would be great if people used it. But 16 years ago this week, some jerk named Moxie Marlinspike released sslstrip. He realized that people weren't actually using the Secure flag much, and so he wrote a tool, one that had been theorized and possibly implemented before, but this made it really easy: if I can convince you to talk to me as your proxy, then sslstrip will rewrite the traffic passing through. Anywhere there's an unencrypted connection, anywhere you make a plain HTTP request, sslstrip rewrites the responses and strips the links from HTTPS down to HTTP, and if your cookie is not protected by the Secure flag, it will slurp it up. Very handy, very useful, ruined the day for a lot of people. So, how do we defend against that? In 2009 we got this one, which, wow, you can't read that: HTTP Strict Transport Security. I'll move a little quicker here. Once I've made a request and the website responds with that header, my browser will never again, at least until a timer expires, but essentially never again, make a connection to that website that doesn't use HTTPS. Okay, so this is why I've gone down this rabbit hole: what about that first connection? How many of you are familiar with the HSTS preload list? We got a handful, but enough that I don't feel stupid for spending time on this slide. So, there is this website, hstspreload.org, managed by Google but used and implemented by all of the major browsers. If you register your site, put it on this list, the browser will never, ever make an HTTP request to your website. Not even the first time. Not ever. Done. Right? So
that's a defense you can put in your pocket, and to be honest, your developer doesn't have to implement it. You can do it, and it'll be done. All right, but is that really a problem? I mean, network segmentation, client isolation, who can really get a man-in-the-middle, or machine-in-the-middle, proxy running these days? Some bloke in Australia gave us fodder for that. And I say bloke because Australia. So last year there was a gentleman, we shouldn't call him a gentleman, there was a crook who was setting up Wi-Fi access points named the same thing as, like, the Sydney airport Wi-Fi, probably also the Starbucks there, probably a hotel there. And then he would sit in airports and wait for people to connect to him, and at that point he is literally the machine in the middle, right? And so he sat there monitoring all their traffic. Now, this is great for story time. In reality, he wasn't slurping up session tokens; he was just looking for nudes, which, really, I don't know. But the article still works for our story time on why we should spend a little bit of effort and keep our cookies safe on the wire. I've got a checklist here; it's in the slides, not going to speak to it. All right. So, next up, we've got JavaScript injection. And if you'll pause again for a little bit of trivia history: why is it called cross-site scripting? Does it have anything to do with, say, cross-site request forgery? No. I mean, a little, but no. When we're talking about the attack known as cross-site scripting, what we're really talking about is injecting JavaScript into the client's web page. It has a lot more in common with SQL injection or command injection. An earlier speaker referred to it as client-side remote code execution, which is pretty descriptive; I could see that. Anyway, I'm on a little crusade to have us refer to it as JavaScript injection instead. The history behind it in one minute.
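Since the attack is injection of markup, the core output-side defense is escaping. A minimal sketch, where `render_comment` is a hypothetical helper that reflects user input into a page:

```python
import html

def render_comment(user_input: str) -> str:
    """Reflect user input into HTML safely: escaping on output turns an
    injected <script> payload into inert text instead of running code."""
    return f"<p>{html.escape(user_input)}</p>"
```

With escaping in place, a payload like `<script>steal(document.cookie)</script>` renders as visible text rather than executing in the victim's page.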
Back in 1999, Microsoft, so we can blame Microsoft for the name here, was monitoring their forums and started seeing links going off to American Express. I've got it listed out: there's an image tag there, a broken image is what that is, right? And the URL, like I said, was going off to an American Express site, and it had a bunch of URL-encoded nonsense at the end. So they decoded that and realized it was JavaScript, and they realized what was going on: there was a reflected cross-site scripting, a reflected JavaScript injection, attack on the American Express site, and the attackers were using the popularity of Microsoft's forums to have people come and visit. And anybody who had an active session with American Express, their cookie was getting slurped up and sent off to the evil attacker. Right? So there's your trivia. That's where "cross-site scripting" came from: it was going from the Microsoft forum to American Express and then off to the attacker. How do we defend against it? Everybody knows this one, right? HttpOnly on your cookie. Easy. This has been around since Internet Explorer introduced it back in 2002, and it's in the cookie RFC. It would be easy to use. Oh, but let me set up a question.
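What HttpOnly actually does can be modelled in a few lines. This is a toy model, not a real browser API (the class and method names are made up): HttpOnly hides a cookie from page script, but the browser still attaches it to in-scope requests on the wire.

```python
class ToyBrowser:
    """Toy model of a browser cookie jar, for illustration only."""

    def __init__(self):
        self._jar = {}                      # name -> (value, http_only)

    def set_cookie(self, name, value, http_only=False):
        self._jar[name] = (value, http_only)

    def document_cookie(self):
        """What page JavaScript sees: HttpOnly cookies are omitted."""
        return "; ".join(f"{n}={v}" for n, (v, h) in self._jar.items() if not h)

    def fetch_headers(self):
        """What goes on the wire: every in-scope cookie, HttpOnly or not."""
        return {"Cookie": "; ".join(f"{n}={v}" for n, (v, _) in self._jar.items())}
```

The model makes the upcoming argument concrete: script code never needs to read the session cookie for authenticated requests to work.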
How many of you have had to fight the battle of whether or not HttpOnly should be on your session cookie in the past year? We got one. If I go two years, do I get any more? Okay. If I could ask, why was it an issue? >> I don't have a mic for you. I'll repeat what you say. >> I think it was a single-page app accessing an API. >> Okay. So let me repeat that: a single-page application. So, a JavaScript front end, and as most of us are aware, that makes a bunch of API calls back to the server. We need to authenticate those API calls, because that's where all the good stuff happens. And so the developers, in their minds, say: I need to have that cookie so that my JavaScript-initiated request can send the cookie with it. Is that actually the case? No. Thought exercise. I have a cookie. I have successfully attached the HttpOnly attribute to my session cookie. If I go into my JavaScript console and type document.cookie, that cookie just isn't going to be there. Right? That's what the browser is supposed to do; it blocks that access. But what happens if I set up an XMLHttpRequest or a fetch request and go off to a URL that is within the scope of my cookie? I can't see the cookie, but the browser knows it's there. The browser knows it's supposed to go along, so it will be sent, the API request will work, and the front-end code never had to have the cookie at all. So you can hopefully win this argument. Now, there are some architectural choices that make this null and void: bearer tokens, the Authorization header. In order to add that to an XMLHttpRequest, you obviously have to have the value; the browser won't do it for you. Now, I would argue this is a reason cookie authentication is actually superior to bearer tokens, but JWTs and bearer tokens are the new hotness. So, good luck. That's a battle to fight. I will say, if they're going to use the Authorization header, you want to store that token in a worker thread. I'm going to skip going through this workflow, but the worker thread isolates the token from any other JavaScript running on the page, so it can't be stolen through a JavaScript injection attack. However, the JavaScript injection attack can still call the worker thread. So anyway, moving on. Blind session attacks. What happens if I can't read the cookie? Can I still do bad things with it? Turns out yes. We can abuse the fact that the browser automatically attaches
that cookie to requests to other websites. Right? Cross-site request forgery. I'm going to skip ahead; it's very similar in the attack flow, and that's why in this case the "cross-site" actually makes sense. But instead of shipping off the cookie by running JavaScript, it's just making a legitimate call that is beneficial to the attacker. Traditionally there have been four requirements to make a CSRF attack work. We've got to be using cookies, first off. The endpoints have to use GET or POST; if you try to make a cross-site JavaScript request with PUT, the browser is not going to send the cookie unless the server has opted in with CORS headers. But let's stick with this here. Then you can't have any unpredictable parameters, and the result must be state-changing, right? The attacker is not going to get the response; they're just going to benefit from the results of that query. So if I find an endpoint that adds a user to the admin group of a site, and then I lure an admin to my malicious site and their cookie gets sent to the server, the server is going to be like, "Yep, the admin just said add this user," and it'll go ahead and do it. This is cross-site request forgery, but browsers are getting really good these days. Raise a hand: the SameSite attribute on a cookie, what is the default value these days? Anybody gonna yell it out? >> Lax. Okay, which sounds bad, but isn't as bad as you might think. Lax means the cookie will still be sent with a quote unquote normal top-level navigation, like when a user clicks a link; the cookie goes with it. That's what we expect to happen. But anything else, any background requests that get generated, it will not send that cookie unless you're already on the same site. We're following the same-origin policy. Now, it turns out it's even a little more strict than that. I had a surefire finding, I had my proof of concept running like last month. All of those first four conditions were true, and there was no SameSite attribute on the cookie. It still blocked the request, even though the user was clicking a link. I don't know; the browser is getting really aggressive about that. So unless you've set SameSite=None on your cookie, you're probably secure even without all of those other defenses. So my story here is: if you've got limited political capital, don't fight this battle. Make sure they're not doing SameSite=None, and make sure they're not doing Access-Control-Allow-Origin with Access-Control-Allow-Credentials, because that will break this. But if they aren't doing those things, then it's probably okay; fight another battle. And I feel a little dirty saying that, but that's kind of reality. All right. A quick mention of a loophole: if you have JavaScript injection on your site, all is lost. Cross-site request forgery defenses are designed to protect against cross-site. If you're on the same site, all of those things you might do with cross-site request forgery, you can do, because you're on the same site. And this isn't really news to people, I think. But a lot of times when we write up or talk about a JavaScript injection attack, we focus on stealing the cookie. And if HttpOnly is on the cookie, we can't steal
the cookie. And so the developer is like, what can go wrong? Well, I can still make requests with the authority of the user within that same website, and I can probably do bad things. Yeah. Clickjacking. I have a slide because I have to have a slide. I haven't seen a clickjacking attack in the wild since 2004. So it exists, but there are really easy ways to defend against it. Again, I don't know how much time you spend fighting this battle, but the X-Frame-Options header has been the favorite. It's actually being deprecated, which was a surprise to me; they would rather people use the Content-Security-Policy directive frame-ancestors, which does the same thing. I imagine browsers will be really slow to actually stop supporting X-Frame-Options, but they want people to migrate. So now, last topic, and then we get to what I think is the fun stuff, device-bound session credentials. One more topic though: post-compromise. What happens if I steal the cookie? Then what? Well, you're all probably aware there are a lot of defenses designed to, as we sometimes phrase it, limit the blast radius of an attack. So if an attacker gets the value of the cookie somehow, no pointing fingers, they just got the cookie: what can they do? Or more importantly, how long can they do it for? My suggestion is that actually drawing a timeline might help in these conversations. You've got login to logout. How many users actually log out? Not very many. So that's why we have idle timeouts; that's why we have absolute timeouts. And as we start drawing these out for our developers, for our scrum masters, program managers, hopefully it helps illuminate this picture. We also have JWTs, which I'm just not a fan of, sorry. The problem with JWTs is that their expiration is hardcoded into the token. And so even though the user presses log out, and the token can't be stolen out of the browser anymore because it's been cleared out, if the attacker already has the value, they can continue to use that token for the full lifespan of the token. Now, you can build in an identifier and store on the server side that this token's user has logged out, it's no longer valid. But if you're doing that, which I encourage, why are you using JWTs in the first place? The benefit is supposed to be that you don't have to do server-side session tracking. Again, my opinion: I'd rather use a cookie. I just think
it's better. All right, the timeline does that, and the summary. All right, how am I doing on time? I didn't restart my timer. I have no idea. All right, we've got 15 minutes, just about right. So, let's talk about the new ideas. Okay, how do we defend a cookie, given all of the demands that applications have for using cookies? And I keep switching between the words cookie and session token; most of these defenses are cookie-based, but the session token is really what we're talking about. All right. So, fortunately, cross-device cookies aren't a thing. If I log in on my laptop and I get a cookie, when I go to my phone, I don't expect that the cookie is there. I don't expect to be logged in. I expect to send my credentials again and get a new cookie. All right. So, we can use that to our advantage. We have an expected behavior that gives us a foothold. So, there was a proposal put out by Microsoft in 2016 called token binding. If you're at all familiar with this, basically it takes advantage of the TPM, the trusted platform module, that is on more and more devices. The idea of the TPM is you can ask it to create a key, and you give it an identifier so you can refer to that particular key in the future, and you can say, hey, sign this. The private key resides inside that module, and if it's done right, there is no way to get that private key out. So the ability to use that key is absolutely bound to that device. And so we can implement a protocol where I get a cookie or some kind of identifier, and then I prove possession of it using a key that is unique to the website. Right? That TPM can
hold oodles of keys; that's not a limited resource. So for every website I go to, I can ask it to create a new key, and then I can use that key to prove that, yep, this request came from something that has access to that TPM. So the token binding proposal was, I think, rather clever, but unfortunately it didn't work out. Sorry, spoiler. The idea: the client knows the values that were negotiated during a TLS session, the server knows those same values, and attackers don't. We're pretty confident that TLS works, that we can do that exchange and derive those values so the two endpoints know them and nothing in the middle does. So why don't we use that as our verification? Words are hard; we have pictures. The browser initiates the TLS connection with a TLS option that says, "Hey, I'd like to use token binding." The server says, "Great, I know how to do token binding," and responds. The browser handles all of this: it requests a public key from the TPM and submits that public key to the server, which then stores it with the session. And then when I make a request, we've got a new TLS session, but both sides know those negotiated values. So the browser can sign them, send the signature along to the server, and the server can use the stored public key to verify it and trust that only the legitimate client could have sent this request. So this is kind of beautiful, right? Very little extra work for developers. The problem is this last box: the server needs to read the session content, and then it needs to get the key that is associated with the session.
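The verification idea can be sketched like this. It's a simplified stand-in: the real proposal signs with an asymmetric key that never leaves the TPM, while here HMAC over a shared device key keeps the sketch runnable with only the standard library, and `tls_ekm` stands for the value both TLS endpoints derive from their handshake.

```python
import hashlib
import hmac

def sign_binding(device_key: bytes, tls_ekm: bytes) -> bytes:
    """Client side: prove possession of the device-held key over the
    value both TLS endpoints derived from this connection's handshake."""
    return hmac.new(device_key, tls_ekm, hashlib.sha256).digest()

def verify_binding(device_key: bytes, tls_ekm: bytes, proof: bytes) -> bool:
    """Server side: a cookie replayed from another machine fails, because
    the attacker's own TLS session derives a different value."""
    expected = hmac.new(device_key, tls_ekm, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)
```

The point of the construction: the stolen token alone is useless, because every request must also carry a fresh proof tied to both the device key and the current connection.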
Think back: if you're a website developer, or maybe a web server developer, how far apart are those two bits of code? Like, I write a lot of websites; I never look at TLS internals. And so the overhead of creating an interface that application code could use to get at those TLS values, which it needed in order to do this verification, was just too wide of a gulf to cross. People kind of liked the idea, but it just died. All the major browsers, I think even Edge, have stopped supporting this protocol. So, sorry to waste your time, but Google
has come along and generated a similar proposal, and they're trying to get it traction. I don't think it's perfect, but I think it's good enough that it should get traction, so I'm here talking about it. Again, words are hard; let's talk about a picture. Oh, I've got to say one thing: in this proposed standard there are some disclaimers. Google recognizes that if the attacker is on the hardware, if they've still got access to your laptop, they can possibly call the TPM, and they can probably call the API in the browser that calls the TPM. So if they're still there, this isn't going to work. But if they're still there, you've got bigger problems. That's disclaimer one. Number two: if the attacker has a machine-in-the-middle position to look at this traffic as it's being exchanged, you've got bigger problems, because they would have seen the plaintext password go across, so the cookie could be sniffed in that case anyway. So yes, those are valid attack vectors; they're not worried about them because we're at a different point in the attack phase. So, similarly, the device-bound session credentials (DBSC) flow starts after authentication. The user has submitted their username and password, and two-factor authentication of course, and then while the browser is building this session, the server sends a header that looks like this, the Sec-Session-Registration header. This is a response header. It specifies the signature algorithms that are supported; currently Chrome, the only browser that implements this, won't actually accept RS256, only ES256. Anyway, trivia. In the response header there's a path, and this is a web endpoint, we'll call it an API endpoint, that the developer has to implement. Then there's a challenge with a random string, and an authorization with another string. I haven't figured out why there are two yet, but there are two, so we'll watch how they flow through. So
the developer also continues with the normal session building. You still use a traditional session identifier, but it has to be a cookie. Okay. I will say, at least in the blog posts describing this, they're suggesting that you could continue with a long-lived session. So this could be completely normal, and if a user visits your site with a browser that doesn't support DBSC, then it's just business as usual. That's probably a good thing; it will ease the transition path. Eventually, though, we'll want that fallback to die. Okay. So now we're in the middle square. The browser has received the response header, and it goes off and uses the TPM in more or less the same way. The response header included a challenge, and for the longest time when I was reading the spec, I thought the challenge was the thing that got signed. Turns out it's not. What the browser actually does is build a JWT, a JSON payload, and it sends that off to the TPM to get signed. The thing that gets sent back to the server is this signed JWT, signed with the private key that only lives in the TPM. And that looks like this. So, in the previous response we said you start your session at this endpoint, and so we POST this JWT, this is the payload of it, to that endpoint, and the developer has to have written code to do something with this JWT. What they're supposed to be doing is taking the public key, which, by the way, is right here in the payload, and using it to verify the signature on the JWT. Now again, network sniffing is out of bounds; they know it's vulnerable to that. Normally, sending the key that verifies the token alongside the token would seem like a bad idea, but it's a public key, so in this case we're okay. All right. If that validates, and there's no reason why it wouldn't, the server now knows this session is associated with this public key, and it tucks that away. Again, just pointing out, we've got that same challenge here, and we've got the authorization here; I don't know why there are two. And there's the key. All right. So the response to that session-start endpoint looks like this. There's stuff in there that I'm not sure really needs to be returned, but it is. The main things, I've got boxes here: you've got that same identifier, don't know why; you've got the refresh URL, this is key, so we've got a registration URL and we've got a refresh URL. And then we've got a scope. So part of the
specification is that the server can tell the browser which endpoints care about this session token. So you could exclude, say, /static, and then when the browser loads /static it will know not to send the cookie to that particular endpoint. I don't know that it would hurt if it were sent, but you can exclude it. And the second thing is the endpoint tells the browser the name of the session cookie, because the browser is now going to take on the responsibility of keeping that cookie refreshed. So it's following what's become somewhat of a standard: you issue an access token and you issue a refresh token, where the refresh token lives longer, and typically it's been your client-side code that has to detect when the access token has expired and go get a new one. The browser is going to start doing this, which is a weird mishmash of responsibilities to me, but it provides some benefit, so maybe it'll work out. So, when the browser realizes the token is expired, and I haven't seen this in action for reasons I'll get to, so I'm not sure if that's purely time-based or if it just waits for a 401 Unauthorized, but somehow it detects that. The browser says, "Oh, my cookie is expired. I'm going to automatically call the refresh endpoint and get a new challenge." That challenge gets sent back to the browser, gets signed by the TPM, and gets posted to the refresh endpoint. The server looks up the public key and verifies the signature, and if it matches, it issues a fresh session token. Now, the session token is going to be quote unquote short-lived in this environment; the examples are generally like ten minutes, and I could see that dropping down to a minute. But regardless, ten minutes or less is pretty much too fast for an info stealer to grab that cookie and go sell it. The cookie is still going to exist, I believe; if I go to developer tools, I think I'm still going to be able to see it, it's just going to expire really quickly, and the client-side application doesn't have to worry about refreshing it. So we now have a cookie that only lives for a few minutes, and a refresh mechanism that is tightly coupled to the device that was used to authenticate to the server, and that pretty much renders the info-stealer attack void. It doesn't work anymore. And
that's our goal. Okay. So, tips. The spec is out there. There's at least one website that tries to let you see this in action; it's written by a Google employee. It's a little flaky, I think it's got some backend issues and it crashes a lot, but when it works, you can capture some of these exchanges. This is an in-progress standard, so you have to set four different flags within Chrome to get it to respond to the headers. The website you're talking to must also have a valid HTTPS certificate; one of the other rules they're implementing with DBSC is that if the connection is not encrypted, the cookie does not go, and the headers are ignored. So you can't really test on plain localhost; you've got to spin up something and get a Let's Encrypt certificate or whatever. And then the other wrinkle is that the refresh requests don't show up in the developer tools, so if you're trying to debug your server, you've got a challenge. I've gotten some of the requests captured by Burp, but some of them don't show up, which is probably my server code's error, but I don't know. Now, you can use net-export, which is built into Chrome and basically logs a whole slew of network information to disk, and then there's this other tool hosted on AppSpot that will parse that mess of data and present it in a way you can read. There's kind of too much data to really, I mean, you can find what you need, but it's not user-friendly. Okay, right about at time: challenges that I see for adoption. Like I said, I spent probably only a week, but a week, trying to implement a server that would get this DBSC exchange to complete, and I failed. So it's fiddly. Eventually people are going to get it to work; I'll get it to work. At that point, that code really needs to be made public, because I don't believe it's practical at all to go to a developer and say, manually implement these endpoints. I don't think it's going to happen. Not that they can't; it's just that if we can't get them to add a response header, we're not going to get them to do this. So it's got to be incorporated into frameworks, or at least reference implementations we can just cut and paste: Java, Python, Ruby, whatever. I'm also still struggling a little bit with having the browser do my cookie management. I guess that's something I just need to get used to; there's no reason I can think of that it shouldn't, but that's a different model.
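The registration-and-refresh flow walked through above implies some server-side bookkeeping, which could be sketched roughly like this. Every name here is illustrative, not the draft's wire format, and HMAC over a shared key stands in for the TPM-backed ES256 signature so the sketch runs with only the standard library:

```python
import hashlib
import hmac
import secrets
import time

COOKIE_TTL = 600          # short-lived cookie: ten minutes, as in the examples
SESSIONS = {}             # session_id -> {"key": device_key, "expires": ts}

def register(session_id: str, device_key: bytes) -> None:
    """Registration endpoint: after the signed JWT verifies, remember
    which device key this session is bound to."""
    SESSIONS[session_id] = {"key": device_key,
                            "expires": time.time() + COOKIE_TTL}

def client_proof(device_key: bytes, challenge: bytes) -> bytes:
    """What the browser-plus-TPM side would produce for a refresh challenge."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def refresh(session_id: str, challenge: bytes, proof: bytes):
    """Refresh endpoint: issue a fresh short-lived token only if the proof
    was made with the device key registered for this session."""
    record = SESSIONS.get(session_id)
    if record is None:
        return None
    expected = hmac.new(record["key"], challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(proof, expected):
        return None                        # stolen cookie, wrong device: refused
    record["expires"] = time.time() + COOKIE_TTL
    return secrets.token_urlsafe(32)       # the new session cookie value
```

This is the property the talk is after: an info stealer that exfiltrates the cookie value cannot run the refresh step, so the stolen token dies within minutes.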
And so, developers and application architects are going to have to get used to that. Uh, and then I've already gone over the the problems with debugging. So, that's it. Uh, we've got the QR code. The, uh, slides are hosted uh, on my website, and we've got LinkedIn, email. Uh, I'm happy to field questions now or later via other methods. Um, thank you. [Applause] Uh, I don't see anyone kicking me off the stage quite yet. So, are there questions now?
Okay, I've confused you all. Thank you. Have a good rest of your con. [Applause]
>> All right, let's get started: cheat codes for security programs. Greetings and salutations. My name is Oshan Marshall. I code, I teach, I hack. I'm a product security engineer at Google Cloud, and the topic of today, of course, is product security. Before we dive into the nitty-gritty, we really need to answer a pretty basic question: why? When engineers talk about security, often we're jumping straight into AppSec or PlatSec or some specific vulnerability or some cool control, but product security is often overlooked, and there are some fundamental reasons for that. After all, integrating security engineering into the product life cycle is pretty difficult, and why go through all that trouble when you can do vulnerability research, pen testing, security operations, and patching while talking to as few product managers as possible? So the important question: why are we doing this? What problem are we trying to solve? To answer that question, we need a more holistic view. Focusing solely on the application misses all of the threats and all of the risk at lower levels of the stack; likewise, spreading yourself too thin over an entire platform, you forget some specificity and some nuance. Product is the answer: product is where you get the priorities. Businesses get involved in software to present something useful to the user. And so this talk is for anyone in a software development or security role, whether you're an engineer, product manager, or security professional; I want to share with you some perspective about bringing security into your life. We'll start by defining what product security means versus the other paradigms like AppSec and PlatSec, and then we'll focus in on the core concepts of shifting left, right, up, and down. By the end of this talk, you should be able to pitch a product security program within your organization. So, AppSec is the tip of the iceberg. It focuses on the security of the applications themselves. But that's not really sufficient: it overlooks the underlying problems in the platform, and a vulnerability in the
platform regardless of how secure the application on top of it is still exposes the user to risk. So apps is the most visible part of all security work. Beneath the tip of the iceberg lies a vast ecosystem and complex ecosystem. It's product security, but it actually includes everything from the OS it's running on to the databases to the network configurations and to the hardware. So product security is also platform security because a perfectly secure application on a vulnerable platform is still a weak application. And therefore a comprehensive product security approach means that you're looking beyond just the application. You're embracing a holistic view and when you do this of course you make products more secure.
Now, on to platform security. The product is the purpose of the platform. When we talk about platform security, it's really tempting to view it as a vast, disjointed collection of assets: some servers here, some services there, each with its own health checks and uptime. That's technically correct, but the perspective is too broad to actually be useful. Why? Because these systems don't exist for their own sake. They're part of a greater ecology, again, delivering a product. And that's where we need to refine our thinking. There's a popular principle: treat your servers like cattle, not pets. That's great for building resilient systems, but it's not helpful to say "the farm has livestock." You know
the song: "Old MacDonald had a farm, E-I-E-I-O, and on that farm he had... vertebrates"? No. It's not helpful to know that Old MacDonald has vertebrates. To keep the farm functional, you have to know the difference between chickens and dairy cows. Each has different needs, each presents different vulnerabilities, and each carries different risks and rewards depending on the farm's operation. Adopting a product-centric view helps us do exactly that. It forces us to ask the right questions. Which of these systems process sensitive user data? Which have key pathways that, if compromised, would have a direct impact on the user's trust? By focusing on the product, you can
prioritize your security efforts and apply the most rigorous controls to the most important parts of the platform. Now we need to make these ideas concrete, so we'll turn to this old standby, the classic cheat code you've got to memorize, thumbs twitching a bit to play the game. We'll simplify it, because some of y'all don't know the directional buttons, and focus on just the four directions: shift left, shift right, shift up, and shift down. Now, shift left is an idea we've heard over and over again. It's a powerful concept: moving security into earlier and earlier phases of the development life cycle,
and from being an afterthought into a core consideration. But cloud development is complex; you need more than just one direction. The playing field is multi-dimensional, and we need to look at the whole board. So today we'll talk about shifting left, but we'll also talk about shifting right: continuing the security work through production with active monitoring, robust incident response, and continuous testing of live systems. We'll also shift up, making security an organizational priority and ensuring that security is part of the strategic conversation, not just a technical one. And then we'll shift down, which is building security into the very foundations of the stack, all the way down to the metal. So let's first talk about shifting left.
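As a quick reference, the four directions can be sketched as a simple mapping. This is a minimal sketch in Python; the example activities listed under each direction are illustrative picks, not an official taxonomy from the talk.

```python
# The four "shift" directions, each mapped to a few example security
# activities (the activity lists are illustrative, not exhaustive).
SHIFT_DIRECTIONS = {
    "left": [   # earlier in the development life cycle
        "threat modeling",
        "secure design review",
    ],
    "right": [  # into and beyond production
        "active monitoring",
        "incident response",
        "continuous testing of live systems",
    ],
    "up": [     # into organizational strategy
        "security as a strategic priority",
        "executive risk reporting",
    ],
    "down": [   # into the foundations of the stack
        "OS and database hardening",
        "firmware and hardware security",
    ],
}

def activities_for(direction: str) -> list[str]:
    """Look up example activities for a shift direction."""
    return SHIFT_DIRECTIONS.get(direction, [])

print(activities_for("right")[0])  # → active monitoring
```

The point of the mapping is simply that "shift left" alone covers only a quarter of the board; each direction carries its own distinct set of practices.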
Now, again, shift left means lots of things to lots of people, but today we're focusing on integrating security into the product life cycle. For the purposes of our discussion, there are three foundational building blocks. Product is what the user gets. Straightforward. Systems are assets that the product uses; they're closely tied to the product. In the cloud, these could be things like a cluster or a bucket. And if you're an older enterprise with production systems, all those weird quirks of your production environment are also just assets, things the product can use. The final part is teams, and obviously teams and people are very important, because software still
needs people. Yes, we agree. The dreams and visions of product leadership, as well as the realities of being a frontline software engineer, are really important when you're trying to integrate security into that life cycle. Every company does it a bit differently, but in general you can think of the product life cycle in four key phases. Phase one is design, which usually means knowing something about what the users want or need. Phase two is experimentation; it may be one stage or several, but the goal of experimentation is just to get some live feedback and quality assurance. Phase three is the implementation, or build, and phase four is the release of the product. Now,
there's always new feedback, new things that have to be integrated into the product after launch, which can go into the 2.0 or just the next version, and that's where you start again with, guess what, design. Again, this is just a generalization, but it gets us to start asking the critical questions: how and where do we embed our actual security capabilities into this life cycle? The first thing, of course, is to make sure you partner at the very beginning, and we can talk about the tools that will help you build that partnership. On the far left, of course, is threat modeling. Threat
modeling is a way to find weaknesses in the architecture before a single line of code is written, but it can also be used reactively for systems that are already built. You can enhance your threat modeling with targeted threat intelligence. Threat intel shows you how real-world adversaries are targeting products like yours, and with that information, you can design proact
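To illustrate the idea of combining threat modeling with threat intelligence, here is a toy sketch: design-time findings (STRIDE-style categories per component) are ranked higher when intel reports adversaries actively using that technique against products like yours. All component names, categories, and scores here are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    component: str      # part of the architecture, e.g. "auth service"
    category: str       # STRIDE-style category, e.g. "Spoofing"
    base_severity: int  # 1 (low) .. 5 (critical), from the design review

# Hypothetical threat-intel input: categories adversaries are actively
# using against products like ours get a priority boost.
ACTIVE_IN_THE_WILD = {"Spoofing", "Information Disclosure"}

def prioritize(threats: list[Threat]) -> list[Threat]:
    """Rank findings, boosting techniques confirmed by threat intel."""
    def score(t: Threat) -> int:
        boost = 2 if t.category in ACTIVE_IN_THE_WILD else 0
        return t.base_severity + boost
    return sorted(threats, key=score, reverse=True)

model = [
    Threat("auth service", "Spoofing", 3),
    Threat("billing db", "Tampering", 4),
    Threat("public API", "Information Disclosure", 3),
]

for t in prioritize(model):
    print(t.component, "-", t.category)
```

The design choice being illustrated: severity from the threat model alone would rank the billing database first, but intel about what adversaries actually do reshuffles the queue, which is exactly the kind of prioritization the talk describes.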