
Quantifying Breach Impact Mitigation by ZTA

BSides SATX · 2025 · 38:21 · 8 views · Published 2025-09 · Watch on YouTube ↗
Category: Technical
Style: Talk
About this talk
Alexey Malashev presents a practical evaluation of zero-trust architecture's ability to mitigate breach impact, focusing on micro-segmentation as a core control. Using Infection Monkey for automated attack simulation in a lab environment, he quantifies the reduction in lateral movement and system compromise across multiple attack scenarios, ranging from unprivileged-user to domain-admin compromise, demonstrating breach impact mitigation between 60–90%.
Original YouTube description
BSides San Antonio 2025 · June 21 at St. Mary's University
Transcript [en]

And I think it's time to get going, right? Thank you again. We're down to our last two official sessions with BSides before everyone adjourns to the bar downstairs, or not everyone, but those that want to. If for no other reason, you could warm up down there just by going outside; it is cold in this room, and we're doing all we can to mitigate that. I'm going to repeat myself, because thanking USAA and St. Mary's in particular for making all this work is really important to us, so I'm going to say it again: thanks to them. Thanks to all of you for coming out and joining in. And believe it or not, on a June day

in San Antonio, braving the cold to come to this session. I know Alex likes it cold; he's in his element right now, which is good for him. I also want to thank SpecterOps, who happened to be the presenter for the last session, but also a sponsor that I failed to mention last time. So if you were here for that, let Josh know I did mention his company. We are now going to hear about quantifying breach impact mitigation by zero trust, and the speaker is someone that I've known for, what, four years? >> Four or five years. >> Yeah, it was during COVID that we met. Alex Malashev is very

well versed, has done a bunch of great things on this, and I know you're going to enjoy this session. I know he's going to enjoy it. So, Alex, please take it away. >> Thank you, Jeff. Hi, everyone. Thanks, everyone. My name is Alex Malashev; I go by Alex. I do want to do a quick introduction for myself. I have about 20 years of experience working across multiple fields in the industry. I've worked for weapons development companies, nuclear energy companies, and small businesses here in San Antonio, including managed service providers, as well as the US courts, the Administrative Office of the US

Courts. So I really have diverse experience across all possible walks of life for IT specifically. Jeff was actually my instructor for the CISSP exam about four or five years ago; I passed the exam. Since then, I've worked for DHA, and I now work for Philips as a senior cybersecurity manager for product security and services, mostly supporting government contracts, because government is probably what I know best. I graduated from American Military University with a 4.0 GPA and a master's in information assurance, and since the CISSP I've gotten just about every single security certification I could get my hands on: CISM, CCSP, PenTest+. Again, just a

huge nerd: I like to play with computers, I like to code, I like to break infrastructure, which is kind of the reason for this presentation and what led me to this experiment in general. So first I want to cover the agenda. First, the mission statement. The mission statement is important here because we all know how horrible scope creep can be within projects; by defining the mission statement, we define the parameters for our experiment and make sure creep does not occur, or at least we control it to a certain extent. Then we're going to go into the introduction to the zero trust

framework. This is going to be really high level, just to make sure everyone understands the difference between zero trust and traditional network segmentation. Then we're going to talk about micro-segmentation as one of the tools to achieve zero trust architecture within our environment. I'm going to go over some slightly outdated stats at this point, from between 2021 and 2023, on the current state of cybersecurity, the cost of breaches both globally and nationally, and how widely zero trust architecture has been implemented into current IT infrastructure globally. Then we're going to go over the

experiment design and the experiment limitations, go over the lab environment configuration, and finally get to the results and the final tips and takeaways. So first of all, the experiment goal, or mission statement. The goal was to evaluate micro-segmentation as a standalone control. Why am I looking at micro-segmentation? Because micro-segmentation is at the core of zero trust architecture. The way we achieve zero trust is to ensure that every flow, every connection, is verified and explicitly allowed, and micro-segmentation is a great tool to make

that possible, because it allows us to segment and control flows across the infrastructure down to very granular levels: down to users and applications, and exactly which protocols are allowed to talk to which resources on the network. So what is zero trust? Zero trust is a security model. It is not a tool; it is not a thing you buy out of the box. It is a security framework and a model. It relies heavily on enhanced identity governance: we need to be able to identify all of our resources on the network, and we need to identify users and allow them access only to the resources they need to access. As for the difference

between the two: micro-segmentation is one of the tools that can be leveraged to implement zero trust architecture, and in this specific case I was using software-defined networking as my micro-segmentation tool, specifically VMware NSX 4.0. It does require continuous monitoring of the entire infrastructure, analytics of logs, access, and network flows, and it relies on the principle of least privilege at its core. I know the principle of least privilege has been around for, well, as long as computer security has existed, but zero trust really takes it to the next level and allows only exactly what is explicitly permitted, versus blanket

or open, implicit allow rules. So why is micro-segmentation different than traditional network segmentation? Micro-segmentation separates our data plane and control plane into two separate mechanisms. The control plane controls the access and the routing through the environment, whereas the data plane carries the actual data packets, the actual information that reaches those systems. The reason the separation is important is that with micro-segmentation we have 24 bits reserved for the virtual network identifier. 24 bits for the VNI means we have over 16 million possible segments that can be created. This

is exactly what allows us to create those small segments and small policy and control rules for who can talk to whom. Compare layer 2 switching: 802.1Q reserves only 12 bits for the VLAN ID, which allows only about 4,000 possible network segments at layer 2. At layer 3 there's obviously a bit more, but then there are also all kinds of rules about which IPs can be allocated for what purpose (IANA controls the public IP space), and you also have to trade off the number of networks you can deploy against how many hosts each

network has. From the segmentation perspective as well, with layer 2 you typically end up controlling traffic at choke points, whereas micro-segmentation is controlled at the interface level: basically, any device or any object that connects to the network can be isolated independently. And last but not least, specifically with software-defined networking, we have full layer 7 integration. VMware NSX integrates into Active Directory, and it integrates into layer 7 of the network packets, so we can build our policies and our flows based on very granular, very specific rules that are dynamically updated as well. So you don't have to

have a network admin sitting there filling in ACLs, constantly updating them every week or every day when a new user comes on board. So, the current state of zero trust architecture. First of all, between 2021 and 2023, the number of breaches increased by 30% year-over-year. COVID probably played a huge part in that. In 2021 I was actually working on an incident response team for a managed service provider, and I think I responded to four or five incidents in the first eight months of the initial lockdowns. So COVID did play a large role in the

increase of the incidents, and the industry wasn't prepared for it. At the same time, if you look at zero trust implementation across the industry, we have Baka's report, and it's self-reported, it is a survey, so take those results for what you will. But the self-reporting showed that in 2021, 35% of companies employed zero trust architecture, versus 61% of companies in 2023. So the number of organizations deploying zero trust architecture has nearly doubled. At the same time, if you look at the cost of cybersecurity breaches between 2021 and 2023, it has only increased by 2%

year-over-year. So we have almost three times more incidents happening between 2021 and 2023, but the cost has only gone up by about 4% over those three years. We have gotten better at responding to incidents, but I do think that zero trust architecture has played a part in it, and as we go through this experiment, this practical analysis of how effective micro-segmentation is at mitigating breach impact, we'll see why I think there is a big correlation there. So, first of all, the experiment design. For the infrastructure itself, I'm running a

three-host VMware cluster with NSX-T 4.0 and Geneve overlay software-defined networking. Geneve is the new overlay SDN protocol that is becoming the standard implemented across the industry, including Cisco. Before, there were VXLAN and NVGRE, the previous protocols, with a lot of compatibility issues, so now the industry is switching to Geneve as a standardized protocol. I'm also running an on-premise Splunk server, so I'm using Splunk for the log analytics and for visualization. Right here at the bottom are the Infection Monkey reporting graphs.
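As a quick aside on why those overlay VNIs matter for micro-segmentation: the gap between 12-bit 802.1Q VLAN IDs and the 24-bit virtual network identifiers used by Geneve and VXLAN is easy to check. A small sketch (exact reserved values vary by spec):

```python
# Number of distinct segment IDs available at layer 2 vs. in an overlay SDN.
VLAN_ID_BITS = 12   # 802.1Q VLAN ID field
VNI_BITS = 24       # VXLAN/Geneve Virtual Network Identifier

vlan_segments = 2 ** VLAN_ID_BITS   # 4096 (a few values are reserved in practice)
vni_segments = 2 ** VNI_BITS        # 16,777,216

print(vlan_segments)                  # 4096
print(vni_segments)                   # 16777216
print(vni_segments // vlan_segments)  # 4096x more segments than 802.1Q
```

That factor of roughly 4,000 is what makes per-interface, per-workload segments practical in an overlay where VLANs would run out.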

You can see those Infection Monkey graphs are completely useless, so I opted for Splunk to build my diagrams and analyze the traffic as it flows through the environment. Infection Monkey is what I'm using as the breach simulation utility. It's an automated breach simulation: you basically point it at the network, you say which exploits you want it to run, and it runs through the environment compromising systems one by one. Access to resources within the environment is built in alignment with a typical role-based access control model. So we've got our desktop users, we have our help desk

users, our server admins, database admins, and domain admins. Users are randomly generated with a PowerShell script, in five domain user groups: desktop users, help desk, HR, the finance department, and server admins. The environment is rolled back between every single test using snapshots. Everything is scripted and automated to make sure we eliminate as much variance as possible; also, I'm lazy, and I don't like clicking on stuff that often. All nodes ship logs to the Splunk system. Once again, analyzing logs through Splunk is just a whole lot easier than trying to go

through the systems individually and aggregate them in an Excel spreadsheet. Each user, and this is going to be one of those things that I know most cybersecurity folks will frown upon, is an admin on their own desktop. While it's not best practice, I do see it happen quite often, and it's typically at the root of a lot of breaches. Overall we have 36 systems on the network: two domain controllers, two IAS servers, four file servers, three financial database servers, three HR database servers, and five desktops for each type of user. So, for

financial users, they have their own desktops; HR has their own desktops. The test is conducted in two separate phases, with multiple scenarios tested. The first phase is completely unsegmented, your typical flat network. I will say that technically there is layer 3 segmentation, in that different subnets are created for all of them, but there are no ACLs; I'm not controlling any traffic between those subnets. So you might call it segmented, but it's not. There are really no ACLs across the board. The second phase is the segmented test. That means enabling our software-defined networking

with the default rule switched to deny instead of allow. That's all it does: it enables our security groups and control of those flows. The controls and flows in the unsegmented phase are still set up, but the default rule applied is allow; that's actually what lets me see the movement through the network using NSX-T as well as the Splunk infrastructure. As for hardware, this is just for the hardware nerds, because I am one and I get excited when I see hardware in home-built labs: I'm running three Supermicro servers with 512 gigs of RAM

on each. Everything is running VMware. Storage is delivered using vSAN, VMware's hyper-converged offering for storage. The vSAN itself is all flash, and not expensive whatsoever; these are eBay drives at $35 apiece. It did take me a lot of work to get them working within the VMware environment, because it doesn't like unsigned drives, and I wouldn't do it in production for sure. This is a home lab only. The storage fabric is delivered via a 10-gig MikroTik fiber switch with two bonded cards, so 20-gig uplinks across all the

storage, and host-to-host networking is done with bonded network cards on a one-gig MikroTik managed switch, so a two-gig backbone for host-to-host communication. This is the network topology, fairly basic, fairly straightforward. We've got the tier-0 router up here at the top; that's our on- and off-ramp into the physical infrastructure. Over here to the side, you can barely see it because of the brightness, is where the Splunk server sits; I call it the audit domain, or the audit subnet. And tier 1

is where the remainder of the infrastructure sits. This allows me to separate the traffic within my home environment. Again, this is a home lab; there is a lot more infrastructure around it, but this is how I segmented it and separated it from the rest of my lab shenanigans, if you want to call them that. One thing I do want to point out is that I do have NSX IDS deployed and turned on. I turned it on in audit mode specifically for this test, because I wanted to see whether the IDS would

be triggered more or less readily the deeper and more complex the attack path was within the environment. To no surprise, that was the case. But I did keep it in audit mode, because I didn't want it to interfere with the test; I wanted to actually test micro-segmentation as a standalone control. The scenarios we're testing are: unprivileged user compromised, desktop-privileged user compromised, server-privileged user compromised, and domain admin compromised. I know when we hear "domain admin compromised" we think that's it, everything is over, we can go home that day. And honestly, I would agree if I were the one dealing with it; that's not

a scenario I want to deal with. But in these tests, you'll see that micro-segmentation does prevent a lot of the impact even in that worst case. From the reporting perspective, Infection Monkey is used for the initial report, but Splunk is what's really used for the final analytics, the graphing, and the visualization of all the attack paths. Then the impact score is calculated, based on the depth of the path required for an attack to reach a system, as well as whether the system was compromised or not. So the deepest possible path within

this environment is five hops. Each system can have a value of up to five points, and for every additional hop needed to reach that system, one point is subtracted. So if a system is compromised in one jump, it's a five-point compromise; if it's compromised four hops deeper, that's a one-point compromise. The exploits I enabled within Infection Monkey are the PowerShell exploiter, the WMI exploiter, and an SMB exploiter. I know this environment; it's a Windows environment, 100% Microsoft-based. There was no reason for me to run 50

different exploiters in these tests, because they do take quite a bit of time to run; I'm trying to optimize the test. Now, experiment limitations. This is a biggie. I'm using Infection Monkey as the automated breach simulation tool. What does that mean? It's a script kiddie. This is not going to be your advanced persistent threat, somebody who hacks your environment and sits in there for months on end; this is a kid clicking on scripts and running them across the environment. Also, to enable reporting within the environment, I do still allow each system to talk back to the command and

control center. That would not be the case in a true zero trust infrastructure; you would not allow those outbound connections unless they're verified. But in this scenario, solely for this test and experiment, I did allow it, because I want the reporting to be accurate. And one point, and it's a really big point actually: defense in depth is considered during this test, but it is not implemented. That takes me back to the comment about the IDS/IPS being disabled in the environment. If defense in depth is implemented, it will

greatly increase the efficiency of zero trust architecture at preventing breaches, or not necessarily breaches, but at least the spread through the environment. So, these are the overall results. We can see the scenario column: HR account only, help desk, HR plus server account compromised, and so on; the scenarios work in stages. Initially you would have, let's say, your HR user's credential compromised, and the attacker gets a foothold in the HR systems. Then they can use tools like Mimikatz to scrape passwords off the HR systems,

and if there is a cached password for one of the other domains, that could be the next step. So, with these scenarios: if an HR user is compromised, and the credential of, say, a server admin is cached on one of the HR desktops, then the attacker can steal the credential, scrape it, and use it against the remainder of the infrastructure for further compromises. These are the scenarios I'm testing; I'm trying to keep them to what realistically happens in real environments. Hopefully we're not posting our domain admins' passwords on forums, so they're not going to be directly compromised; it's typically

going to be a scraped credential that gives the attacker access to those privileged resources. The "number of exposed systems" column says how many systems can potentially be compromised in each scenario. With help desk only, for example, the help desk user has admin rights on 20 desktops, so there are 20 exposed systems in that scenario. "Segmented total compromised" is the number of systems that were compromised in the segmented scenario, and next to it is the number compromised in the unsegmented scenario. The segmented impact score is going

to be the scoring, and then the unsegmented impact score; fairly self-explanatory. Then the percentages, which are just percentages of the total exposed. And finally the delta. The delta is what we're really after: the difference between the segmented and unsegmented compromise rates. At a high level, the median unsegmented compromise was 95%, whereas the median for the segmented tests was 20%. That gives us a median delta of 75%, and an average delta of 61%, with deltas ranging between 32% and 94%. So that's significant. But there are a couple of

things I do want to point out with some of the scenarios. In the domain admin scenarios I'm going to show, the delta is very large. However, that doesn't tell the true story. If we have only one system compromised, but it's a domain controller, that means we've lost the domain controller; that's basically the whole environment. It could still play a role in the recovery stages of your incident, though. So this is where I want to go in and touch on some of the

highlights, or outliers, within the tests. This is the help desk user compromise, which had an exposure delta of 90%. On the left-hand side is the visualization out of Splunk; this other one is the visualization out of Infection Monkey, and it's why I did not use Infection Monkey as my sole source of data analytics. The Splunk side can be easily organized: we can see that every single system was compromised from every single other system. Whereas over here, I'm not really sure what's going on; all we know is that this was just bad.
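As a refresher on how each compromised node in these graphs gets weighted, here is a minimal sketch of the hop-based impact scoring described earlier, under one consistent reading of it (the function names are mine, not the talk's):

```python
MAX_SCORE = 5  # deepest possible path in this lab is five hops

def impact_score(extra_hops: int) -> int:
    """Score for one compromised system: five points if it falls on the
    first jump (zero extra hops), minus one point per additional hop."""
    return max(MAX_SCORE - extra_hops, 0)

def scenario_score(compromised: list[int]) -> int:
    """Total impact for a scenario: sum over compromised systems only;
    systems that were merely discovered contribute nothing."""
    return sum(impact_score(h) for h in compromised)

print(impact_score(0))         # 5: compromised in a single jump
print(impact_score(4))         # 1: compromised at the end of the deepest path
print(scenario_score([0, 2]))  # 8: one direct hit plus one two hops deeper
```

Deeper compromises score lower on purpose: the longer the path an attacker has to walk, the more chances the defense has to catch them.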

With the segmented scenario, at the same time, we only see the foothold system compromised. We can see from the Infection Monkey reporting that other systems were discovered, but they were not compromised; that's what the yellow path means here. Why did this happen? Well, the help desk user is an admin user on all of the desktops in the environment. But just because they're a help desk user coming from a help desk system doesn't mean they should be able to access all of the desktops in the environment using WMI protocols, and it doesn't mean they should be able to access SMB on all of the desktops in

the environment. If help desk is sharing files with end users or moving files across systems, they should be using the file servers, their central points for distributing those files. And this is exactly why a single system was compromised, which was only the foothold in the environment: the help desk user doesn't have admin rights on the file servers, so the file servers are not compromised, not even at risk. Whereas here, our entire desktop subdomain is compromised, and honestly, if this were a real scenario, I'm sure the attacker would have been able to scrape some other credentials off those systems. So, this is the worst case scenario.
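The explicit-allow, default-deny behavior at work here can be sketched abstractly. The group and service names below are illustrative, not the lab's actual NSX rules:

```python
# Hypothetical micro-segmentation flow table: anything not explicitly
# listed is denied, the inverse of a flat network's default-allow.
ALLOW = {
    # (source group, destination group, service)
    ("helpdesk-desktops", "file-servers", "SMB"),
    ("hr-desktops", "hr-db-servers", "SQL"),
}

def flow_allowed(src: str, dst: str, service: str) -> bool:
    """Explicit allow, implicit deny: the core of the segmented phase."""
    return (src, dst, service) in ALLOW

# Help desk reaching the file servers over SMB is a sanctioned flow...
print(flow_allowed("helpdesk-desktops", "file-servers", "SMB"))  # True
# ...but help desk spraying WMI at end-user desktops is dropped,
# even though the account has admin rights on those machines.
print(flow_allowed("helpdesk-desktops", "hr-desktops", "WMI"))   # False
```

The point is that having admin rights on a machine is no longer enough; the network path itself has to be explicitly permitted.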

This, the domain admin compromise, is what every single IT person in the world fears to no end: your domain admin user compromised. The idea behind this scenario is that the HR user is compromised initially, but a domain admin credential was cached on one of the HR desktops, and the malicious user was able to retrieve the credential using Mimikatz and spread to the domain controllers and the rest of the infrastructure. Again, on the left-hand side is the representation of what happens when a domain admin is compromised in a flat, unsegmented, uncontrolled environment. On the right

hand side, we can see Monkey Island, which is my command-and-control center: it compromises the HR system, scrapes the domain admin credential from that HR system, and then it's only able to reach the file servers directly, and the domain controllers on the back end. Why is that? Because the only protocol exposed to the HR system, even with a domain admin credential, is SMB, and not even WMI; there is no reason why HR users should be talking to our file servers over WMI or WinRM. And SMB, the

Windows file sharing protocol, is not the most secure thing in the world, so that's a fairly easy compromise, and that's exactly what happens here. Our HR user gives up the domain admin credential to our attacker, the file servers are compromised using SMB, and then the domain controllers are compromised using SMB. Again, if you look at the delta, it's one of the larger deltas, 80% between segmented and unsegmented. But at the same time, if your domain controllers are compromised, recovery is still going to be really bad, and honestly I don't see how the remainder

of the environment is not going to be compromised given enough time.
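The delta column driving these comparisons is just the spread between the two compromise rates. A trivial sketch (helper names are mine):

```python
def compromise_rate(compromised: int, exposed: int) -> float:
    """Percent of the scenario's exposed systems that were compromised."""
    return 100.0 * compromised / exposed

def exposure_delta(unseg_pct: float, seg_pct: float) -> float:
    """Unsegmented compromise percentage minus segmented percentage."""
    return unseg_pct - seg_pct

# The talk's overall medians: 95% of exposed systems compromised
# unsegmented vs. 20% segmented yields the 75% median delta.
print(exposure_delta(95.0, 20.0))  # 75.0
```

As the domain admin case shows, a large delta is necessary but not sufficient: which systems survive matters as much as how many.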

So this was another scenario I want to go over, because it shows the importance of how complex the attack path is. In this scenario, a help desk user was compromised, and then a database admin credential was scraped off one of the database admins' desktops. On the left-hand side, again, everything is compromised, a single jump across the board; it's an open network, there is nothing to control it. But on the right-hand side, in the segmented test, we can see Monkey Island went over

to the help desk user, then was able to spread from the help desk user onto the other desktops. From those desktops, it was able to scrape the server admin credential from the server admin desktops, and it was able to compromise the database servers from the desktops of the database admins. Why is this important? Again, this is an automated test, so it's really kind of dumb, just scanning the networks. In a real-life scenario, especially if you have a less-than-advanced persistent threat in your environment, chances are

they're going to compromise a couple of desktops. They might find these database servers, but they could fail to find the credential cached on one of these systems, or they might not realize that these are the database admins' systems, or that they should be looking for database servers at all at this point. So the complexity of a path matters. In this scenario the exposure delta was only 5%, because the exact same number of systems was compromised. But we can see that this path was quite a bit longer; well, I shouldn't say quite a bit longer, the whole environment is fairly small. But because it did take an

extra two jumps to reach the actual critical systems, your database servers, it does increase the complexity of that path, and it does give us some protection. One other thing, and we're going back to that IDS/IPS: if we had the IDS/IPS enabled in this environment with drop rules, every single one of these compromises was setting off so many bells and whistles in my environment that it's not even funny. My alert counter was just going up and going crazy. If it had been set to block the compromise, it would have stopped probably at this level right

here; it would never have gone past it. Again, those are big ifs. You can never rely on a single control, and that's exactly why I always say defense in depth is really your best key across all systems. So, the zero trust takeaways. This is actually after I conducted my experiment, after I was playing around with it for a while: I came across a paper by Nardine Basta, who has done something very similar, but used a mathematical model to evaluate the breach impact mitigation of micro-segmentation, or, as the paper puts it, "towards a zero trust" strategy. Not

surprisingly, or not surprisingly, his mathematical model showed breach impact mitigation from zero trust architecture, micro-segmentation specifically, of between 60 and 90%. And that's exactly what my practical experiment showed as well, with an average delta of 61% and a median of 75%. Next takeaway: increased complexity of lateral movement through the environment. This one is really difficult to capture, because again, if you have a script kiddie going through your environment, the more complex the path is, the less easily they're going to be able to move through it. If you have nation-states going after

your environment, complexity is a bit less of an issue for them, but it still buys us more time to catch them; it gives our tools more time to detect them and fire alerts. Next: lower attack surface. It's not just about preventing lateral spread within the environment, it's also about that initial foothold. Zero trust architecture, properly deployed, prevents callbacks to command-and-control centers. So even if a system is compromised in your environment, it's very possible the attacker will never know it was compromised, and therefore never actually continues their attack through the environment. Now,

If it's an automated tool, you might still see spread, you might still get ransomware, but at least the attacker doesn't have a foothold in the environment, and it's more difficult for them to establish one.

And then, last but not least, and this is highlighted in bold because it really needs to be implemented across the board: defense in depth greatly increases the effectiveness of our zero trust architecture. An easy example is IDS/IPS; I keep going back to it. Another example, and I pointed it out right here: in every single one of these scenarios I was testing, the idea, or the reliance, was that the attacker first compromised the system of somebody less cyber security educated, or less cyber security savvy, and then was able to scrape a credential off of those systems. If you have defense in depth implemented and you're not allowing passwords or credentials to be cached on your local desktops, a lot of this attack vector is going to be essentially nullified. So, going back to it: defense in depth is really key to defending all of our infrastructures.

And that is it for the presentation. Any questions?

>> All right, you make it easy for me. Just please offer your first name and your question.

>> Dernandez. So, for someone who lives and breathes zero trust

architecture: NIST, less than two weeks ago, just came out with 19 ways to help build zero trust architectures. What are your thoughts? Are they helping? Are they hurting? Nineteen to me seems exorbitant, and from a GRC perspective, a controls perspective, I'm a little bit worried about those 19 ways.

>> So, from that perspective: I don't know if you saw, but NIST 800-53 Rev 5 included another 20 controls that are specifically for zero trust architecture. As far as the recommendation, those 19 ways to meet their guidelines, that's what should be done. I know 800-53 is less of a 'here's how you're going to do this.' But

that's, to me, what the guidance is. Zero trust, again: it's not a tool, it's not a packaged thing you buy. It's an architecture, it's a model, it's a framework. And from the governance perspective, I can see where... governance, I mean, keeps us employed. That's all I've got to say.

>> Thanks. Any other questions?

>> Is this available anywhere? I'm kind of driving the micro-segmentation, ZTA stuff in my organization now, and some of these numbers might be useful for me in the future.

>> Sure. On my LinkedIn, I actually have a small article posted with the content derived from these slides, and then there's also a link to a research

paper that actually has all the details, as well as the scripts and everything else I used to build it.

>> And if all that should happen to fail, we are also recording this, and you'll be getting a notice from BSides saying when the recordings are available. So you have a lot of different ways to get it.

>> Yeah, that's exactly why I thought of this topic in general: because zero trust architecture is expensive to implement. So when you come to your budget planning meetings and say, 'I want $20 million,' the answer is 'Nope.' But that's exactly why I went through this exercise.

>> As someone who suffers through research, I appreciate what you've done, because if you do research, it is suffering. I'm sorry. Right? Would you agree?

>> Yeah. The research part is fun. The writing part, not so much.

>> Fair enough. Yeah.

>> So, building out infrastructures and planning the research is actually extremely rewarding. Writing the reports is a little...

>> Yep. And I'm about to write my next research paper, so I know that suffering. So thank you, I do appreciate that. Please, everyone, join me in thanking Alex in appreciation of this. Thank you very much. We have a few minutes if, Alex, you wanted any one-on-one questions. We

are, I think, starting our next session at 4:25, so that'll be our final session of the day in here. Thank you.

>> Very cool. I'll definitely be messaging you on LinkedIn for that information.