
[music]
Thank you for that. Hello everyone, good afternoon. In cybersecurity, we don't get to trust by default. So why do we let vendors earn our trust just by slapping the label, the term "zero trust," on their products? That question stuck with me, and it's not just theoretical. It's the catalyst for a lot of the content we're going to cover today. "How Zero Trust-y Is Your Network Access?" was based on a research paper I wrote a couple of months ago, published as a white paper with SANS, and the overall approach grew from a specific frustration I had a while back: I was asked to recommend a zero trust
network access solution for a large organization that was going through its own digital transformation. On paper, every vendor they looked at claimed to support zero trust. The marketing was strong, it was slick, everyone was "zero trust capable." But when I started to look deeper, I couldn't find a consistent way to measure whether these solutions actually enforced zero trust principles. Principles like least privilege, device trust, data inspection, and assumed breach. So it became clear to me very quickly that without a replicable framework to test and compare the products, it was nearly impossible to separate the meaningful capabilities from the marketing buzzwords. So that's what this talk is about; that's what we're going to cover today. I'm going to walk you
through a hands-on testing framework that maps directly back to zero trust principles and show you how I used it to evaluate several zero trust network access providers in a controlled environment. Before we do that, I'll give you a little more context on my background and why I'm here speaking. But first of all, Magneto, that was fantastic. I appreciate that. If you're available in your spare time, I may have to hire you as my intro guy. As mentioned, I am currently a security architect for a global reseller. A lot of my time is spent trying to solve customer security problems, prescribe the right solutions, and provide good architecture. As you
can see here, though, a lot of my background is also in education. I thoroughly enjoy punishing myself by going through a lot of industry certifications; I'm a glutton for punishment. I'm also a huge flag hoarder when it comes to CTFs. I love CTFs. I am absolutely that guy that holds on to his flags, waits until the last 30 minutes for the scoreboard to go down, and then submits everything and rises up a little. So feel free to throw all that hate at me. And like a number of us, I'm also a big motorcycle enthusiast. I love anything that can go fast. I love being able to go and race.
I would say anything that can go fast, get me hurt, or get me in trouble with the law, I'm generally going to be a fan of, which is probably why I'm a natural fit for cybersecurity. So, with that additional context, let's talk about what we're going to talk about. The overall agenda for today: we'll start off by reiterating the zero trust marketing problem space, which will set the stage for how we need to focus this. We'll talk about and help define what zero trust actually is, and I'm just going to do it at a quick-and-dirty high level. We don't have the time to dive
deep into zero trust, but I want to make sure we're all on the same page, speaking the same language, and that you understand the terms and concepts I used to build my framework, my methodology, and my approach. We'll of course go into the lab architecture: what did I actually build for the lab, how did I test, and what did the environment look like? We'll break down each of the tests and talk about how they were relevant and how they mapped back to the zero trust architecture. And ultimately, we'll get to the point of why you're all really here: you want
to see how these players stacked up, where they were strong, and maybe where they were deficient. We'll round things out with a quick, high-level look at how we can mitigate any residual risk and some recommendations for mitigating controls. And then, of course, we'll close with key takeaways: areas where this can still be expanded, and how you can leverage it for your purposes in your own environment. Hopefully that sounds like a good plan. Before we dive in deep, I would say put your seatbelts on, put your helmets on, because this is going to be a lot. But
I do want to get a quick poll; I want to tailor this a little to everyone's familiarity. Just a show of hands: how many folks are confidently comfortable with the concepts of zero trust? Oh, I love it. Okay. How many folks are pretty comfortable with zero trust network access, or have zero trust network access in their environments today? Great. Okay. So I will breeze over some of these components. I'm not going to bore you with knowledge you already have, but I want to make sure anyone who's not familiar at least understands what we're talking about. First and foremost, and for those in the back, I apologize. I wish there were
more screens back there. There are going to be some charts and pictures coming up; you're not going to hurt my feelings if you get up and want to come closer. Fundamentally, when I talk about zero trust network access, there are a lot of components that can go into it, but what I'm describing, and what's purposeful for this conversation, is a hybrid workforce or remote workers and replacing the traditional remote access VPN. What I'm going to be discussing is all focused on remote workers on their laptops with an agent installed that connects up to the zero trust network access provider's cloud service. From there,
the traffic is tunneled, it's proxied, ideally it's inspected, and it's compared against policies. Assuming it's allowed, it travels back down to our data center or to the location where our applications reside. Almost everybody in this space will provide virtual machines for you to deploy within your environment, adjacent to the applications your users are accessing. So it's quick and high level, but it paints a picture of what we're going to be describing. Then we get into the marketing problem I was describing. I took some excerpts from the web pages of the vendors I selected and tested, and they're all advertising the same thing: "Achieve true zero trust security."
"Built on zero trust principles." "Zero trust is built in." "Seamless zero trust connectivity." If you look at these, and you look at their data sheets and their white papers, it makes it very difficult to determine what the differentiators are. And if they come out and pitch to you, going through their pitch decks, I would make you a bet: take their decks, remove the logos, remove the colors, and they're all pitching you the same thing. They're making the same promises. They're saying they can deliver the same capabilities. They're all basically synonymous. And that's why I felt there needed to be some comparisons; there needed to be some evaluations.
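What would such an evaluation look like in replicable form? As a rough sketch, a scorecard can be as simple as a table of tests per pillar with pass/fail results. Everything below, the test names, the example vendor, and its results, is a hypothetical placeholder for illustration; only the five pillars come from the framework this talk describes.

```python
# Hypothetical sketch of a replicable ZTNA scorecard. The pillars mirror
# the framework described in this talk; the test names, the vendor, and
# the pass/fail values are illustrative placeholders, not real results.

TESTS = {
    "identity":     ["differentiated_access", "mfa_step_up"],
    "devices":      ["disk_encryption", "av_running", "av_definitions"],
    "networks":     ["untrusted_lan_block", "trusted_lan_allow", "service_cloaking"],
    "applications": ["l7_visibility", "l7_policy", "exploit_protection"],
    "data":         ["pii_dlp", "pci_dlp", "malicious_upload_block"],
}

def score(results):
    """Fraction of all tests passed, across every pillar."""
    all_tests = [t for tests in TESTS.values() for t in tests]
    return sum(1 for t in all_tests if results.get(t, False)) / len(all_tests)

# Example result sheet: passes everything except step-up MFA.
vendor_x = {t: True for tests in TESTS.values() for t in tests}
vendor_x["mfa_step_up"] = False

print(f"Vendor X passed {score(vendor_x):.0%} of tests")  # Vendor X passed 93% of tests
```

The point of the sketch is the methodology, not the numbers: once the tests are fixed, any vendor can be dropped into the same sheet and compared apples to apples.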
So I started with zero trust as a whole. A lot of you are familiar, so I won't spend too much time going down this rabbit hole, but John Kindervag from Forrester coined the term back in 2009-2010. From there, a more recent architecture and reference is NIST's. I'm sure most of us are fans of NIST; we like standards. They built out Special Publication 800-207. It's a long, dry read; if you like technical material, knock yourself out. But ultimately it breaks down zero trust and says: here are seven tenets that should guide you as you go down this journey. And you should
be applying them to the different information technology pillars. You should be doing these zero trust things in identity. You should be doing them in devices, networks, applications and workloads, and of course the data around all of that. We should be getting visibility, and we should have analytics to understand what's going on. We should be automating our manual tasks, and we should be orchestrating communications and capabilities between disparate products and technologies. Ideally, we have governance wrapped around all of that to give us a guiding light on where we need to go. Ultimately, without going into the tenets themselves, this is all about never trust, always verify. But I needed to build upon that, and this is where I started to pull some
test criteria and what I wanted to use for measurements. The Cybersecurity and Infrastructure Security Agency, CISA (it's a mouthful), published the Zero Trust Maturity Model; the most recent one you can see here is version 2.0. The document itself goes into a lot more detail, but it starts to paint a picture: okay, you say you're doing zero trust and you're mapping back to NIST, but how do you measure your progress in your journey? How do you measure where you're successful, and where you still have room to improve? So they provided some categories you can fall into for each of the pillars. Are you in a traditional state, still stuck
in the '90s? Are you starting your journey, in the initial phase? Are you more advanced, having implemented a lot of zero trust capabilities? And of course, are you in an optimal state? Because in security, as you all know, we're never done; it's just, are we optimized? So I started to handpick components from these. When we get to the tests, you'll see I looked at things in the identity pillar like differentiated access and multifactor authentication. I pulled items from the devices side: can we measure device trust and device compliance? What about networks: can we do some network-differentiated access, and can we provide some network-based protections?
From the application side: how are we segmenting, and how are we providing application visibility and protection? And then data: I took some standard bait and tackle, let's just try some data loss prevention capabilities and some data inspection. Once I had picked out a sampling of test criteria, I then needed to decide who I wanted to measure this against. There are a number of players in this space; everybody and their mom advertises some of these capabilities or offers a solution. So I wanted to test my hypothesis: could a framework actually provide metrics and results we can use to make decisions? Can it be effective? I did anonymize the vendors that I chose
because I didn't want anyone here to bring bias into the results, and I also didn't want anyone's name dragged through the mud. These are quickly, constantly changing environments. All of this data is from just two months ago; I would say it's still very accurate and relevant for where things are today, but if you were to look at this a year from now, the results may change. And this is why I think it's more important overall to have a methodology and an approach that you can take and leverage in your own environments as you start to look at these solutions. So there are a few sources, and I have my opinions on who
the leaders in the space are. I grabbed Gartner, because ultimately we have Gartner to blame for the secure access service edge and zero trust network access terminology, and I would say they're directionally accurate when it comes to the leaders in the space. I think it's reasonable for you to assume I probably picked some of the leaders they like to call out. So I grabbed three of those, but I wanted variety. So I also grabbed a niche player: someone that may or may not be based in the Pacific Northwest, may or may not be a developer of operating systems we're all intimately familiar with, and is not necessarily a huge player here. They may not even
show up on some of these charts, but they have good market share. And then I wanted to compare to somebody at the other end of the spectrum, so I grabbed a player that's more home user, lab user, small and medium business oriented. Let's see how all five of them stacked up and how well they could show whether this framework hypothesis was effective. So let's talk about the lab environment and what was actually built. Hopefully everyone can read some of this, but I will summarize. I had to create two environments to test all of my criteria. One was an untrusted environment where I built out two Windows 11 laptops, called client01 and client02, with two users, user one and user two, on the corresponding machines. Client01 I used as the compliant, secure device: let's turn all the security controls on. For client02 I said, no, let's work under assumed breach. Let's reduce some of its security capabilities and maybe equate it to a compromised machine. In the untrusted scenario, I was replicating a coffee shop, a work-from-home situation, or being here at BSides, where you really can't trust the other people around you; they are absolutely people that could potentially attack you. That is untrusted. On the data center side, I built out three servers. One was
the ZTNA proxy, which I would swap out for each of the vendors; that was built with their custom images. I also built out a Windows Server 2016 box to act as a file server, and a Linux Ubuntu server running DVWA, Damn Vulnerable Web App, to emulate a vulnerable web application. I called those file01 for the Windows server and web01 for DVWA, all sitting behind a firewall. In the trusted environment, the data center stayed the same; I didn't change anything there. But to emulate a branch office, a remote office, or a headquarters, I said: okay, we're sitting behind a firewall, we're in a trusted network, and ideally we're going
to have other local applications and services we need to access. So I built out another Ubuntu server, threw Apache on it, and called it web02, basically a localized resource in a trusted environment. So this paints a picture; this gives you an idea of what my basic testing environment looked like. Now we needed to look at what we were actually going to test. I started with the identity pillar, and I tried to begin with something I felt everyone should be able to accomplish: just differentiated user access. If I have user one, who is a more highly privileged user, and user two, who maybe is somebody I don't trust as much, can we
provide different levels of access? Specifically, my success and failure criteria were: user one should be able to access both applications, the file server and the DVWA web server; user two should not be able to access the web application, only the file server. Pretty fundamental, pretty basic; everybody should be able to accomplish it. Then I added another identity factor: let's integrate with MFA. In this case I used Microsoft Authenticator. Pretty standard; almost everybody leverages it at some point or another. And let's do step-up authentication. We know we have a vulnerable, more sensitive web app, so when user one attempts to access DVWA, we should be able to prompt them for
step-up authentication and say: we want to trust that this is a legitimate request, so validate it through the authenticator, and then you get access. If user one was able to access the web server without passing multifactor authentication, or the solution lacked the ability to do that, it was marked as a failure. Then we shifted over to the devices category. Again, I tried to start out with some relatively easy things that everybody should be able to accomplish, the keyword there being "should." I used BitLocker to configure disk encryption on client01 and kept it disabled on client02, and said: okay, let's determine whether compliance and device trust can be established by the ZTNA
solution validating whether disk encryption is configured and active. Then we shifted to endpoint protection. Again, I just leveraged Windows Defender: pretty straightforward, very common. Let's start with the basics: can you even detect and evaluate compliance status based on whether Defender is present? Then let's build on that: Defender is there, but is it actually running? Is the process in memory? Is it enabled for real-time protection and real-time enforcement? Then I expanded one step deeper and started to do some adversarial work. Defender is there, and we've evaluated that its real-time protection is enabled. Now say I have some malicious
tooling that I need to leverage, and for whatever reason I can't, or choose not to, obfuscate it. An approach other than disabling real-time protection, which would throw up huge red alarms for the SOC when I want to fly a little under the radar, is to just remove the Defender signatures. We keep it running, it's doing real-time protection, but it doesn't have signatures to match anything against. So I can load malicious tooling on there and fly under the radar a bit. I just leveraged the MpCmdRun executable for that and removed
the definitions from client02. Then, beyond determining device compliance status and evaluating successes there, we moved on to the networks pillar, and this is why I had to build out both an untrusted and a trusted network. Starting with the basics: if you're in an untrusted environment, if you're at BSides in Portland, we should be able to give you access to your private apps, but the solution should also detect that you're in an untrusted environment and block any local area network traffic, so people at BSides can't attack you and you're not going to respond to any of those attacks. The solution should be able to identify
that and ideally be able to enforce it. If not, we mark it as a fail. But then let's flip it: now you're at a trusted location, a headquarters, remote office, or branch office, and you do have local resources you need to access. So I expect the ZTNA solution to identify that and provide you access accordingly. Let's test both of those sides. Then, going back to some more adversarial work: if I compromise a remote user who's using zero trust network access for their apps, one of the first things I might do is some
recon. What do I have access to? In this case I kept it easy and simple: I threw Nmap on the Windows laptops and said, okay, let's start scanning the services across the ZTNA solution. Ideally, if the solution is successful, I'm not going to be able to enumerate open ports, and I'm not going to be able to fingerprint those services and the versions they're running. If I can, that's a failure. Then we moved into the application side. Here I expanded a little: let's start with some fundamentals. I expect these solutions, in today's world, to have some layer 7 capabilities. We should be able to see
application protocols. In this case I picked SMB, since we have the file server. Success means we can log and get visibility into the SMB protocol and the traffic itself. If the solution doesn't recognize SMB and is limited to layer 4, only seeing the port and calling that SMB, that's not good enough. Then let's build on that: okay, you can see layer 7, but can I build policies around layer 7? Can I write a rule that says user one and user two, or those groups, are allowed to access the file server using SMB, and I don't care what port
it's running on? The success criterion there is that we can actually build that policy with real layer 7 conditions, not just port mappings. After that, I started to have a little more fun. I specifically picked Server 2016 because it's still vulnerable to MS17-010, commonly referred to as the EternalBlue exploit. Let's test fundamental security inspection and vulnerability protection. I threw Metasploit on the Windows machines (super common; I know we all love running Metasploit on Windows) and used the MS17-010 psexec module to try to exploit the Windows server. In this case, I wasn't going for a reverse
shell. I just wanted to see whether I could issue commands, have them execute as SYSTEM on the Windows box, and create a local file on the C: drive. We can use that to test whether protection is in place, whether the exploit succeeds, and whether it can be detected and alerted on. Then we got into the web attacks. I threw DVWA in there for a purpose: I wanted to see what kind of web protections we have against the OWASP Top 10. I grabbed a sampling and said, okay, let's test for some local file inclusion. A good solution should be able to pick up very basic, fundamental items: easy, let's just
check for /etc/passwd and look at what accounts are on there. Then I put in one more test: I created a local text file, in this case specifically under the /var directory, so it wasn't directly accessible from the web application, but with local file inclusion I would be able to reach it. That's similar to somebody trying to grab something like the web.config, which shouldn't be accessible. Then we moved on to command injection. Again, interesting to see how things would turn out, but let's throw some commands at a vulnerable web app and see if we can exploit it. So I threw id in there. Very basic, but
it lets us check the identity of the user running that service. Then, for variety, I tried a few other commands, but I standardized on pwd: let's just check what present working directory we're in. Should be table stakes. Similarly, I did the hello world of SQL injection: let's just do 1=1 and see if we can dump a table. For variety, as well as for testing methodology, I expanded that: let's also try dumping the table with a different expression, say 8<9. Everybody should be able to pick up on
these, right? Finally, on the data pillar: basic testing, but also looking for basic data inspection. If the solution was capable, I wanted data loss prevention in place. So I grabbed a text file and a PDF containing PII, personally identifiable information: full names, addresses, phone numbers. It's a success if the solution can detect and block that; it's a failure if I can download those files, staged on the Windows server, to the remote Windows machines. Similarly, I had a text file and a PDF with PCI data. Build out some PCI rules with standard dictionaries: let's look for credit card numbers, CVV numbers, full names. The solution should
be able to block that. Then, for fun, I added some malicious file detection and reversed the data transfer direction: okay, if I'm on a compromised endpoint, what happens if I take some malicious payloads and just try to upload them to the Windows file server for staging? I grabbed a couple of EICAR files: one was the text file, another was the .com or batch-script variant. And since I already had Metasploit on the Windows machines, I used msfvenom to create a very simplistic, vanilla 64-bit Meterpreter reverse shell executable. That should be very easy to pick up, signature-based, nothing fancy. I can always expand; I can always build
more. I can always try other tradecraft and tactics, but all of this should be easily testable. So let's see how things turned out, what you're actually here for. I categorized the vendors as market leaders A, B, and C, the niche vendor as D, and the small and medium business vendor as E. As I hoped and expected (hopefully everybody can see this), the user-based differentiated access was pretty straightforward. Everybody was able to do it, just as I'd hoped, because you're not even a ZTNA provider if you can't do the most fundamental thing, and this all goes back to identity. But where I was a little surprised was
the multifactor step-up authentication, which should be table stakes, easily part of a zero trust approach. That's where vendor C, a market leader, failed. The reason they failed wasn't that they didn't support MFA; it was that the only configuration they supported was multifactor authentication upon logging in, when authenticating to the ZTNA service itself. The front door. They did not have the ability to configure any kind of multifactor step-up for specific applications or services you're trying to access. For a market leader, I was a bit disappointed in that. Everyone else was able to achieve it. Then we got to the devices side. Again,
some interesting results, but not necessarily for the reasons you'd expect. Disk encryption; endpoint protection, is it there; endpoint protection, is it running and doing real-time enforcement; and even the endpoint definitions: vendor D, the niche player, failed all of those. It wasn't that they couldn't do it. It was that this is a large platform player, and if you want these capabilities, oh, you've got to pay more money. That's separate licensing. That's a separate product. That's a separate dashboard where you have to go configure those things. It's not natively or inherently part of their zero trust network access solution. I'm not playing those games.
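As an aside, the device-trust logic all of those checks exercise boils down to a posture evaluation like the sketch below. The signal names and the two client profiles are illustrative stand-ins for what a ZTNA agent would actually report, not any vendor's real schema.

```python
# Sketch of the device-posture evaluation these tests exercise: a device
# earns "compliant" status only when every posture signal is healthy.
# The signal names and the two client profiles are illustrative, not any
# vendor's actual agent schema.

REQUIRED_SIGNALS = ("disk_encrypted", "av_installed",
                    "av_realtime", "av_definitions_current")

def is_compliant(device):
    """True only if all required posture signals check out."""
    return all(device.get(signal, False) for signal in REQUIRED_SIGNALS)

# client01: the hardened machine (BitLocker on, Defender fully healthy).
client01 = dict(disk_encrypted=True, av_installed=True,
                av_realtime=True, av_definitions_current=True)

# client02: the "assumed breach" machine. Defender is installed and still
# doing real-time protection, but its definitions were stripped, so it
# has nothing to match against.
client02 = dict(disk_encrypted=False, av_installed=True,
                av_realtime=True, av_definitions_current=False)

print(is_compliant(client01), is_compliant(client02))  # True False
```

The "definitions current" signal is the one that separates the vendors in the results that follow: a solution that only checks whether the AV process exists would wrongly mark client02 as trusted.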
That's a fail in my book. These are inherent components that a ZTNA solution should be able to provide. Everyone else was able to pass those checks except for the small and medium business vendor. Ultimately, they could detect Defender being present, and they could detect real-time protection being enabled and enforced, but they fell short on the definitions: they could not evaluate them. So I could remove the Defender definitions and play around to my heart's content; the solution just couldn't recognize that and change compliance status based on it. From there, things started to get interesting. Network segmentation: being able to detect whether I'm
in a trusted location, where local network access is allowed, or in an untrusted location, where local area network access needs to be restricted and blocked. Vendor D once again failed, this time not because of a separate product or licensing issue; they just don't have a big network security play. They're not a networking vendor and haven't been, so they simply did not have that capability. Vendor E failed both of those network location checks because they're just not a big enough player. They don't have those capabilities; they'll treat you the same regardless of which location you're in. They have no network recognition and cannot block or restrict local network access. Then we got to the service cloaking. I came in with no real expectations, but I
was still kind of surprised by the results. With the service cloaking, again, I'm trying to identify whether I can move laterally, or at least start to do some recon laterally, from the remote endpoint to the applications in the data center. Vendor A, who I thought was going to be able to do this, didn't do it at all. I was able to leverage Nmap: I could scan and see what ports were open, and I could do version fingerprinting and determine exactly what applications were running over there. That was a big miss, and one I didn't expect. Vendor B was able to prevent me from fingerprinting. So I couldn't see what
versions of applications, or specifically what applications were running, but I was able to identify what ports were open, and I could use some manual effort from there to do version fingerprinting. So I give them a partial win, a partial failure; glass half full, glass half empty, take your pick. But they were able to do a little bit of it. Vendor C, I actually didn't expect to be able to do this, but they were the best. They were the only one that could say, "Oh, you get nothing, Darren. You don't get to see what ports are open. You don't get to do any version fingerprinting. But if you
open up Explorer and navigate to that file share, here you go. If you open up a web browser and want to browse the web application, here you go. Those are legitimate requests." That was the only vendor I found that was actually successful at proper service cloaking. And vendor D also surprised me, because again, they're not a strong network player, not a strong network security player, yet they also stopped me from fingerprinting. I could see what ports were open, but they were just like vendor B, so I gave them a partial. So that was where things started to become interesting. But then I got to the applications pillar. And
this was where we started to see the divide between the players get a lot bigger, when it came to layer 7 capabilities: who could actually identify the SMB application, and who would let me build policies around it? It was only vendor A. Regardless of what everyone else advertised, marketed, or promised, they were really still doing static mappings between ports and applications; they were not able to build policies beyond that, and they weren't able to actually look at the SMB protocol itself. So vendor A was the only one with that capability. Then we got to exploiting MS17-010. Again, I expected a little more from the market leaders, but vendor A also
was the only one capable of detecting my exploit attempt, blocking it, and alerting on it: "Hey, big red flag, someone's actually trying to exploit a vulnerable app over here, and we stopped it." I was happy they did that, but I was disappointed that everyone else said, "You want to exploit it? Feel free, go ahead, have fun." Then things continued to grow my interest. On the local file inclusion attacks, vendors A and B were both able to block my LFI attempts. Great. But when we got to command injection, this surprised me; I wasn't expecting it. Vendor A blocked both of my command injections, with basically
just issuing ID and PWD. I played with other commands, ls, etc. And they picked them all up. Great. Vendor B only picked up the ID command. I could run pwd. I could run a lot of other arbitrary commands. It just let it fly right through. Didn't pick up on it. Didn't alert to it. Didn't block it. which tells me and should tell all of you, they're essentially working with like a very small subset of a dictionary or of signaling that they're looking at. So, there are plenty of ways to bypass that. Plenty of ways to be able to perform command injection against a vulnerable web app. Just don't do the obvious ones. But then things flipped on me when it
came to SQL injection. Vendor A, which was doing so strong in all of these areas, blocked the 1=1. Great; everybody should. But then when I said 8 is less than 9: here you go, here's your table. Totally didn't pick that up. And again, it's like, okay, you're telling me you basically just programmed this with a tiny little dictionary. You're only going to catch the hello worlds of the space, and it's easily obfuscated, easily bypassed with anything else. And vendor B, because of the command injection issue, I expected them to fail this either partially or fully. Nope. They caught both of them. They caught all the other
SQL injection attempts that I made. It's like, oh, okay. So you're better at SQL injection than you are at command injection. Interesting. Everybody else, including vendor C, failed all of these attempts. Not even close. And then finally, we got to the data pillar. Also some fun nuances here. So, vendor B was the only one that was able to do PII and PCI detection and blocking. That was fantastic. I actually wasn't sure if they'd be able to do it. Vendor A, I configured for DLP. I said, "Please look and detect. Use your PII/PCI dictionaries." I could copy those files down from the server to my remote machine as much as I wanted. And it really confused me. I'm like, why
am I able to exfil this data when you're so strong everywhere else? When I dug into it, come to find out, their DLP inspection is limited in what protocols it supports. If I was doing this over HTTP or HTTPS, fine, but it doesn't support SMB. Total blind spot, along with some other protocols it totally doesn't care about. That was a big miss for me. Now, if you look into their whole portfolio, they have a solution they can sell you that will do that, but it's a separate product, a separate purchase, and a separate agent you have to install if you want to cover those other protocols like SMB. So that was a huge miss I didn't see coming. And
then lastly, taking my EICAR files, taking my Meterpreter reverse shell binary, and uploading those from the remote machine up to the Windows file server. Everybody had a bit of a miss here. Vendor B said, "Yeah, go ahead. Pass those files all along." Part of it is because I think they're lacking some good malware detection, and another part is I found that they only really inspect traffic for malicious content, and arguably DLP, one way. They don't actually look bidirectionally to do these inspections, which was a big gap. So I could pass malicious content up to my heart's content. Vendor A I marked as a partial because... oh, was there a
question? Do you have time for a question? >> I should, at the end, if I land the plane on time. >> Okay. All right. If not, you can easily pull me aside. So, vendor A, I gave it a partial because, again, they detected the EICAR files. Great. Easy. The binary executable I had developed with msfvenom, again, I didn't do any obfuscation, didn't do any XOR encoding. Nothing fancy here. Super vanilla. But it didn't match any signatures. So this solution would allow me to do the first upload. It would sandbox it, but obviously, leaning towards operability, it wouldn't get a response back from the sandbox in time, and it would allow me to
upload that file for the first time. If I did subsequent uploads, you know, try 2, 3, 4, 5, it would block it; it has signatures for it by then. But weirdly enough, if I just regenerated the binary with msfvenom, every time I generated a new one, same thing. It would try to sandbox it, and it would allow the first upload. In a real-world situation, I think you can tweak the configuration, especially if you're willing to impact end user behavior and operations and say you're going to wait. I think you could turn that into a full block. But with a lot of the basic and default config, and the fact that most environments will lean towards user
operability, that was not a big win. So that was a partial, but I was surprised that nobody else picked this up. So, as a whole, here's your full table, and it really told a story, and it proved my hypothesis even with just basic testing. I was like, okay, should I get more advanced? Should I do some stronger adversarial emulation? I didn't need to. Just fundamental tests like we talked about here showed that the small and medium business vendor fell right off as soon as we got into the network, applications, and data pillars. The niche vendor could do a lot more than what's shown here, but it's going to require more of your money, and it's going to
require more admin work, and it's going to require you to go into more dashboards and do more config. Vendor C held pretty strong right up until we got to the applications and data side, and then they fell off. Even being a market leader, one of the top three, they still couldn't achieve these capabilities. And then A and B, those guys were duking it out. They were going at it, exchanging blows. Both of them had pros and cons, but neither one's perfect. So it really showed that when you're down-selecting which of these providers is going to make the most sense for you, you really should be testing these things out, because, as you can see, the results
will vary and may not be up to your expectations. Now, I did mention I'd touch on risk reduction, and I'll probably breeze through this a little quickly because we don't have a lot of time left and I want to be respectful of your time. But I had been asked by another customer in a situation like this: what's an easy win? Obviously, I can do a lot of identity controls, device controls, but what's the biggest, easiest win I could get to reduce this residual risk from the ZTNA provider? And these folks will come out and tell you, "Hey, here's our deployment architecture. Just drop this virtual machine in where
your applications are. You don't need firewalls. Don't worry about that. We can even replace them." Please, please, please don't. Okay? If you test this out, one of the first things you will learn, and what I absolutely, strongly recommend: don't get rid of your next-gen firewalls. Right? A lot of the deficiencies and risks these guys still have, and that you will be owning, can be mitigated just by deploying a true next-gen firewall, configuring it appropriately, and placing it as a boundary between their virtual machines and the actual applications your end users need to access. And then lastly, if there's anything to take away from this, it's: do not trust. Always, always
verify, and test, test, test. Use a lot of the framework that I started here and build out your own testing methodology. Put some of these use cases in your environment when you're doing a proof of concept. Build out applications in your local segment and subnet that are relevant for your business and your environment, and test all of these concepts, broken down into these pillars, to see which player is going to fit best for you. A year from now, use some of this to start to evaluate where they're at, because I guarantee you things are going to change. But the question is how much will they change, and what are some new or still remaining
risks that they have. Lastly, I did want to note I'm not the only person doing research in this space. I'm not the only one that likes to break things; that's why most of you are here, I'm sure. But if you were at DEF CON this year, I highly recommend going and looking at this talk. It's publicly available. They even recorded it; I think the video is on Vimeo, and the PowerPoint is available as a PDF. These guys from AmberWolf, David Cash and Rich Warren, did some phenomenal research and took a totally different approach than I did. I'm looking at data in transit and a lot of the network side of things. They said, "No, let's see if
we can break the authentication. Let's see if we can reverse engineer the client and the binaries. What does that look like?" We all assume that these security companies are following secure development practices. In reality, they're making some really bonehead mistakes and doing some things that are just shocking, and that's a lot of the content that AmberWolf provided. So I highly recommend taking a look at that, because it may give you additional use cases and test criteria, especially if you're a higher-security environment, or if you don't want your endpoints and the zero trust network access software you're installing to become a potential compromise.
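The SQL injection finding earlier, catching 1=1 but missing 8 is less than 9, comes down to matching literal strings rather than recognizing tautologies. A minimal sketch of that difference, with a hypothetical signature list and simplified regexes that are illustrative only, not any vendor's actual engine:

```python
import re

# Hypothetical signature dictionary modeled on the behavior described
# above: the engine only knows the literal "1=1" tautology.
LITERAL_SIGS = [r"\b1\s*=\s*1\b"]

def naive_blocks(param: str) -> bool:
    """String-matching filter: flags only payloads in its dictionary."""
    return any(re.search(sig, param) for sig in LITERAL_SIGS)

def tautology_blocks(param: str) -> bool:
    """Evaluate any OR-ed constant numeric comparison instead of matching
    literals, so 8<9 is caught the same way as 1=1."""
    m = re.search(r"\bor\b\s*(\d+)\s*([=<>])\s*(\d+)", param, re.IGNORECASE)
    if not m:
        return False
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return {"=": a == b, "<": a < b, ">": a > b}[op]

print(naive_blocks("' OR 1=1 --"))      # -> True: the "hello world" payload
print(naive_blocks("' OR 8<9 --"))      # -> False: the bypass from the test
print(tautology_blocks("' OR 8<9 --"))  # -> True
```

The same dictionary-versus-semantics gap explains the command injection results, where one vendor flagged only id and let pwd and arbitrary commands through.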
And with that, I do want to thank everyone for their time today. I am always available. You can grab me in the hallways, or you can reach out to me on email or LinkedIn. If you like dry reading material and some other crappy pictures, you can always reference my white paper and download that. And with that, I will open things up for questions. [applause]
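For anyone who wants to reproduce the malware-upload test from the talk, the EICAR string is the safe way to start: it is the published 68-byte antivirus test string, harmless by design, that mainstream inspection engines are expected to flag. The sanity checks and packaging below are just one way to use it:

```python
# The industry-standard EICAR antivirus test string (68 ASCII characters).
# Content containing it should be flagged by any mainstream engine, which
# makes it a safe stand-in for "known malicious" in upload tests.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}"
    r"$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

# Sanity checks before a test run: exactly 68 bytes of plain ASCII,
# no trailing newline.
print(len(EICAR))             # -> 68
print(EICAR[:4], EICAR[-4:])  # -> X5O! H+H*

# To run the upload test, write EICAR to a file (e.g. eicar.com) and push
# it over each protocol path the ZTNA tunnel carries, SMB included, to see
# which paths are actually inspected.
```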
>> Yeah, by all means, go ahead. >> Hi. The tests that you were showing that had to do with Layer 7 inspection and injection and all that: in your testing, was all the enforcement done on the endpoint, or were you also using the proxy or server-side components of enforcement and detection? >> Yeah, solid question. So all of the enforcement and inspection was being done in the cloud. The data had to actually transfer from the remote user up to the cloud service, and that's where the inspections occurred. It wasn't at the endpoint. >> So those solutions were not doing even basic Layer 7 firewalling in their cloud? >> Correct. Yeah, very surprising and
shocking even for me. >> Thank you. >> Yeah, absolutely. >> So, I'm not too familiar with zero trust, or wasn't until your talk. Great explanation. But seeing as there were deficiencies in some of your test categories across all these different vendors, and also since a lot of these capabilities seem to be, like you said, duplicative of the next-gen firewall, or I was thinking about web app firewalls and other security solutions, does this kind of call into question the value proposition of one unified zero trust network access solution? Like, why buy this extra tool that's doing it all when it's not really doing all of it well, I
guess, you know? >> That's a solid one. So a lot of it came down to: they are promising all of these things and capabilities, but they're just not delivering on them. So I think there are a couple of reasons why folks would still want to transition over. Part of it is moving from a capital expense model to an operational expense model and being able to have fewer on-prem requirements and less on-prem equipment. Although, as you saw, you still have these virtual machines that you may be responsible for, but you're not necessarily responsible for the upgrades or updates of those. So, ideally, you're picking a ZTNA provider that will be operationally
easier and free up employee time that can be used elsewhere. But yeah, this was a bit of a reality check. They're promising you can do all these things, and like I said, some of these vendors will come in and say, "Oh, you can get rid of those firewalls. We've got all the security. Don't worry about it. Trust us." But this proved that it's not actually there, and you definitely need to identify what those residual risks are and adjust accordingly. And no, you're not getting rid of firewalls. At best, what you're doing is getting rid of VPN concentrators. So, great question. >> Thank you. Thank you for this, very informative. I was just
wondering if you repeated the same testing matrix against the other vendors that you had listed in the Gartner Magic Quadrant. And my assumption is you did the Magic Quadrant leaders, and then Microsoft. >> So I guess I will dance delicately on that subject. Microsoft may or may not have been one of the vendors that I tested, and I'll be happy to have that conversation with you on the side, because I felt like they may be a worthwhile contender. I get a lot of those conversations, especially as companies are going Microsoft all-in: "I'm already paying for all the licensing, so why not?" Especially from the C-levels. So yeah, it would be a safe assumption that I
probably included them in that matrix. So, solid. Anybody else? >> From like a career-pathing perspective, I feel like a lot of what you described was beyond, like, identity access management and also deep into networking. What advice would you give in terms of skills to pursue a similar path? >> Ooh. So that is a tough one. Oftentimes, and I take this for granted, as you start out in information security or cyber, you'll probably want to pick a path and pick a vertical where you can get deep, and then expand from there. I started out doing a lot of sysadmin
work, doing a lot of Windows and Active Directory. I expanded into network engineering and then more into dedicated cybersecurity, especially as I started to see how complex and difficult security was. I was like, "Ooh, I like that. That's hard." But generally, going down the zero trust pathway is difficult. There are folks that will solely stay in one of those disciplines. I always assume that everybody has some kind of networking background and that we understand what subnets are and some fundamental network architecture. But there are some of my co-workers that live in identity. They do identity access management, privileged access management. That's all they know. They can, like, write down an
IP address, but they have no idea what subnetting is, or a CIDR mask, or anything like that. And they're absolutely phenomenal at what they do. So I think it's a matter of: if you want to start going down that zero trust route, you should be decently deep in at least one of those disciplines, like identity or networks or data protection, and then expand out and start to learn more about the other disciplines. Myself, I don't see myself as an expert in most of those disciplines. I lean on my co-workers and my colleagues, but I at least know enough to be dangerous, and I enjoy learning, I enjoy breaking things, and I'm always humbled by the
research that other folks are doing as well. So hopefully that makes sense. >> Yeah. So you're essentially picking one of those pillars that you just mentioned in your testing, going deep in that, and then >> Absolutely. And expand from there. Yeah. But don't, and I see some folks do this, try to be super deep on all of those pillars, or you will not have any free time in your life; your family will give up on you and say, "You know what? That guy's too focused." So, all right. Thank you all for your time. Greatly appreciated. [applause]
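The "build out your own testing methodology" advice from the talk is easy to operationalize as a per-pillar scorecard you re-run at each evaluation cycle. A minimal sketch; the vendor names, tests, and outcomes below are made-up placeholders, not the talk's actual results:

```python
# Hypothetical scorecard with the same shape as the comparison table in
# the talk: individual test outcomes rolled up per vendor and pillar.
from collections import defaultdict

results = [
    # (vendor, pillar, test, outcome), outcome in {"pass", "partial", "fail"}
    ("A", "applications", "command injection", "pass"),
    ("A", "applications", "sql injection", "partial"),
    ("A", "data", "dlp over smb", "fail"),
    ("B", "applications", "command injection", "fail"),
    ("B", "applications", "sql injection", "pass"),
    ("B", "data", "pii/pci detection", "pass"),
]

SCORE = {"pass": 1.0, "partial": 0.5, "fail": 0.0}

def scorecard(rows):
    """Normalized score per (vendor, pillar): 1.0 = every test passed."""
    totals, counts = defaultdict(float), defaultdict(int)
    for vendor, pillar, _test, outcome in rows:
        totals[(vendor, pillar)] += SCORE[outcome]
        counts[(vendor, pillar)] += 1
    return {key: totals[key] / counts[key] for key in totals}

for (vendor, pillar), score in sorted(scorecard(results).items()):
    print(f"vendor {vendor} / {pillar}: {score:.2f}")
```

Re-running the same rows a year later, as the talk suggests, makes vendor drift visible as a simple score delta per pillar.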
>> [music]