
In the beginning was the internet, okay, the ARPANET, and it was good, and many protocols sprang forth: FTP, Gopher, finger. But one protocol exceeded them all: HTTP. And HTTP grew and waxed strong, and Mosaic came forth, and Mosaic begat Netscape the Navigator, and it was good. But there was one among them named MCI, who pondered within themselves and said, "This is fun and all, but how do we make money? For requests are all alike, stateless, and cannot be separated one from another. And frankly, you try building a shopping flow in these conditions." So a young engineer stepped forth named Lou, and Lou created the cookie, and with the cookie was born the session, and with
the session came those who desired the session of another, and that was a problem. So, welcome everyone. I am thrilled that you have come to hear me speak at 5:00 on the next-to-last day of a conference, and I'm standing between you and beer. Well, I guess beer just happens, so maybe that's why you're here. I have some notes for this little portion. First off, how many people attended the talk at 2 o'clock by Raphael Felix that was also talking about device-based session credentials? I was cringing as he was going through that, because with the level of detail he went into about cookie storage on disk, if he had done that with device-based session credentials, this would have been a 15-minute talk. But he didn't. He left me some content, and so all is good. A couple of disclaimers here. First off, pictures: more than welcome. Please take pictures of me, take pictures of the slides. There will be a QR code and a URL at the end if you would like to download the slides, so save the disk space on your camera roll if you prefer. I would like to say that these thoughts do not represent my employer, but I'm self-employed, so that won't hold up in court. So, who am I? I
started my career monitoring websites, like red, green, it works, it doesn't. Then I had a job explaining to executives why websites were broken. Then I got into being a project manager for developing websites, and eventually, I guess, more of a product manager. And finally I decided all that was super boring, and so I got into security. That was getting on close to 15 years ago, and I've been a pen tester, primarily applications, ever since. So hopefully that gives me some basis to talk about these topics today. It might also mean I'm just old and opinionated. So I have a question. I have an assumption about the audience here, and I would like to ask: how many of you would consider yourself a developer as your primary job role? We've got a couple. Okay. How many of you write authorization checking, session management code? Hey, we've still got some. Excellent. How many of you spend more of your time talking to the people who are writing that code? A lot more. Yes. Okay. This is BSides. That is expected. I've actually given this talk once before, and the focus was very different. I feel like in this case I'm sort of preaching to the choir a little bit. I don't need to convince you that adding the Secure
attribute to a cookie is a good idea. So instead of doing that, I hope to present you with some stories, some logical arguments, that you can use when you're having those conversations. Hopefully they'll help in, I'm going to use the word battle, with probably the product manager over why this particular feature should be put in this sprint instead of being punted out to Neverland. And besides, I'm really tired of writing the Secure attribute finding in pentest reports. Okay, so first story. About two years ago... how many of you remember the Conti leaks? Conti was a ransomware group. Some presumably disgruntled member took all of their internal chat logs, zipped them up, and put them out on the internet. And so a bunch of security folks such as myself, just for kicks and giggles, went through it and saw what was there. And other people who paid a lot more attention than I did realized that there was a section of Telegram channels where they said, "Hey, y'all ought to hang out here." And so I did that, because you can do that on Telegram. Why not? And so I was in these groups for a while and didn't really say anything. There was mainly just news: new tool released, so-and-so got arrested.
Sometimes there were cheer emojis, sometimes there were tear emojis, but it was just, you know, a Telegram group in that community. And then one day somebody hopped in and said, "Hey, the support channel for such-and-such infostealer," I honestly don't even remember which one it was, "is now going to be on this channel." And so I thought, golly, that's where I need to be. So I joined the channel and just continued to lurk, and it took about three days before I got this chat. For those who don't speak Russian: somebody jumps in and says, "Hey, I heard you're a cookie market," and there's a little conversation. "Well, that's not here. This is just a simple Telegram channel, completely innocent. But if you want to go over to our other chat, or the actual market, go ahead and sell away." Now, I'd always heard that there was a marketplace for stolen cookies, but I always thought that's kind of dumb. Cookies are short-lived, right? Why would you do that? But apparently it's valuable. And we just won't mention, like, Google, where that session lives for a month, which kind of drives me crazy, but not crazy enough to delete the cookies, so I guess I can't complain. Anyway. So I guess story number one to me that you should
use is that cookies actually need to be defended. So I made this lovely graphic that tries to capture the majority, maybe all, of the ways that cookies get attacked. There are bombs, which are attacks, and there are shields, which are defenses, and they all boil up into categories of how a cookie might be compromised. Today we're really going to speedrun, but we're going to go through and address most of these columns. I'm going to skip "guess the token." Keeping your tokens random: good idea. There, we've covered it. All right. So, session protection on the wire. Hopefully you don't mind, I was fascinated by the trivia, the history of some of these defenses, and so I'm going to share it with you. It's not going to help you win an argument, but it's fun. So, the first one: RFC 2109. The drafts for this date back to 1995, and it's the original cookie RFC, co-written by Lou Montulli, the Lou who quote unquote invented the cookie. And he realized right from the start that one of the risks to a cookie is that somebody could see it as it was transmitted over the wire. Now, when he was writing this, SSL wasn't a thing, or at least not a public thing. It was actually being developed in parallel there at Netscape.
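To ground that: the mechanism the RFC describes is just a response header, and anyone watching the wire can read it if the connection isn't encrypted. A minimal sketch with Python's standard library (the cookie name and value here are invented):

```python
from http.cookies import SimpleCookie

# A sketch of what's at stake. Without any attributes, this
# Set-Cookie header, and the matching Cookie header on every
# later request, crosses the wire in cleartext over plain HTTP.
cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-token"
print(cookie.output())
# Set-Cookie: session_id=opaque-random-token

# The defense the RFC introduces: mark it Secure, and the browser
# will only ever send it back over an encrypted connection.
cookie["session_id"]["secure"] = True
print(cookie.output())
# Set-Cookie: session_id=opaque-random-token; Secure
```

That one extra attribute is the whole first layer of defense this section is about.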
And so in the RFC, the text on the Secure attribute, I think I actually highlighted it there, says the cookie should only be sent using a quote unquote secure means, which it leaves unspecified. In practice that was always SSL, which has since been succeeded by TLS. So Secure was that very first defense. If you put the Secure attribute on the cookie, the browser knows: only send this over an encrypted connection. And if everybody would just do that, for the most part we wouldn't need any more defenses. But as we all know, people don't just do that, and so we have more layers. All right. So the risk we're defending against is here; I've got this lovely picture of an MITM attack. The Secure attribute, like I said, would be great if people used it. But 16 years ago this week, some jerk named Moxie Marlinspike released sslstrip. He realized that people weren't actually using the Secure flag much, and so he wrote a tool, one that had been theorized and possibly implemented before, but his made the attack really easy. If I can convince you to talk to me as your proxy server, then anywhere there's an unencrypted connection, anywhere you make a plain HTTP request, sslstrip will rewrite the HTTPS links in the HTML down to HTTP, keep your browser on unencrypted connections, and if your cookie is not protected by the Secure attribute, it will slurp it up. Very handy, very useful, ruined the day for a lot of people. So, how do we defend against that? In 2009 we got this one, which, wow, you can't read that: HTTP Strict Transport Security. I'll move a little quicker here. That header directs that once I've made a request and the website responds with that header, my browser will never again, at least until a timer expires, but essentially never again, make a connection to that website that doesn't use HTTPS. Okay, so this is why I've gone down this rabbit hole: what about that first connection? How many of you are familiar with the HSTS preload list? We got a handful, but enough that I don't feel stupid. So, there is this website, hstspreload.org, managed by Google but honored by all of the major browsers. If you register your site, put it on this list, the browser will never, ever make a plain HTTP request to your website. Not even the first time. Not ever. Done. So that's a defense you can put in your
pocket. And to be honest, your developer doesn't have to implement it. You can do it, and it'll be done. All right. But is that really a problem? I mean, network segmentation, client isolation, who can really get a man-in-the-middle proxy, or machine-in-the-middle proxy, running these days? Some bloke in Australia gave us fodder for that. And I say bloke because Australia. So last year there was a gentleman, we shouldn't call him a gentleman, there was a crook who was setting up Wi-Fi access points named the same thing as, like, the Sydney airport Wi-Fi, probably also the Starbucks there, probably a hotel there. Then he would sit in airports, wait for people to connect to him, and at that point he is literally the proxy, right? That's what they thought was going to happen. And so he sat there monitoring all their traffic. Now, this is great for story time. In reality, he wasn't slurping up session tokens. He was just looking for nudes, which, really, I don't know. But the article still works for our story time on why we should spend a little bit of effort and keep our cookies safe on the wire. I've got a checklist here. It's in the slides. Not going to speak to it. All right. Next up, we've got JavaScript injection. And if you'll pause again for a little bit of trivia history: why is it called cross-site scripting? Does it have anything to do with, say, cross-site request forgery? No. I mean, a little, but no. When we're talking about this attack that is known as cross-site scripting, what we're really talking about is injecting JavaScript into the client's web page. It has a whole lot more similarities with SQL injection or command injection. An earlier speaker referred to it as client-side remote code execution, which is pretty descriptive. I could see that. Anyway, I'm on a little crusade to have us refer to it as JavaScript injection instead. The history behind it, in one minute. Back in 1999, Microsoft, so we can blame
Microsoft for the name here, Microsoft was monitoring their forums, and they started seeing links going off to American Express. I've got it listed out here: there's an image tag, a broken image is what that is, and the URL was going off to an American Express site with a bunch of URL-encoded nonsense at the end. So they decoded that and realized it was JavaScript, and they realized that what was going on was a reflected cross-site scripting, a reflected JavaScript injection, attack on the American Express site, and the attackers were using the popularity of Microsoft's forums to have people come and visit. Anybody who had an active session with American Express, their cookie was getting slurped up and sent off to the evil attacker. So there's your trivia. That's where "cross-site scripting" came from: it was going from the Microsoft forum to American Express and then off to the attacker. How do we defend against it? Everybody knows this one, right? HttpOnly on your cookie. Easy. Microsoft shipped it clear back in 2002, and it's in the current cookie RFC. It would be easy to use. Oh, but let me set up a question. How many of you have had to fight the
battle of whether or not HttpOnly should be on your session cookie in the past year? We got one. If I go two years, do I get any more? Okay. If I could ask, why was it an issue? >> I don't have a mic for you. I'll repeat what you say. >> I think it was a single page app. API. >> Yep. Yep. Okay, so let me repeat that: single page application. So, a JavaScript front end, and as most of us are aware, that makes a bunch of API calls back to the server. We need to authenticate those API calls, because that's where all the good stuff happens. And so the developers, in their mind, say: I need to have that cookie so that my JavaScript-initiated request can send the cookie with it. Is that actually the case? No. Thought exercise. I have a cookie. I have successfully attached the HttpOnly attribute to my session cookie. If I go into my JavaScript console and I type document.cookie, I'm going to get nothing back. That's what the browser is supposed to do; it blocks that access. But what happens if I set up an XMLHttpRequest or a fetch request, and I go off to a URL that is within the scope of my cookie? I can't see the cookie, but the browser knows it's there. The browser knows it's supposed to go along, and so it will be sent, the API request will work, and the front-end code didn't actually have to have the cookie at all. So, you can hopefully win this argument. Now, there are some architectural choices that make this null and void: bearer tokens in the, I always mix this up, Authorization header. In order to add that header to an XMLHttpRequest, you obviously have to have the value; the browser won't do it for you. Now, I would argue that this is a reason cookie authentication is actually superior to bearer tokens, but JWTs and bearer tokens are the new hotness, so good luck. That's a battle. I will say, if they're going to use the Authorization header, you want to store that token in a worker thread. I'm going to skip going through this workflow, but that worker thread isolates the token from any other JavaScript running on the page, and so it can't be stolen through a JavaScript injection attack. However, the JavaScript injection attack can still call the worker thread. So anyway, moving on. Blind session attacks. What happens if I can't read the cookie? Can I still do bad things with it? Turns out yes. We can abuse the fact that the browser automatically attaches that cookie to requests made to that website from other websites. Cross-site request forgery. I'm going to skip this slide.
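Even skipping the slide, it's worth seeing how small the attack is. Here's a sketch of the kind of page an attacker hosts; the site, endpoint, and field names are all invented, and the whole point is that the victim's browser attaches their session cookie to the forged POST on its own:

```python
# Sketch of a forged-request page hosted on the attacker's own site.
# Everything here (domain, endpoint, fields) is invented. When a
# logged-in victim loads it, the form auto-submits and the browser
# attaches the victim's session cookie for victim.example by itself.
ATTACK_PAGE = """
<body onload="document.forms[0].submit()">
  <form action="https://victim.example/admin/add-user" method="POST">
    <input type="hidden" name="user" value="attacker">
    <input type="hidden" name="role" value="admin">
  </form>
</body>
"""
```

Notice there's no script reading the cookie at all, which is why HttpOnly does nothing here; the relevant defenses are SameSite and unpredictable request tokens.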
It's very similar in the attack flow, and that's why in this case the "cross-site" name actually makes sense. But instead of shipping off a cookie or running JavaScript, it's just making a legitimate call that is beneficial to the attacker. Traditionally there have been four requirements to make a CSRF attack work. First, we've got to be using cookies. The endpoint has to use GET or POST: if you try to make a JavaScript request with PUT, the browser is not going to send the cookie with it unless the server has opened that up with CORS headers, but let's stick with the simple case here. There can't be any unpredictable parameters. And the result must be state-changing, because the attacker is not going to see the response; they're just going to benefit from the results of that request. So, if I find an endpoint that adds a user to the admin group of a site, and then I lure an admin to my malicious site, and their cookie gets sent to the server, the server is going to be like, "Yep, the admin just said add this user," and it'll go ahead and do it. That is cross-site request forgery. But browsers are getting really good these days. Raise a hand: the SameSite attribute on a cookie, what is the default value these days? Anybody going to yell it out?
>> Lax. Okay, which sounds bad, but isn't as bad as you might think. Lax means the browser will still send the cookie with a quote unquote normal request, like an image request, or if a user clicks a link, the cookie will go with it. That's what we expect to happen. But anything else, any background request that gets generated cross-site, the browser will not send that cookie. Now, it turns out it's even a little more strict than that. I had a surefire finding, I had my proof of concept running, like, last month. All four of those conditions were true, and there was no SameSite attribute on the cookie. The browser still blocked the request, even though the user was clicking a link. I don't know; the browser is getting really aggressive about that. So unless you've set SameSite=None on your cookie, you're probably secure even without all of those other defenses. My story here is: if you've got limited political capital, don't fight this battle. Make sure they're not setting SameSite=None. Make sure they're not doing Access-Control-Allow-Origin together with Access-Control-Allow-Credentials, because that will break this. But if they aren't doing those things, then it's probably okay; fight another battle. And I feel a little dirty saying that, but that's kind of reality. All right. A quick mention of a loophole: if you have JavaScript injection on your site, all is lost. Cross-site request forgery defenses are designed to protect against the cross-site case. If you're on the same site, all of those things that you might do with cross-site request forgery, you can just do, because you're on the same site. And this isn't really news to people, I think. But a lot of times when we write up or talk about a JavaScript injection attack, we focus on stealing the cookie. And if HttpOnly is on the cookie, we can't steal the cookie. And so the developer is
like, what can go wrong? Well, I can still make requests with the authority of the user within that same website, and I can probably do bad things. Next: clickjacking. I have a slide because I have to have a slide. I haven't seen a clickjacking attack in the wild since 2004. So it exists, but there are really easy ways to defend against it. Again, I don't know how much time you spend fighting this battle, but the X-Frame-Options header has been the favorite. It's actually being deprecated, which was a surprise to me; they would rather people use the Content Security Policy directive frame-ancestors, which does the same thing. I imagine they'll be really slow to actually stop supporting X-Frame-Options, but they want people to migrate. So now, last topic, and then we get to what I think is the fun stuff, device-bound session credentials. One more topic, though: post-compromise. What happens if I steal the cookie? Then what? Well, it turns out, and I shouldn't say "turns out," you're all probably aware, there are a lot of defenses designed to, we sometimes use the phrase, limit the blast radius of the attack. So if an attacker gets the value of the cookie somehow, no pointing fingers, they just got the cookie, what can they do? Or more
importantly, how long can they do it for? My suggestion is that actually drawing a timeline might help in these conversations. You've got login to logout. How many users actually log out? Not very many. That's why we have idle timeouts; that's why we have absolute timeouts. And as we start drawing these out for our developers, for our scrum masters and program managers, hopefully it helps illuminate the picture. We also have JWTs, which I'm just not a fan of. Sorry. The problem with JWTs is that their expiration is hardcoded into the token. So even though the user presses logout, and the token can't be stolen out of the browser anymore because it's been cleared out of the browser, if the attacker already has the value, they can continue to use that token for the full lifespan of the token. Now, you can build in an identifier and then store on the server side that this token's user has logged out and it's no longer valid. And I encourage that. But if you are doing that, why are you using JWTs in the first place? The supposed benefit is that you don't have to do server-side session tracking. Again, my opinion: I'd rather use a cookie. I just think it's better. All right, timeline does
that. And the summary. All right, how am I doing on time? I didn't restart my timer. I have no idea. All right, we've got 15 minutes, just about right. So, let's talk about the new ideas. Okay: how do we defend a cookie, given all of the demands that applications have for using cookies? And I keep using the word cookie; session token, really. Most of these defenses are cookie-based, but the session token is what we're actually talking about. All right. Fortunately, cross-device cookies aren't a thing. If I log in on my laptop and I get a cookie, when I go to my phone, I don't expect that the cookie is there. I don't expect to be logged in. I expect to send my credentials again and get a new cookie. So we can use that to our advantage. We have an expected behavior that gives us a foothold. There was a proposal put out by Microsoft back around 2016 called token binding. If you're at all familiar with this, it basically takes advantage of the TPM, the trusted platform module, that is on more and more devices. The idea of the TPM is that you can ask it to create a key, and you give it an identifier so you can refer to that particular key in the future, and you can say, hey, sign this. The private key resides inside that module, and so it can do the signing, but if it's done right, there is no way to get the private key out. It's a public/private key pair, and the private half never leaves. So the ability to sign with that key is absolutely bound to that device. And so we can implement a protocol where I get a cookie, or some kind of identifier, and then I prove possession using a key that is unique to the website. That TPM can hold oodles of keys; that's not a
limited resource. So for every website I go to, I can ask it to create a new key, and then I can use that to prove that, yep, this request came from something that has access to that TPM. So the Microsoft proposal, I think they thought they were rather clever, but unfortunately it didn't work out. Sorry, spoiler. They said: the client knows certain values derived from the TLS handshake, the exported keying material, and the server knows them too, and attackers don't. We're pretty confident that TLS works, that the two endpoints can do that exchange and derive those values and nothing in the middle can. So why don't we use that as our verification? The proposal here, and words are hard, we have pictures: the browser initiates the TLS connection, and there's a TLS extension it sends that says, "Hey, I'd like to use token binding." The server says, "Great, I know how to do token binding," and responds. The browser handles all of this: it has the TPM generate a key pair and submits the public key to the server, which then stores it with the session. And so then when I make a request, we've got a new TLS session, but both sides, maybe, let's pretend, know those values. So the browser can sign them with the device key and send that along to the server, and the server can use the public key to verify the signature and trust that, if the right values check out, only the trusted client could have sent this request. So this is kind of beautiful, right? It's very little extra work for developers. The problem is this last box. The server needs to read the session content, and then it needs to get at the TLS material associated with the connection. Think back: if you're a website
developer, or maybe a web server developer: how far apart are those two bits of code? Like, I write a lot of websites; I never look at TLS internals. And so the overhead of somehow creating an API, an interface the application code could use to get access to those TLS values, which it needed in order to do this verification, was just too wide a gulf to cross. People kind of liked the idea, but it just died. All the major browsers, I think even Edge, have stopped supporting the protocol. So, sorry to waste your time, but: Google has come along and produced a similar proposal, and they're trying to get it traction. I don't think it's perfect, but I think it's good enough that it should get traction, so I'm here talking about it. Again, words are hard; let's talk about a picture. Oh, I've got to say one thing. In this proposed standard there are some disclaimers. Google recognizes that if the attacker is on the hardware, if they've still got access to your laptop, they can possibly call the TPM, and they can probably call the API in the browser that is going to call the TPM. So if they're still there, this isn't going to work. But if they're still there, you've
got bigger problems. So that's disclaimer one. Number two: if the attacker has machine-in-the-middle position to look at this traffic as it's being exchanged, you've got bigger problems, because they would have seen the plain text, the password, go across. The cookie could be sniffed in that case too. So yes, those are valid attack vectors; they're not worried about them because we're just at a different point in the attack phase. So, device-bound session credentials starts after authentication. The user has submitted their username and password, and two-factor authentication, of course, and then as the server is building the session, it sends a header that looks like this, this Sec-Session-Registration header. Okay, so this is a response header. It specifies the signature algorithms that are supported. Currently Chrome, which is the only browser that implements this, won't actually accept RS256; it's ES256 only. Anyway, trivia. In the response header there's a path, and this is a web endpoint, we'll call it an API endpoint, that the developer has to implement. And then there's a challenge with a random string, and there's an authorization with another string. I haven't figured out why there are two yet, but there are two, so we'll watch how they flow through. The developer also continues with the normal session building. You still use a traditional session identifier, but it has to be a cookie. Okay. So, oh, there are all the things I just said. I will say, in at least the blog posts that are describing this, they're suggesting that you could continue with a long-lived session. So this could be completely normal, and if a user visits your site with a browser that doesn't support DBSC, then it's just business as usual. That's probably a good thing; it will ease the transition path. But eventually, we'll want that to die. Okay. So, now we're in the middle square. The browser has received the response header, and it goes off and uses
the TPM the same way, more or less, as token binding did. The response header included a challenge, and for the longest time when I was reading the spec, I thought the challenge was the thing that got signed by itself. Turns out it's not. What the browser actually does is build a JWT, a JSON payload, and it sends that off to the TPM to get signed. The thing that gets sent back to the server is this signed JWT, signed with the private key that only lives in the TPM. And that looks like this. So, in the previous response the server said you start your session at this endpoint, and so we POST this JWT, this is the payload of it, to that endpoint, and the developer has to have written code to do something with this JWT. What they're supposed to do is take the public key, which, oh by the way, is right here in the payload, and use it to verify the signature on the JWT. Now again, network sniffing is out of scope; they know it's vulnerable to that. Normally, sending the key that verifies the token alongside the token would feel like a bad idea, but in this case we're okay. All right. So if that validates, and there's no reason why it wouldn't, now the server knows this session is associated with this public key, and it tucks that away. Again, just pointing out: we've got that same challenge here, and we've got the authorization here. I don't know why there are two. And there's the key. All right. So the response to that session-start endpoint looks like this. There's stuff in there that I'm not sure really needs to be returned, but it is. The main things, I've got boxes here: you've got that same identifier, don't know why; you've got the refresh URL, this is key, so we've got a registration URL and we've got a refresh URL; and then we've got a scope. Part of the specification is that the server can
tell the browser which of my endpoints do I care about this session token. So you could like exclude slashstatic and then if the browser loads slashstatic uh the browser will know not to send the cookie to that particular endpoint. I don't know that it would hurt if it were but you can exclude it. Um and then the second thing is the endpoint tells the browser the name of the session cookie, right? uh because the browser is now going to take on the responsibility of keeping that cookie refreshed. So it's following the what's become somewhat of a standard. You issue an access token, you issue a refresh token, right? And the refresh token lives longer and and
typically it's been your client side code that has to keep track of or detect when that cookie has expired and then go get a new token, right? the browser is going to start doing this, which is a weird mishmash of responsibilities to me, but it it provides some benefit, so maybe it'll work out. So, when the browser realizes that the token is expired, and I I haven't seen this in action for reasons I'll get to, so I'm not sure if that's purely timebased or if it's just waits to get a 401 unauthorized error. Uh, but somehow it detects that. And so again, the browser says, "Oh, my cookie is expired. I'm going to automatically call the
refresh endpoint. I'm going to get a new challenge." That challenge gets sent back to the browser, it gets signed by the TPM, and it gets posted to the refresh endpoint. The server takes that public key and verifies the signature. If it matches, the server issues a fresh session token. Now, the session token is going to be quote-unquote short-lived in this environment. The examples are generally like 10 minutes. I could see that dropping down to like a minute, but regardless, 10 minutes or less is pretty much too fast for an info stealer to grab that cookie and go sell it, because the cookie is
still going to exist, I believe (I'll get to some of the challenges in a minute). Like, if I go to developer tools, I think I'm still going to be able to see that cookie; it's just going to expire really quickly, and the client-side application won't have to worry about refreshing it. So we now have a cookie that only lives for a few minutes, and a refresh mechanism that is tightly coupled to the device that was used to authenticate to the server, and that pretty much renders the info-stealer attack void, right? It doesn't work anymore. And that's our goal. Okay. So, tips. The RFC's out there.
There's at least one website that tries to let you see this in action. It's written by a Google employee. It's a little flaky; I think it's got some backend issues, and it crashes a lot. But when it works, you can capture some of these exchanges. Keep in mind this is an in-progress standard, so you have to set four different flags within Chrome in order to get it to respond to the headers. Also, the website you're talking with must have a valid HTTPS certificate. One of the other rules they're implementing with DBSC is that if the connection isn't encrypted, the cookie does not go out and the headers are ignored.
So you can't really test locally, right? You've got to spin up something and get a Let's Encrypt certificate or whatever. That has to be there. And then the other wrinkle is that the refresh requests don't show up in the developer tools. So if you're trying to debug your server, you've got a challenge. I've gotten some of the requests captured by Burp, but some of them aren't, which is probably an error in my server code, but I don't know. Now, you can use net-export, which is built into Chrome, and it basically logs a whole slew of network information to disk; then there's another tool, hosted on AppSpot, that will parse that mess of
data and present it in a way you can read. There's kind of too much data there to sift through easily; I mean, you can find what you need, but it's not user friendly. Okay, I'm about out of time. Challenges that I see for adoption. Like I said, I spent probably only a week, but still a week, trying to implement a server that would get this DBSC exchange to complete, and I failed. So it's fiddly. Eventually people are going to get it to work; I'll get it to work. At that point, that code really needs to be made public. I don't believe it's practical at all if we go to a developer and say, manually implement
these endpoints. I don't think it's going to happen, right? Not that they can't; it's just that if we can't get them to add a response header, we're not going to get them to do this. So it's got to be incorporated into a framework, or at least into reference implementations that we can just cut and paste, right? Java, Python, Ruby, whatever. I believe that needs to happen. I'm still struggling a little bit with having the browser do my cookie management. I guess that's something I just need to get used to. There's no reason I can think of that it shouldn't, but it's a different model, and developers and application architects are going to have to get used
to that. And then I've already gone over the problems with debugging. So, that's it. We've got the QR code. The slides are hosted on my website, and we've got LinkedIn and email. I'm happy to field questions now, or later via other methods. Thank you. >> [applause]