
Welcome to my talk, "Does the Fog Ever Stop? Testing Without All the Answers." A brief intro about me: my name is Aaron, I'm Welsh, and I'm a pen tester at KPMG based in Leeds. I'm a very big web application fan. I've tried to keep the web app stuff light, but towards the end we'll let that bit loose. And, as you'll see throughout the slides, I'm a very big House MD enthusiast.

So, the fog. In this talk I'm going to explore what it's like starting your career as a tester. Anyone here today who is working towards joining the industry, or has just started, will probably share my experience of being clueless as to what is expected of you. As you get closer to your first test, you may receive your first scoping document, see a couple of URLs, a general overview, three user roles, and a network diagram which could very well hold the secrets to the universe, and start to think: how do I actually test this from start to finish? This may be simple to some, but at the start we won't always know what we are testing inside and out. While this can be daunting, I'm here to pass on advice and guidance I've received, tie it to real examples from tests I've conducted within my first six months at KPMG, and give you an idea of where to turn when the fog starts to form.

So, how do you clear the fog? First: learn the fundamentals. This has probably been hammered into you by university professors and testers in the industry, but most people interpret it as knowing your tools inside and out, which isn't necessarily wrong, but it's much more than that. When you start your testing career, you tend to learn on web apps. They're typically easier to test: you can have a checklist put in front of you, and provided you follow it, you'll get decent enough coverage. However, I bet not many of you have been consistently told to understand how a request is actually sent. How many
proxies, reverse proxies, and load balancers does your request pass through before landing on the backend server to be processed? And would you then know how those front-end components change the original request in ways that vary its outcome?

So, on to my first internal, which, for those who don't know, is essentially an infrastructure test where you turn up on site and test most of the client's network. (That isn't the real code name on the slide, by the way; it's just an example.) The client brought up that the thing they were most keen for us to test was their crown jewel, Redacted. I can't tell you what it was actually called, obviously. Is everyone roughly aware of what the COBOL programming language is? Somewhat? If not, it's essentially an old programming language that is still quite common in a lot of legacy applications, namely with banks that can't afford to upgrade their underlying software. Well, Redacted was a COBOL application. From the number of people who said they knew of COBOL, I bet a few would drop off if I asked whether you had ever used a COBOL application, and it would most likely drop to zero if I asked whether you had ever tested one. If you have, fair enough. So, obviously, I had no idea how to test this.
I don't think many newer-generation testers would either. But what I do know is the fundamentals: not of COBOL, of course, but of security vulnerabilities. What you can see here is a page which gives you the option to select a CSV file from a server, which is then sent through their internal email to another user. Regardless of it being a COBOL application, my first thought was: is local file inclusion possible? I tried first by supplying an arbitrary file name, and lo and behold, we get back the full path the application is referencing. As we already had access to the server for some other test areas, I created a test.csv one directory up. As you can see here, I attempted LFI with old reliable ../test.csv, and we don't get the same error message as before, meaning LFI was achieved. Now, some of you may be asking: what is the risk? And, like a lot of the things I'll show you, the risk is less important than how cool it was that the fundamentals could be applied. The impact wasn't much, as it would only accept CSV files, and this COBOL application didn't offer the same methods of breaking out of the file extension that others do. But it's still cool, and it shows you exactly why it's good to remember your fundamentals.
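To make the idea concrete, here's a minimal Python sketch of the kind of naive server-side path handling that makes this possible. The base directory and file names are hypothetical, not taken from the actual application; the point is simply that a ../ in user input walks out of the directory the application intends to serve from.

```python
import posixpath

def resolve_report_path(base_dir: str, user_file: str) -> str:
    """Naively join a user-supplied file name onto a base directory,
    the sort of handling that enables path traversal."""
    return posixpath.normpath(posixpath.join(base_dir, user_file))

# Intended use: the file stays inside the reports directory.
print(resolve_report_path("/app/reports", "monthly.csv"))  # /app/reports/monthly.csv

# Old reliable: ../ escapes the intended directory entirely.
print(resolve_report_path("/app/reports", "../test.csv"))  # /app/test.csv
```

The fix, of course, is to resolve the final path and verify it still sits under the intended base directory before using it.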
Now, another point: document, document, document. While this may be obvious, and I promise I won't spend too much time on it, please document everything. Not only so you have something to look back on when you encounter something you've not seen before, but to reinforce your learning. I find personally that documenting a vulnerability or an application behaviour I've not encountered previously, written in a way that lets anyone reading it understand it too, makes me remember it and apply it to future scenarios far better than keeping some ratty HTTP request that vaguely shows some kind of functionality. On the plus side, this also naturally builds your consultancy skills, as there will be times when you need to explain findings to clients on technology you won't have worked with for long. Spend the time at the start documenting what you have and why. I assure you, as you go through your careers your time will dwindle, and you will miss the days of having nothing on.

That takes us to our next example: a hardware test. Recently I went to Chester to test some conference room equipment, the Poly TC10 and Studio G9. Essentially, they are mini Teams computers that you use to join meetings in big conference rooms. This is both the best and the strangest type of test: on one hand, we are trying to hack a conference room; on the other hand, where do you even begin? Luckily, I was equipped with both extensive notes and a CTL. So we began the test. We ran initial scans, which turned up very little, and in general the hardware was pretty locked down: the client had started these devices from the lowest privilege, stripped everything off, and slowly built them back up until they were usable. But that was only until we got the ball rolling. The TC10 had an admin login screen, as you can see, and after doing some research we found that the default password was a combination of a preset string and the last couple of digits of the device's serial number.
Fortunately, we now had a finding, default credentials, saving us from an empty report. However, this is where it got cool. A small interjection for some quick context: I'd recently attended the Cyber Scheme CSTM training and subsequently the exam, where I had to learn how to set a manual IP address and understand in what context that is useful. I also seem to remember saying that we would probably never need this and would forget it as quickly as we learned it. Anyway, as you can see from the photo above, the device's admin panel showed the LAN information of the TC10 and G9. From this, and some minor documentation, we understood that the G9 was connected to the network and assigned an IP address through DHCP, whereas the TC10 was set one through link-local, with the IP address shown in the admin panel. Suddenly, I remembered the notes I had made all about this. I brought up my Obsidian vault and all the relevant documents. Now, this is the cool part. The TC10 was originally plugged into the purple port in the image; obviously, it's not now. We plugged directly in, changed our IP address manually using my notes, rescanned the device, and a brand new port, 514, had opened. Now, to dampen expectations, we didn't get a shell. Some of your keen eyes might know 514 as rsh, a predecessor to SSH; here, though, it was a logging port, which isn't as interesting. But it did have considerably worse TLS/SSL security, giving us more findings to include in our report, something which wouldn't have been possible without my notes. It also made for a very cool testing methodology section, which can usually be a bit vague.

Now, on to the next and last point: learn from not knowing. This title might seem awkwardly worded, and it is; I couldn't think of a better way to put it. What I mean is that you learn the most from not knowing something and catching up. You might have experienced this when you've had a bit of competition with your peers in previous jobs, maybe
in sports or in school, something like that. When you notice the subject is hard, or the people around you are simply more experienced and proficient in the task at hand, you feel completely out of your depth, yet you learn so much faster. Testing is no different. This is where the human element comes in: not knowing things is expected within the world of testing. As a matter of fact, it's something we look for in candidates. It's no good pretending you know something. Firstly, because you won't learn if you don't reach out for help; and secondly, because it will make for some very awkward client calls when they push you for explanations. So, this brings us to another redacted on-site test.

Four months into my time at KPMG, I was sent on site to a large bank to test a web app. This being my first solo on-site, I was a bit nervous, not because I wasn't confident in my abilities, but because you simply don't know every scenario that could present itself or every question the client could ask. While in hindsight this doesn't seem like a big deal, it can lower your confidence a bit. Regardless, I was excited. Now, for some subtle foreshadowing: on the 14th of August, we were sent a CDS-wide email pertaining to HTTP/1.1. When an email like that is sent, you tend to pay attention. I looked a bit at request smuggling, one of the vulnerability classes affecting HTTP/1.1, felt a little out of my depth, and left it for the future. A week later, I arrived at my first solo on-site. The testing was standard and the scope was small, very small, revolving around only one API call. Regardless, one thing caught my eye: HTTP/1.1. In general, most web applications still utilize it; its presence is widely accepted, but it does carry inherent risks. To give a brief explanation (I've jumped a bit ahead here): HTTP/1.1 has multiple ways of defining the end of a request. Usually, you see this
with the Content-Length or Transfer-Encoding headers. Having two ways of denoting the end of a request means that if the front-end and back-end components of the web application disagree on which to use, an attacker can smuggle an additional request past the front end, which can potentially lead to the hijacking of other users' requests, among other things. Now, this oversimplifies it tenfold; there are far more methods of exploiting it, and plenty more outcomes than just hijacking a request, but that is the general gist. Being new to the idea of request smuggling and parser discrepancies, I did what most people would do: run every request smuggling scan available to me. Lo and behold, I got some results. Not of request smuggling in general, but of HTTP/2 vulnerabilities relating to how the application was downgrading requests. Like I said previously, most web applications still utilize HTTP/1.1, which may be due to legacy components that can't be upgraded, or similar. So the front-end servers will accept HTTP/2 requests, downgrade and reformat them into HTTP/1.1, and forward them on to the back end to be processed. The problem the scan returned related to this insecure downgrading process. This felt like one step beyond request smuggling in general, making me feel even more out of my depth. So I took my evidence for the day, finished off the rest of the testing I had planned, and went home. Feeling motivated, I decided to dive into the deep end and become as proficient as possible in these vulnerabilities. I spent the next three days and nights of the test researching HTTP/1.1 and HTTP/2. While I felt completely out of my depth compared to the entire surface of the vulnerability class, I was progressing extremely fast, learning enough to actually apply it to the test at hand, discover an open redirect vulnerability from it, write the report, and then hop on a call with the client's technical team to explain the entire vulnerability and how it applied to their application. I can
assure you I was nervous heading into that call, but as soon as I came out of it, I felt like an absolute expert, for a short while at least. For those who are interested in how the downgrade was actually exploited, it came from the following. Essentially, HTTP/2 uses different pseudo-headers, which are then reformatted into an HTTP/1.1 request; the example you can see may not be true to this exact application, but it gives a good overview. As user input wasn't sanitized, instead of inputting standard data I put an entire URL into the :scheme pseudo-header, which is usually just used to denote HTTPS, HTTP, any kind of protocol. As a result, we got an open redirect which, depending on which intermediaries cache it, could enable cache poisoning attacks. They did have a domain-fronting blocker, which essentially meant the website I put in wasn't whitelisted, so it didn't actually let us redirect; but that protection would not necessarily be present on production applications, so the risk was still there.

With that last example, we come to the end of my talk. I hope you got a useful overview of how you can apply these concepts to your own testing, as I felt that a lot of these talks hammer these points without directly referencing or proving how they can be applied. If you have any questions, I'll be happy to answer them. My LinkedIn is also on screen, kind of; I don't really use it much, but feel free to add me. So, thank you.

>> [applause]
>> Thank you, Aaron. Do we have time for a question or two? Anyone, any questions? Please make them really hard. Otherwise, I'm going to start quizzing him on prototype pollution, and he won't like that.
>> Hey, on your very first one, with the COBOL testing.
>> Yeah.
>> That company, did they have active COBOL developers? Because I know how old
that technology and language is.
>> They might have gotten upset when the CTL said that they must be employing half of the remaining COBOL developers.
>> Any more for any more? Silence is golden. Lovely. Well, another big round of applause for Aaron. Thank you.
>> [applause]
>> Thank you very much.