
So, I am Josh Farwell, a.k.a. Fondue. I only mention the hacker name because there are people in the community who, like, can't seem to remember my real name, but they can remember Fondue. So that's what we're going with. I'm a security engineer at New Relic. I split my time between a few things. I work on Linux security. I work on building visibility tools. And I do some purple-teaming, which is what I call it when I do red-teaming but I'm looking at the source code. And I have a former life as a Linux sysadmin and an SRE. And that really informs, kind of, my approach to breaking stuff. I really like finding implementation problems, I like finding
bad practices. Finding bugs is cool too, but, yeah, people make some cool mistakes if you know where to look. And, yeah. I really like breaking computers. I like building stuff, and I'm kind of new at breaking stuff. I think I've only been really focusing on it for about a year. But I've already had a lot of fun and found some cool stuff. We're going to talk a little bit about it. So, can I get a show of hands: Who here is familiar with the basics of how Docker works, in Linux? Everybody here is familiar? Okay, cool. We're gonna blow right past this then. So, TL;DR: A container is basically just a set of namespaces that the kernel provides
to a set of processes. If you're interested in looking into these, these are the names of things that you can look into. I think the big ones are: 'cgroups,' which contain processes in a group and give them resource limitations. So, that's memory, CPU, and I/O limitations. Not disk space, just I/O. Network namespace: which is, basically, the Linux network stack lets you pretend like you are running your own interface. And then it transparently will forward ports from the host into the container namespace. And the virtualized filesystem, I'll touch on that a little bit. So, it gives you a layered filesystem with ... Yeah, I'm sorry. It's basically a 'chroot' with more features. With a 'chroot' you would put that filesystem in a directory on a Linux host; With containers,
we often do more tricky stuff. We have OverlayFS from Docker. I've seen people use QCOW. I've seen people use hard links. And I've seen people use LVM volumes that they add to. What is Docker? So, Docker and containers are not synonymous. Docker is a set of tools around Linux containers. And it is a Linux daemon that will start up and shut down, run containers, and configure them and do stuff for them. It is a command line client. Which is basically an HTTP client. And it does image management with overlay filesystems, including tooling around storing and pulling those images. And there are many pre-packaged images available in Docker Hub. And I think the big selling point with Docker is the ease of use and the velocity. You can
tell a developer, "Hey! I have put my open source stuff in a Docker container. It's already set up for you. You don't have to do an installation. You don't have to manage dependencies. You don't have to configure anything. You can just do 'docker run mything' and it'll just run." This helps get developers engaged with our products. And it's really, really, useful. But, doing things that way has some implications. And we're gonna talk about why that may not be a great idea. Docker prides itself on being very easy to automate. All of the things are done at the command line. You can really get rolling with just a few Bash scripts. And there's a lot of tooling around Docker for orchestrating production environments.
So, like, tooling around getting the containers to hosts and running the daemon on multiple hosts and managing clusters of services. Docker is historically a pain point for security. The Docker project, kind of, made some mistakes in their early days. Some issues with the idea of, like, 'access as authorization,' I've heard people say. Meaning, they don't put authentication on important APIs. Such as, the Docker daemon API and the Docker Registry API. Segmentation and access control are challenging in Docker environments because of how dynamic everything is. You have a service, and you have, like, a big cluster of hosts. Sometimes those clusters are in different cloud environments. Or different geographic locations. And those services can spin up on any host in the cluster.
And, so, managing network segmentation in a traditional way, the way that I was used to, back in the day when I was a Linux sysadmin, is very difficult. Modern implementations have fixed some of these issues. There's better access control these days. There's better control over container processes. And there are better controls for Docker image management with Docker Notary. But it's still a pentester goldmine. The kernel is a huge attack surface for Docker, still. Alex talked about that, I think, a little bit in the previous talk. And container escapes and attacks on the kernel really have a high impact. Docker registries and image management are not handled well by default. By default you download a registry and you start running it. And there's no authentication. There's
no authorization. There's no hookups. You just ‘push’ stuff to there and ‘pull’ stuff from there. That can be problematic when it's your package registry. And developers will Docker ‘pull’ anything. You can trick developers into Docker ‘pulling’ things that they probably shouldn't ‘pull.’ And people build automation around these insecure practices. They build processes around these workflows that have problems. People are still finding new issues with Docker. Particularly, this summer, I think Alex demo'd some issues with Docker for Windows. And it seems like, you know, the Docker project will focus in one area of security, do a really good job, and then kind of move on to the next thing. And they still haven't gotten to everything, yet.
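The 'access as authorization' problem he describes is easy to check for yourself: if an unauthenticated GET to the daemon's '/_ping' endpoint succeeds, you effectively have full control of that host. A minimal sketch, assuming a daemon bound to the old default TCP port; only run this against hosts you're authorized to test:

```python
import http.client

def ping_daemon(host, port=2375, timeout=5):
    """GET /_ping on a Docker daemon's API. A 200 means the daemon
    accepts unauthenticated requests -- i.e., code execution on that host."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/_ping")
        return conn.getresponse().status
    except OSError:
        return None  # closed port, timeout, or unreachable host
    finally:
        conn.close()
```

The Python Docker API client he shows later wraps this same HTTP API; its ping call is doing essentially this.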
Cool. We're going to talk about the basics of using the client. Just for some context for the stuff I'm going to do later. Docker 'build:' What we do is, we can build Docker images on our local system using Docker files, which I will get into in a minute. Docker 'build' is the command to do this. Docker 'ps' will list things on a Docker host; list the containers that are running. Docker 'pull' will pull an image from a registry. In this case, we're pulling Ubuntu from the Docker Hub registry and this is actually shorthand for 'library/ubuntu' as the path. It's also not great that they do this. And, Docker 'run,' the '-it' is basically an interactive TTY. So, that will launch it
as a shell that you can actually work with at the command line. If you have a running container and you want to do stuff in it, you do Docker 'exec -it.' And then pass it the '$CONTAINER_ID,' which is a hash. And Docker 'commit,' will actually allow you to save something that's in a running container to an image. Which is really, really, cool. Tags: Docker uses tags to manage images. An image doesn't really have a name, it has a tag. It can have multiple tags. This is how you change, er, add a tag to a given container. And Docker ‘push' will put it into a registry. And you can think of a registry as, basically, a package repository in Linux. It's not any different. Like, you pull down
binaries and then run them. Linux folks have a lot of tools and ideas around how to do this safely. Namely, they put signatures on things and they guard those keys with their life. Docker folks don't do that, and it's exploitable. We'll talk about Docker files really quick. This is an example of a Docker file that I wrote for ... I think I was doing some SDR stuff with this. I think the important things to note here are the 'from,' that is the base container that we are importing. So, we import that image and then we build on top of that image. And, as you can see, it's basically a script. It's very similar to, like, 'config' management
scripts or a Bash script that you might run to bootstrap a system. Right here I'm installing a bunch of packages and down below, we will have one or both of two entries, called 'CMD' and 'ENTRYPOINT.' These are important. This is what the Docker container ... 'ENTRYPOINT' is a binary or a program that Docker runs when the container starts and 'CMD' is the default argument, the first argument that you pass the container when you wanna run stuff. So you can have it ... If you have a container that's for a service, a lot of the times, people will put in 'CMD' the instantiation of their service and then they don't put anything in the Docker 'run,' you just do Docker 'run container,' it knows what the 'CMD' is and
will run it. Workflows: And I'm going to talk about this because I think it's important to understand the context of how people are using Docker. We have two styles here. These are just example workflows. YOLO style: So, a developer's on their machine, they're working on their service, they don't really want to mess with writing a Docker file so they just, you know, get a shell in the container, do stuff. And then hit 'Docker commit,' save the image, and then ‘push’ it directly to production hosts and drop mic -- Done. Right? You can do this with Docker, it works. The reason it works is because Docker container running on your local system is going to, basically, run the same on the production
system with some caveats. The runtime configuration does matter. Particularly if you're doing, like, volumes, trying to pass in host filesystem resources to the container. But in general, like, you can get away with just committing an image and pushing it out. Not a good idea. "Enterprise-ready" style: Most of the time, people write Docker files into their source code. They'll put it, like, in the root directory for that given service, that given project. They'll push their stuff into their source code repository, like Git, or GitHub Enterprise. That will kick off a build using Git hooks, as an example. The build, a lot of people use Jenkins for this ... Jenkins will pull in the source code,
build it, and then it will push the container image it builds into a container registry. And then the build will kick off deployment with the container orchestration layer, whether that's Docker Swarm or Kubernetes ... Kubernetes is not exactly Docker but it can run Docker containers. There are plenty of others. People also write their own in Bash or Terraform, whatever, using systemd to run containers and services. There's lots of different options. But that's generally how it works. So, CI builds it, pushes to registry, kicks off a deploy, the deploy pulls that image back. I'm getting to the good stuff. Oh, here! Here's the good stuff. Let's break some stuff. One of the things that I've found in Docker environments is that people are still making
the same old mistakes. The Docker host daemon is one of the first places to look if you're doing an engagement with someone you know runs Docker. Right? It is an HTTP interface over a Unix domain socket. This is weird. It's kinda weird. The reason they did this was because it used to be bound to a TCP socket, port 2375. And they do have authentication as an option, but it didn't come on by default. And they found that people don't actually use it. So, they moved it to '/var/run/docker.sock' and this is what the local Docker client uses to talk to the Docker daemon. '/var/run/docker.sock' is usually owned by the Docker group and it has read/write permissions
for the Docker group. So, if you put a user in the Docker group they can just do stuff with it. This is problematic and can result in privilege escalation problems. As you can see right here, the Docker socket has rights to do a whole lot of stuff on a host. It can basically do whatever it wants if you tell it to. In this case, we're telling it to run in privileged mode, which means running as root. And we are mounting '/' to a directory called '/hostroot' inside the container and then we're going to 'cat' '/etc/shadow' from the host. Now, this is great as a local privilege escalation bug. A lot of people do configure their systems like this. And people don't realize what they're doing. I've seen folks, like, put in the sudoers
file, you know, 'sudo docker run *' with no password. And then they mask other things and make you use a password for them, when this is exactly the same as running 'sudo' without a password. The jury is out. Some people disagree on whether or not that's a good idea. I, personally, don't think it's a good idea. This is how you find those things on your network. So, I'm actually running Nmap here. This is a Python library that pulls in Nmap and gives you output as JSON, just for convenience. And what we're doing here is we're just scanning for port 2375 on a given network. And then we're using the Docker API client. The cool thing about this is that we have version 'auto' as a flag here on line 29. That will
work for all of the older versions of the daemon. The library will figure out which API call to make. And that ping, the client 'ping,' if you can successfully do that and you get a return '200,' that means you can do code execution on that host. It's done, done deal. Right? So, this is how you look for those. Run this in your next engagement, see if you can find stuff. I may or may not have run this on the internet. I may have found stuff. I'm going to touch on Docker for Windows and Docker for Mac a little bit. This is what most developers use on their local machines. Alex went into some great stuff with Docker for Windows. I believe he was demonstrating
CVE-2018-15514, which is Stephen Seely's work. Both of these attacks ... There's also a great talk at Black Hat in 2017 about attacking the TCP socket using LLMNR host rebinding attacks. And these are both just attacking the Docker API. If you're interested in doing deep persistence inside of developer machines, the Black Hat talk actually talks about some post-exploitation stuff that's really great. They essentially mount the Linux VM that's running on these systems, they mount the root there and do an infection of the VM. And then continually start infecting containers in the local registry. Which is pretty sweet. And these do run as VMs. They originally ran as VirtualBox VMs, which is a poor choice. They now use native operating system APIs to run as a VM.
And, yeah, Docker for Mac actually seems pretty good. I've poked at it pretty hard, I use it. The isolation between the VM and host seems fairly robust, the network isolation seems fairly robust. And you can't really mount anything good on the OS X host into the container. It masks '/System' and '/Library' and all of those good things. You touch the host with your own UID and GID, so if I'm running the Docker daemon as Josh, that's the only rights that the containers will have on the host filesystem. Unlike Linux, right? Docker Registries: And this I'm a little ticked off about, to be perfectly honest. And the reason I'm ticked off is because Docker registries are problematic by default. They don't really
prompt you to set up authentication, or at least they haven't until very very recently. And Docker will actually sell you a version of the registry that uses Docker Notary, has the keys all set up, they'll support you in doing that, and it has LDAP integration all set up. But, if you want to do that on your own you're kind of, you are, you're on your own. You kind of have to figure it out. And the thing that I've learned is that engineers don't really have a good reason to figure it out, often. They don't really think about this in the same way they think about package repositories or source code. Their internal threat model doesn't match up.
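The no-auth-by-default problem is easy to verify from outside: the V2 registry API answers an unauthenticated GET to '/v2/' with a 200 (and an empty JSON body) when it's wide open, and a 401 when auth is enforced. A sketch; the host name is made up:

```python
import urllib.error
import urllib.request

def classify_status(status):
    """Interpret the HTTP status from GET /v2/ on a Docker registry."""
    if status == 200:
        return "open"           # unauthenticated access allowed
    if status == 401:
        return "auth required"  # registry enforces authentication
    return "unknown"

def check_registry(host):
    """Probe a registry, e.g. 'registry.example.com:5000' (hypothetical)."""
    try:
        with urllib.request.urlopen(f"http://{host}/v2/", timeout=5) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except OSError:
        return "unreachable"
```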
And so, if you go and kind of let them run rampant, they'll just set these up and you can just ‘push’ to them. And there's no signatures on any of the images. They're hashed, so you can compare hashes by yourself, but the system doesn't actually do it for you. Until fairly recently, it didn't support any signature or trust verification of images. And the best part of these: they often straddle corporate and prod networks. Developers need to ‘pull’ down containers and ‘push’ up containers from their local systems, but the orchestration needs to pull containers in order to deploy them, so the easy way to deal with that is to stick the registry on both. And
people do that. It's bad. How do we enumerate vulnerable registries? This is a pretty good 'curl.' This one will work. There's also a version one of the registry; you can check out the documentation for that API. I will show you here. This is what that returns. It just returns like an empty hash there if you hit it on '/v2.' And we want to see if we can push there. Right? What we're trying to do is overwrite an existing image with an image of our own. And this is how we test that. We can tag a local image. 'Ubuntu' is really easy. I have that on my machine. So, we're tagging it with the 'target domain name/some path/some container name.' And we'll 'push'
it. And if you can 'push' there, you can likely push over existing tags. And what I've found is that when you do the 'push,' it will push it up and then it will check if you're authorized. If it's even doing that at all. This is an example of what it looks like to 'push.' As you can see, I'm running a registry locally for testing. And so, the 'docker run' at the top is just me starting up a registry. It's bound to port 5000 so we can check it out. I check to make sure the registry is there. I 'tag' a container with a tag that will push to that registry. And I try it. And this is what it looks like. It pushes up each layer
of the OverlayFS and shows the hash. And then will show the hash for the whole image at the end. And if I want to check to see if it's there, I hit '<container name>/tags/list' and I can see that it's there. And yeah, the other way to take a look at this is to look at '/v2/_catalog.' This will show you just a big list of all of the containers that are in that registry. And you can start looking at names and picking stuff to infect. It does some pagination. And I've run into issues with performance every once in a while. Where, like, a Docker registry will stop responding to me and start timing out. So it may take
you a little while. But if you've got some time, you can just sit in a network and hit this a bunch and get the whole list of stuff. So, we have a registry. We know what tags are there. That's great. What do we 'push' to? How do we know what to 'push' to? 'Cause it's not as simple as just, "Oh, I'm gonna pick a container that says authorization in it. Or pick a service and try to infect that service that I'm targeting." Because of how the CI works and how the images get pushed up and in what order, it's very likely that if you are pushing to, like, a 'latest' tag in Docker that it will get overwritten
by someone else. Someone who is trying to deploy things. This is kind of a representation of when that happens. The orchestration is only going to deploy your new stuff when a deploy is kicked off. So, when a new build has happened and a new image is ready for you. Or when a service is adding more instances it will pull. A new image is pushed to that registry right after the build is done. Right? So, if we are putting in an infected image on the side and trying to get it deployed, and someone goes in, makes a commit, and does a new build, it will push right over your new tag and deploy it. And then your stuff doesn't get deployed. Which is lame.
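For reference, the catalog walk he described a moment ago pages through '/v2/_catalog' using the 'n' and 'last' query parameters. A sketch with the HTTP call injected as a function, so the pagination logic stands on its own; the registry host is hypothetical:

```python
import json
import urllib.request

def list_repositories(fetch, page_size=100):
    """Walk every page of /v2/_catalog. `fetch(path)` returns parsed JSON."""
    repos, last = [], None
    while True:
        path = f"/v2/_catalog?n={page_size}"
        if last:
            path += f"&last={last}"
        page = fetch(path).get("repositories", [])
        repos.extend(page)
        if len(page) < page_size:
            return repos  # short page means no more results
        last = page[-1]   # next page starts after the last name seen

def http_fetch_for(host):
    """Build a fetch function bound to one registry host (made-up example:
    'registry.example.com:5000')."""
    def fetch(path):
        with urllib.request.urlopen(f"http://{host}{path}", timeout=10) as r:
            return json.load(r)
    return fetch
```

Then `list_repositories(http_fetch_for("registry.example.com:5000"))` pulls the whole catalog; given the timeouts he mentions, you may want retries around the fetch.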
Good for us, people use base containers, which is great. So, this is a very common use pattern. It's falling out of favor for a lot of reasons. And this is one good reason for it to fall out of favor. They are often imported in the 'FROM' in the beginning of the Docker file. And what people will do is, the SREs will sit down and go, "Wow, people are using all of this time, you know, just getting their basic containers set up and getting their dependencies put in. We want to make it easy for them, so, we're going to make a 'base' container that has all of the things that they might need and knows the things we want it to do for
orchestration to work right. And everyone will save time and it will be great." What actually happens is that these 'base' containers then just immediately get stale. [Snaps] Like, immediately. And changing them becomes problematic because you have all of these disparate services that depend on them. And you don't know what those disparate services need from that base container. And if we push into a base container as an attacker and push to that tag what will happen is: that when someone does a build on a service, the CI will go, "Oh, okay. That's that base container. I'll pull down that image." They pull it down and then it will add to that image in the build and then push it back to the service's tag. Which means that our exploits
are, like, in the middle of that image. And yeah! This works really really well. And it's really easy to find these containers. You just look for things with 'base' in the name. Like, everyone puts base in the name of their base containers. Particularly, they'll put, like, their company name there. So if you see a, you know, 'company name/baseubuntu,' that's a good one. That's a good one. And the best thing is, you don't need to guess. If you have the time, you can just start pulling down containers from the registry and look in the Docker files. If you open up those containers, the Docker files are probably going to be right there. And you can just
look and start seeing what people are importing. And you can very easily identify a good tag to push to. Or you can just infect everything. There's no reason not to on some level. I think it's really easy to set up a Bash script that just starts pulling down containers and infecting them. And it kind of doesn't matter whether or not a bunch of them get overwritten if a few of them don't. Right? It's a little bit louder, but it would work. We're going to talk about the malware that we're actually putting in these things. Because a lot of our normal Linux persistence tricks don't work very well, I found ... We don't have any 'init.' Or any services running in most containers. Some people do, but you
don't know who's going to and who's not. And you don't know how they're going to do it. So you don't know whether you need to write to '/etc/systemd' or whether you need to write to '/etc/init.d.' It really depends on what they're doing in their 'CMD' and 'ENTRYPOINT.' So, I don't bother with that. We can't do kernel modules either. Because, unless you're doing really crazy stuff, you can't really load kernel modules into the host kernel from a container. And that kernel is the host kernel. Right? And shell and profile injection are finicky. I've had issues with this. It's reasonable to assume they would work. And I may not have been able to just get it right, but, I was
not able to get it to fire things over in the profile. It had trouble. And we could put something in 'CMD' and 'ENTRYPOINT,' you know? Just make the 'CMD' 'run evilthing.sh.' But 'CMD' might get overwritten by the person who runs the container. And 'ENTRYPOINT' might get stomped on by whoever is building on top of your base container. So, not great. I went for infecting Linux software instead. Which sounds hard, it's not hard. Musl and glibc: musl being a libc alternative used in Alpine, which is a very popular container that people use. Glibc being glibc. It's an option to do that. I don't understand those libraries very well. You could very easily, like, identify part of that library that always gets loaded and put your stuff
in there. I just went for the shell binary because I know it runs. And it's simple and not hard to figure out. '/bin/ash' is run in Alpine, '/bin/dash' is what Ubuntu uses. There's actually a link from '/bin/sh' to '/bin/dash.' And you could also do '/bin/bash' which is also very popular. And it's important to note that Docker containers use the '/bin/sh -c' command to run whatever is in 'CMD.' So, it's going to run 'sh;' It has to. It also runs it in the 'build.' So, we're going to look at my infection here. This super 1337 hax0r stuff, oh man! I stole this straight off of StackOverflow. This is just so I can find a process by its name
so that I don't start up multiple versions of my shell. And here's the good stuff: Bam! See if you can find it. Do you see it? Line 253. That's it. It calls 'popen.' That's it. It's not hard. Like, this is going to spawn a process called '/usr/bin/watchdog,' we're going to store a file there. That's bad, that's going to be our reverse shell. And every time 'sh' runs it's going to check and see if it's already running the 'watchdog.' If it's not, it will run it. It's so easy. This is not hard. This is just in the main command loop. And yeah, this took, like, two hours to figure out. It's not hard. Now, we can be a lot sneakier than this, like I mentioned. We can put stuff into 'libc,' but
honestly, I don't know that it's that important most of the time. People don't look inside their containers at what processes are running. And if they do, it's very easy for them to go, "Oh, there's a process running called 'Watchdog.' Man, those SREs are really looking out for me. They've got a 'Watchdog' in my container. That's perfect." Particularly since they've imported this base container from somewhere they don't know. And then they don't know what's running in it. They didn't look at the Docker file. They just imported it and they're using it. So, yeah, be sneaky if you're doing this for real. But if you're trying to demo this for engineers, don't bother. Just do this. I'm using Hershell as a reverse shell. I mention this simply because I wanted to give them
credit. 'Cause it's a sweet program. And because it uses SSL certificate pinning which I thought was really cool for running in a production environment. Making sure that no one else is going to swipe my shell, right? Let's do a demo of how this works. You all see that okay? Yeah. Totally. Alright. So, first we're going to demo putting in the exploit into a container. Right? So, we've got our folder here. We've got 'evil.sh,' which is our prebuilt infected Dash binary. I'm not going to make you sit through the compilation. And we've got Hershell which has been built with our C2 all set up. Oh, and I've gotta run the C2. Yeah. There we go. Super fancy C2.
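Before the demo, the check-then-spawn logic he added to the shell's main loop is worth seeing in sketch form. This is a Python rendering of the idea, not his C patch: the '/usr/bin/watchdog' path and process name come from his description, and the /proc scan is my approximation of the process-by-name check:

```python
import os
import subprocess

def is_running(name):
    """Return True if any process's comm name matches (Linux /proc scan)."""
    if not os.path.isdir("/proc"):
        return False
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() == name:
                    return True
        except OSError:
            continue  # process exited between listdir and open
    return False

def ensure_watchdog(path="/usr/bin/watchdog"):
    """Spawn the implant once per host, the way the patched shell does;
    his C version does this with a single popen() call."""
    if os.path.exists(path) and not is_running("watchdog"):
        subprocess.Popen([path])
```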
Alright, so, we're going to do this. 'Docker run.' And we'll just start with Ubuntu. Alright. Oh, I did forget something important there. So, we're going to use '-v' to mount our current directory into the container. We'll go there. And we'll copy this in. And copy this in. And oh my god, we're done. Done deal. Infected. Holy crap. We're going to save this. Right? Come over here. I see this running container here. That's us. I'll stretch this out a little bit. Here we go. And we'll do 'docker commit.' Doink. And we'll bind it to our ... This is our, remember our registry that's running, right? And 'company name/base-ubuntu.' And we'll do 'docker push.' There. Done. So, that was really easy, right? Took a long time.
[ laughs ] Very very, uh, sophisticated technical stuff there. Not hard at all. And I'm making that point because, like, I think that sometimes people think that these attacks are really, really hard to do and they have to be a super genius to figure this out. You don't. You really don't. This is so easy. And this will show up in the Docker history. And if you're doing IOCs for this, look through Docker history. Catch Alex's talk if you are interested in how to do that. It will show up there. But this won't show up in the Docker file, and engineers assume that whatever is in the Docker file that their CI built is what's in the image. They don't even think about
it. We're going to demonstrate what happens when you run this. And actually, the cool thing is that build will also run our exploit. So, we have this service called 'return 200.' I'll show you what the service does real quick. It returns 200. And we'll look at the Docker file. And this is a very typical ... This is what a Docker file will look like for a service that you're trying to infect. And as you can see, they're importing our base container that we infected here. Right? So, we will do ... Do we have our C2? We got our C2. Okay, we're good. 'Docker run.' Oh man, you're watching me type. [ audience laughter ] There we go. And, as you can see, we're running the container here. And we got shell. Woohoo!
[ clapping in the audience ] We can check out our environment here. Look at stuff. One thing I did want to show is during the Docker build it also emits shells. So, we'll do 'docker build -f Dockerfile -t return200 .' And as you can see, yeah. We're inside of the Docker build. And if you pause processes here. Like if we pause 'apt-get -y update,' which is what the Docker build was running ... Let's see if we can.. Hmm, I think it's 'kill.' Ahh, I can't remember what 'kill' command it is. Um, so, I won't screw it up. But, we can pause it there and the Docker build will just hang out. And if it's, like,
running in Jenkins, that means the Jenkins build will just, like, chill forever and pause. And, that's one way to get a developer's attention is to pause their Jenkins build for three hours while you're poking around. Um, but if you're quick, and if you script it, you can just, like, start hitting APIs, start stealing environment stuff. Often times people will pass really good stuff to their Jenkins builds because their builds need databases for whatever. And yeah, often times the build environment will also be on a cool network segment that might have interesting stuff. So, it's really cool to check out what's in the build environment first. But yeah, so this will get 'build,' it'll get pushed out, and then the orchestration
will deploy it. What do I do now that I'm in? Figure out where you are. So, you might be in multiple places. And I recommend, definitely, using C2 that will handle multiple connections for you. Because you will probably end up in at least two places most of the time. And you may end up in completely different countries. So, figure out A) Whether you're in a container. There are people who have gone over this at length but I just check the host name. Usually if the host name's a hash, I'm probably in a container. And you check out what's running there. Do 'ps,' figure out what service it is. And the other thing I would look at immediately is: look for secrets. Env is great. People
often will inject secrets into shell environments at runtime with Docker containers. This is, like, a very common use pattern. So, just run 'env.' You'll get secrets. There are also container orchestration things that will mount files to the filesystem. So, check out 'mount.' Check out anything. And, definitely do your homework on what people are using at your given target. See if you can figure it out. Looking at their job listings might help. And, yeah. AWS credentials are often really common there. Try and see what you can do with those. And I would poke at the kernel as well, to start. See if they patched their stuff or not. 'Cause if they didn't, you might be able to get host execution really easy that way.
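Those first recon steps (am I in a container? what's in 'env'?) are easy to script. A sketch; the container checks are the common heuristics, and the secret-looking key names are my guesses, not an exhaustive list:

```python
import os
import socket

SECRET_HINTS = ("SECRET", "TOKEN", "KEY", "PASSWORD", "CREDENTIAL")  # heuristic

def probably_in_container():
    """Cheap checks: Docker drops a marker file, and default container
    hostnames are 12-character hex IDs (the hash he mentions)."""
    if os.path.exists("/.dockerenv"):
        return True
    host = socket.gethostname()
    return len(host) == 12 and all(c in "0123456789abcdef" for c in host)

def likely_secrets(environ=None):
    """Return env vars whose names hint at injected credentials."""
    environ = os.environ if environ is None else environ
    return {k: v for k, v in environ.items()
            if any(h in k.upper() for h in SECRET_HINTS)}
```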
But the really good stuff is just by making HTTP requests to freakin' everything. And one thing that I found is that in these environments there's often information that's available for free. And that containers have a lot of privileges to information about the environment. 'Cause they need to figure out what they need to connect to. Because everything is dynamic, they often need to use some other service to do that. Most people call this service discovery. There are open source solutions for this. But often people will build their own stuff. So, go out and try and find this. Do Nmap scans, grab banners, and then just, like, start making requests to things. And go and look at the images that you downloaded from the registry and see if you can identify,
"Oh, that service is running here. I know what the source code is for that. I can look at what it's doing and get it to do stuff for me." Container orchestration APIs, oh jeez. So, people leave these open. Kubernetes has had this and it's been in the news a bunch. This is the same issue as the Docker socket being left open on the daemon. You can do stuff. And people often do not secure these. So, learn how to use them. Learn how to find them, and find them. Marathon is very popular. Kubernetes is very popular. There are many others. Find those. They're real good. Cloud metadata URLs are similar to this. There have been, you know ... check out HackerOne reports about a lot of issues with SSRF getting chained into metadata URL access in AWS, or Google Compute Engine. You can also get at that from here really easily. And I think it's important to note that most things are proxied in these environments. So, the host is mounting ... It is forwarding a port on the host into the container's network namespace, and usually it pushes out or makes available some configuration for some proxy to use to then set up a virtual IP and a DNS name, and forward the traffic from, usually, port 80 or 443 into the special port in the container. So, get used to messing with host headers and trying different stuff. Do your research.
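The scan-then-poke loop above can be sketched like this. This is a minimal sketch, not from the talk: the port list and the example paths are illustrative assumptions about what a given environment might expose.

```python
import json
import socket
import urllib.request

# Ports commonly exposed in container environments (illustrative):
# Docker daemon (2375), etcd (2379), registry (5000), kube-apiserver
# (6443), Marathon (8080), Consul (8500).
COMMON_PORTS = [80, 443, 2375, 2379, 5000, 6443, 8080, 8500]

def scan_host(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            continue
    return open_ports

def probe_api(base_url, path="/", timeout=2):
    """Try an unauthenticated GET; return parsed JSON, else None.

    Adjust path per target, e.g. /v2/apps on Marathon or /api/v1/pods
    on a Kubernetes apiserver.
    """
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return json.load(resp)
    except (OSError, ValueError):
        return None
```

Anything that answers `probe_api` with JSON and no credentials is exactly the kind of open orchestration API the talk is describing.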
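Since most traffic goes through a front proxy routing on virtual hosts, "messing with host headers" can be as simple as the sketch below. The helper name is made up; the idea is just to present a Host header different from the one in the URL and diff the responses.

```python
import urllib.request

def get_with_host(url, host, timeout=2):
    """GET url, but present a different Host header, the way a front
    proxy doing virtual-host routing would see it.

    Returns (status, body), or None if the request fails.
    """
    req = urllib.request.Request(url, headers={"Host": host})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.read()
    except OSError:
        return None
```

Loop this over candidate internal hostnames gathered from service discovery and look for responses that differ from the default virtual host.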
Traefik is one that's really, really popular with folks. It actually works really well and it's pretty secure. But people do their own stuff. See if you can find the container that holds it. And see if you can see what they're doing with middleware. Definitely. Okay, how do I deal with this? Access control and authentication for important APIs is a good place to start. And you'd think, like, "Oh, you know, Josh, this is dumb. People are definitely not going to just run those without passwords on them." They will! They do. They definitely do. And they don't do it because they're dumb, they do it because secrets are hard in dynamic environments. It's not easy to authorize and authenticate the various services that they
have running that need to be able to push things into registries, that need to be able to pull things down. And there's a lot of value placed in this culture on being able to move fast, on not having friction. It's really easy for folks to see authentication as friction to getting done what they need to get done. And I think there's not a great understanding of the threat model and of how this can actually work. So, people don't do it. People don't do it. And, if people are just putting a set of credentials, LDAP credentials, in front of their Docker registry, then you're one set of credentials away from pwnage. If it's still sitting on your, you know, corporate network and that's all you did with it ... I don't think that
you're really, like, really digging into what the threat model actually represents. I think that people should be running these on isolated network segments if at all possible. I think that you should have very strict access control lists. And I don't think that anyone except for the CI, the place where the Docker container got built, should be able to 'push' there. And this is all possible to do. You should do threat models and attack simulations on your CI and on your developer tools. This is something that, like, gets overlooked. People do a really good job of testing their products that go to their customers. But then often the developer tools, because they're behind the VPN, people don't think about them.
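One concrete check worth including in that kind of attack simulation: does the registry answer its catalog endpoint without credentials? A minimal sketch, assuming a standard Docker Registry HTTP API v2 endpoint (`/v2/_catalog`):

```python
import json
import urllib.request

def registry_catalog(registry_url, timeout=2):
    """Return the repository list if a Docker registry answers
    /v2/_catalog without auth, None otherwise.

    An answer here means anyone who can reach the registry can
    enumerate, and likely pull, your images.
    """
    try:
        url = registry_url.rstrip("/") + "/v2/_catalog"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get("repositories")
    except (OSError, ValueError):
        return None
```

A non-None result against your internal registry is exactly the "one set of credentials away from pwnage" situation, except with zero sets of credentials.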
And people don't threat model them. Definitely do this. You will find good stuff. Use Docker Notary. And Docker Notary's not great. It's okay. Docker Notary doesn't actually sign images, as I understand it. What Docker Notary does is it signs a commit to a registry. So, the person who pushes that stuff to the registry signs the commit, and the commit has the hash of the image that they want it to point at. The image itself doesn't have any signature attached to it. So, it's only, like ... The only way to validate it is to ask the registry where it came from, "Hey, is this cool?" It's better than nothing, though. And there are proprietary solutions to this. There may be some ways to do it that I'm not
thinking of. And Docker will sell it to you and give you support for it. But yeah, I think that people, you really need to think about where your images come from and who built them and when they were pushed there. And you should patch your stuff. You really should patch your stuff. The kernel is a huge attack surface and you can do a lot to limit what containers can do. But, unless you patch your stuff, it's not going to help. And patch levels in containers often fall behind. And that's where we ... That's kind of the first step of attacking these things. If you can get code execution inside a container because you were running an out of date version of a library, you can use all the post-exploitation
stuff right there. It's really easy. I will talk a little bit about Seccomp and SELinux. These limit what processes can do inside of a host. Seccomp is really cool. It whitelists system calls to the kernel. There are some system calls that have to be blacklisted, that have to be denied, because otherwise Docker security wouldn't work at all. And that's why it exists. But you can whitelist as many or as few syscalls as you want. This is a great idea and it's really cool, but I think that unless you are at a place of maturity where you are, you know, protecting your stuff; unless you have a really good story around the sanctity of your images, I don't know that this is going to help.
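For reference, a Docker seccomp profile is just a JSON whitelist. The fragment below is a minimal sketch to show the shape, not a workable profile: a real container needs many more syscalls allowed (Docker's default profile allows several hundred), and the exact list here is illustrative.

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "close", "exit_group", "socket", "connect"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

You would apply something like this with `docker run --security-opt seccomp=profile.json ...`. Notice that `socket` and `connect` have to stay whitelisted for the container to talk to anything, which is exactly why syscall filtering doesn't stop the HTTP-based attacks above.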
And, it's useful for mitigating kernel attacks, but it doesn't stop me from making HTTP requests in general. Because what else is a Docker container going to need to do other than, like, make HTTP requests to other Docker containers? That's the whole architecture. Like, you're going to have to whitelist the stuff that I care about, right? And yeah, make sure that you've done all of the basics before you start pushing on this for people. This is a good slide: Collaborate early. And this is really difficult. I think that it is difficult for security people to make their voice heard in the early architecture conversations that people have when they're setting this stuff up. And often what happens is, people will set stuff up, it'll be a test, it'll be something
they're playing with, and then it'll immediately ... You know, that becomes production now. And all of the loosey-goosey stuff with the APIs becomes just the way we do things. And once someone has built automation around that, you're really screwed. Because it becomes a lot more onerous to figure out how to fix it. So you should go get involved early. And make sure that you, you know, do training with your engineers. Figure out what they're actually working on. Ask them questions. And understand the problems that they're trying to solve. Because the problems that they're trying to solve are not easy. It's not as easy as just saying, like, "put authentication on that." Like, you have to have a plan for, "well, how do you manage secrets? What's a
safe way to manage secrets?" And secrets is a huge part of how to make this work right. A lot of people have talked about secrets management in dynamic environments that are better at it than I am. So I will refer you to them. Turtles All the Way Down is a good thing to look up. There's someone at Netflix that gave a talk about how they do secrets management. TL;DR signed attestations provided by AWS, so, you know, pass off your 'turtles' to AWS and let AWS do it. I think that is a pretty solid solution. But it's one thing to say it and another thing to implement it. This is a long term project and it is a dependency for this kind of orchestration
work. So, get in on it. And yeah, you should threat model stuff often because it's going to change a lot. They will change directions often and they will discover new problems with what they're doing often. And they will have tight timelines to fix them. So you really need to pair. You need to be embedded. And you need to threat model their stuff continuously as they come up with new ideas. So, let's say you are past that stage and they're already running stuff and it's not secure. What do you do? I think that doing the purple team thing is a really good way to do it. And it's one thing if a, you know, 'cause anyone you hire to do penetration testing
should, like, already know about all this stuff and be able to tell you immediately. Like, "You left your Docker sockets open. You need to fix that." It has a lot of impact if you, yourself, go and exploit these issues for your engineers. If you're embedding with engineers and trying to help them do Docker security, being in the habit of doing write-ups for them, showing them what you're doing, explaining how it works, can really, really get their attention and have a real impact on them. Because they realize, "Okay, it's not super genius hackers that we hire to do a pentest that did this. It is people who, like me ... it's just Josh. He's just like an engineer. He's
not a super genius. He has a similar background to me. He comes from the same place. I understand him and he walked through this with me and I understand. Oh, I could have found this. I could have figured this out." And ask them to think evil with post-exploitation stuff. And go, "Well, what would you do if you were in a container and you were in our environment? How would you wreck our day?" And they will come up with some great stuff. They will come up with really good stuff. 'Cause they know where the bodies are buried. And if you can, like, engage them in conversation that's getting them to talk about that stuff, instead of a conversation where they're defensive,
that's really, really good. And yeah, show them that you're on their side. You know? I think a lot of times, a lot of times the discourse around these kinds of issues that are, you know, "no-brainer" issues isn't very helpful. And definitely, like, take a tack of being helpful. Take a tack of understanding the business use case for what they're doing. And help them compromise and figure out what to do. So, in conclusion, Docker is powerful and exploiting it is powerful. It's real good stuff. And the historical issues are still exploitable in many cases. You will find this stuff. You need to be very careful with images, with build environments, and with registries. And you should be demo'ing these risks for your engineers and showing them what's up.
Cool, does anyone have any questions? No? Sweet. Oh, right here? What's up? [Audience:] I'm curious, so you made a point about talking about base images not being [inaudible] ... What are alternatives to base images if you're not already down that path yet? [Josh Farwell:] Sure. The question is, I talked about base images a little bit and about how it's a little bit of an anti-pattern. How can I deal with that? Um, I deal with it by copy pasting stuff, honestly. And I think that, like, in a lot of cases ... So, what people are trying to do, they're trying to save themselves work. And I think that the pattern of doing a base image works really well for things within
a specific project that have shared dependencies. But it also serves engineers really well to understand what their dependencies actually are and how to bootstrap a Docker container for it. Like, it's not ... It's installing some packages and doing some configuration, and I think it's, like, reasonable to expect them to be able to do that. And I think that when we talk about, like, do engineers need to understand the orchestration stuff, do they need to really get into how Docker works? Not necessarily. But they should know how to, like, build from an image that has been signed, that's in the library in the Docker Hub ... So, yeah, I think that you will repeat things, you will install a
lot of the same packages; you should just put those in the Dockerfile. And I think that, like, Dockerfiles that go all the way down in one file are really easy to read and really easy to debug. Because then you don't have things happening in a binary image over here that you need to go discover. So, um, yeah. Does that answer your question? [Audience:] Yeah. [Josh Farwell:] Cool. Any other questions? What's up? [Audience:] It's kind of crazy that Docker registries don't have authentication by default. [Josh Farwell:] Yep, correct. [Audience:] And so, what are the options for locking them down, and how hard is that for companies to do? [Josh Farwell:] Sure, um, so, the Docker registry does have some hooks for doing, I believe
it's LDAP authentication. And they'll sell it to you. That's the easiest way, is to buy it from them. I've also seen folks put stuff in front of registries. Like, doing HTTP auth with LDAP. Again, I don't think that authentication is really the end-all, be-all of protecting the registry. Like, if it's, you know, one set of credentials away from getting pwned, like, that's not solving the problem. And, yeah, I think, like, really the way to deal with the problem is to use Docker Notary and do signed commits. And I think that, like, you know, you should think about it exactly the same way you think about your source code. Like, we have 2FA on our source code. We are pretty crazy about
checking the logs and making sure that stuff is good there. Like, do the same thing with the registry. And you know, it's HTTP, so you can proxy it and you can look at the access logs, and you can see when people are pushing things and you can do alerting on that as well. So ... Any other questions? Nope? Cool, thank you. [ audience applause ]