
BSidesIOWA 2015 Track2: Secure Process Isolation with Docker by Greg Rice

BSides Iowa · 1:14:13 · 701 views · Published 2015-04
About this talk
Docker is rapidly growing in popularity as a lightweight mechanism to deploy software applications within virtualized containers across any Linux system. Docker provides strong resource isolation and security without the need for separate operating systems or additional virtualization overhead. In this presentation, we will present an overview of Docker and describe its present virtualization and isolation capabilities. The presentation will describe the intrinsic security features of Docker containers, secure configuration of containers, and common techniques to harden the underlying OS kernel. The presentation will review Docker’s benefits and disadvantages over traditional virtualized OS infrastructures as well as other common Linux containerization systems, demonstrating Docker’s effectiveness in establishing process isolation.
Transcript

So, my name's Greg Rice, and I'm here today to present secure isolation using Docker. It looks like we don't have many people in the room. That's okay; I kind of knew this was going to happen. When you're presenting defensive security and someone in the other track is presenting offense, you tend to know the other person probably has the sexier talk. And truthfully, my experience has been that not many people know what Docker is yet, since it's still relatively new. So I want to talk a little bit about Docker today and set the stage for how you can actually be using it within your enterprise.

As I mentioned, my name is Greg Rice. I work as a principal cyber security engineer; that's my official job title. "Principal" really just means I spend less of my time within an IDE these days and more of my time in Word and PowerPoint, trying to convince people of my ideas so that they give me money. My focus is research and development. So unlike some of the people here today who are really in the trenches doing real security work, going out performing incident response, doing malware analysis, securing networks, administering firewalls, I'm more focused on the creation of new security technologies. My focus is particularly in embedded systems. My interests are in critical systems employed in the Internet of Things, where those systems are either legacy and only now being connected to the internet, or systems that are going to come out of the factory and be deployed in a real-world environment for 25 or 30 years. We don't really have the option to quickly deploy patches to those, so these are hard research problems that I like to tackle. I maintain several patents on security technologies. I enjoy the work I do, but I like coming to conferences like this, where I meet a lot more people who are focused on day-to-day security and not on academic security discussions.

So the focus of my talk today, again, is Docker. A little bit of this will be Docker 101; in other words, I want to set the stage for what Docker is and how we can use it. Second, we'll talk about using Docker for resource isolation, using Docker to the advantage of security on your systems. Third, we'll talk a little bit about how I leverage Docker as a hacker: using the Docker repositories to quickly set up development environments.

So what is Docker? Well, it's an open-source platform for the deployment and management of containerized software services. It really consists of two parts. One is the actual Docker engine. This is my runtime daemon, providing a framework for bringing up, executing, and managing containers. Second, it is a hub, a repository of containers vetted by the community, which I can quickly pull and get working. Docker Hub is really just a cloud repository of ready-to-use containers. Think of this as sort of like pulling from GitHub: I find a software package on GitHub I like, I pull it down, and I'm up and running with it very quickly. Or like apt-get: if you work in Debian environments, apt-get is a really convenient means of quickly pulling down a package and having it up and running on your Linux distribution.

So when I say containers today, what I'm really talking about are Docker containers. A Docker container is largely built on libcontainer, a containerization technology built by Google around 2007 or so. A container is really just a lightweight virtualization environment. When we talk about VMs today, we're thinking about big, beefy virtual machines: entire operating system images that I run on top of a hypervisor. A container is much lighter-weight virtualization, but it gives me a lot of additional security, because I can start to restrict what resources are available to that container. This includes things like my available CPU bandwidth, my maximum memory limit, what network interfaces are available, and what software is going to run within the container. In Docker, I do that largely through libcontainer, via control groups (cgroups) and Linux namespaces. Control groups are what allow me to actually limit the resources available to a container. Namespaces within Linux are what allow me to let one container talk to another; in other words, they let me define the interface between containers. That is to say, I could bring up one virtualized container that operates a SQL server and another virtualized container that operates something like Apache, if I'm creating an environment where I want to run something like a LAMP stack.

So this should sound pretty familiar, right? Jared just presented on sandboxing technology. Things like jails have been around for a long time. chroot was very popular, particularly in the early and mid 2000s. LXC, Linux containers, has been around for a long time; in fact, Docker was originally built on Linux containers, and today, as I mentioned, it's built on libcontainer. It's existed in the Solaris environment too: if you're familiar with zones on Solaris, containers are very similar. This technology has been around for a while in that sense.

But what I like about containers is that they're very lightweight. Again, when I virtualize an entire VM, I'm virtualizing an entire guest operating system. And we've seen this again and again in today's demos: all of us are essentially code hackers; we want to bring up development environments where we can try new tools. That may mean running several versions of Kali Linux, or old versions of BackTrack. I may want to quickly bring up an Ubuntu environment or a Red Hat environment if I'm doing some initial testing and want an environment where I can poke around and run some of my tests. All of that's very easy to bring up as VMs, but a VM is a lot of overhead: I have a hypervisor running now, and I have to virtualize that entire guest OS infrastructure. Whereas in the Docker world, all I really care about is a particular software service that I'm running. That could be something as simple as running Apache. Or I'll give you a recent example of a case where I brought up Docker to try something.
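That lightweight, disposable quality can be made concrete with a single command (the image tag here is illustrative):

```shell
# Start an interactive Ubuntu userland in a container: no hypervisor,
# no guest-OS install, and --rm deletes the container the moment
# the shell exits, so nothing is left sitting around afterwards.
docker run --rm -it ubuntu:14.04 /bin/bash
```

Compare that to provisioning, booting, and later deleting a full VM for the same throwaway experiment.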

The CDC, the cyber defense competition at Iowa State University, if you've taken part in it, every year distributes essentially a web application to the whole red team. So you have the code in advance of the event and can start to poke around for web application vulnerabilities. But every semester, it seems, that web application changes dramatically. Sometimes it's just a PHP web page; this past time it was entirely written in Django. And I get that as a hacker and I'm like, man, I don't know anything about Django. All right, I'm willing to learn; I love learning new things. I'll go off and start to learn about this, but man, I've got to get a Django environment up. And what about after the competition? I don't want that sitting around and running. So it could be something where I'd spin up a VM, put it all in the VM, and once it's over and I'm finished testing, I blow away the VM. But again, that's a lot of overhead. I've got to wait for that VM to boot. I have to make sure I have all of the dependencies within that environment. I have to get Django up and running within that VM. That takes a lot of time and energy, and a lot of computing time. I'd like something much more lightweight. This is where containers become so useful: I still largely have the isolation of a virtual machine, but I'm only concerned with virtualizing just the service that I want to run.

Now, you might argue that in this case you don't have quite the same security, and I would acknowledge that. There are certain resources within a containerized environment that are shared. I point out a few here: things like /proc/sys and /proc/irq, some of the devices, like the /dev/sd devices, and low-level kernel functionality shared across the containers. You're essentially running on top of the host kernel, so some of that is shared as well: things like your Linux drivers and your LSM modules. So technically, if you had a very low-level kernel exploit, that is something where you could break out of your container. But really, it's not about low-level kernel exploits here. What you want to be able to do is bring up quick environments with virtualized services, and bring those up in a default and

minimized state.

So, Docker hasn't been around for that long. In fact, when I talk to people today about Docker, they've either heard of it and are fully committed to it, or they've read about it in the news and kind of know about it, but don't really know much beyond the fact that it's a container technology. It hasn't been around very long at all, about two years now. In January 2013, dotCloud started a side project to look at new containerization solutions. That migrated into an open-source project that was released to the public just a few months later, in March of 2013, and it started to gain a lot of steam very quickly. Fast forward to September of 2013: you already had many, many people using this in production, despite it being in relative infancy, and you start to see a lot of investment in the area. Eventually dotCloud sells off the rest of their business and rebrands as Docker, which brings us even more recently to this past summer and the 1.0 release of Docker. We've seen it a lot in the news now, but many people are still kind of backing away from it, and that's largely because containers to date have been problematic. They historically lacked decent infrastructure. If you worked in security in the early 2000s and looked at creating services that operated within jails or chroots, that took some time and some configuration, and the containers you created were largely matched to that particular distribution. These are the sorts of problems Docker wanted to tackle. They wanted to focus on a very lightweight, standardized container that could be taken from one distribution to another, with no worry from the developer about whether or not that technology would work. Likewise, they wanted to focus on that state of resource isolation: making sure these software services only operated within their restricted environments.

So today you see broad use of Docker, and this is why I say it's been in the news a lot lately. Red Hat is probably the largest adopter of Docker today. The Amazon cloud now supports Docker containers, so I no longer have to worry about spinning up, say, a whole VM within the Amazon cloud; I can be a lot more lightweight and drop Docker containers there. Google, again being the creator of the libcontainer technology that Docker is based on, has been in the news recently as a huge supporter of Docker. In fact, I have a news clipping here that says everything at Google runs within a container. Spotify has dockerized much of their technology today. Microsoft is fully investing in Docker as well. I'm not quite certain what Microsoft is actually, strategically leveraging it for; that company's strategy is never clear to me, like jumping into the smartphone business. I think that ship has sailed. Again, Docker consists largely of two things. One is the execution framework. The second is that cloud repository, or

Docker Hub, that I mentioned before. So the execution framework is where we begin to set the stage for managing containers. Again, to date, raw containers were always very difficult to manage. They weren't standardized; I couldn't just take them from one distribution to another. Docker aims to tackle that problem by making deployment very, very easy. What I really like about Docker, and the reason I first started to use it: I had this problem of portability. I was working on a major program where we had maybe six different companies with 20 disjoint software services, and we needed to integrate all of them into one common solution. Each of our services provided an integral part of that system, but somebody's running on Fedora, another person's running on an old version of Debian, another person's running on BSD. We have all of these different services, all these different dependencies, and we just need to bring them together into one rack. The practical way to do that at the time was to spin up lots and lots of VMs. I remember heading out to one of the first integration exercises, and I brought tons and tons of disks with me, because all I was bringing were lots of VMs: VMs where I was focused on bringing up environments with all of my dependencies built in.

And this is really no different within the security arena, especially as you think about supporting multiple different deployments. You have a cluster, maybe, for developers. You have a backup recovery site that you want to be able to deploy to quickly in the case of a major incident. All of these things tend to use different hardware and have different dependencies, and so we end up with a lot of distributions that we have to manage over time. This becomes particularly problematic, not only to manage functionally, but in security as well. If I have lots and lots of different OS distributions and VMs that I'm operating, there's a lot of overhead in patch management, and in making sure those are adequately backed up or snapshotted over time. There are a lot of different types of operating systems I have to manage, and you start to worry about some real resource requirements as well: VMs are very heavy. And that's what we drove into on this particular program. We had rack after rack after rack, where we were just wasting large quantities of computing power on virtualizing entire OSes. So our focus there was, well, we wanted something more standardized, and someone offered the idea: hey, I've read about Docker in the news, maybe we should check it out. That was what initially sparked my interest in this area.

Docker, as I've kind of alluded to, gets its name from this idea of shipping containers. A shipping container is standardized. I can take a container off a ship, plop it on a rail car, or put it on a truck. That shipping container can go anywhere; I don't have to worry about the platform that's transporting that

shipping container. I merely need to track that shipping container and where it's at, and fill it with good things. Docker takes that model, takes that analogy, and packages up software services within containers such that I can put them anywhere. If it runs Linux, it can run a Docker container, as long as that container is of the same instruction set architecture as the executing framework. By that I mean I can't take a PowerPC container and run it on an ARM processor, and I obviously can't take an Intel container and run it on an ARM processor. I have to pay attention to the low-level machine architecture. But I don't have to worry about, oh, you're on Ubuntu 12 and I'm on 14; oh, I can't run this particular version of Hadoop there because of these dependency issues, it's unstable. I containerize things, and the rest is all easy. I don't have to worry about anything else. And this is what I love about Docker, even more than the security aspect. Container management is very easy. What I do is basically set up a little bit of metadata that describes things like: do I want to open up ports on this particular container? What are its dependencies? Does it depend on any other containers that I have operating? I gave the example of a LAMP stack. I can create containers where I run Apache in one container; I may run MySQL in another; I could run PostgreSQL in another, and Django in a completely different container. I have the ability to separate things and then identify the minimal communication mechanisms between those containers. So I bundle everything up, and now I have a very lightweight, small container, and I'm able to run it anywhere that has the Docker daemon running. So all I need

today to get Docker up and running is to pull the Docker execution framework and get it going on my box. After that, it's very easy to start and stop containers. The Docker execution framework consists not only of the daemon but also of a command-line utility that lets me start and stop things very easily. The Docker daemon listens on a Unix socket by default; within production environments, I can configure it to listen on a TCP socket instead, so I can issue all of these commands remotely as well. The API has a bit of a learning curve, but more recently, as we'll see later, Docker purchased a company called Orchard that had a technology called Fig, which largely focused on this idea of how to quickly and easily configure Docker containers. So this has become significantly easier over time. With that in mind, the Docker execution framework is relatively straightforward and easy today. We'll see a demo in a few minutes where we look at examples of that execution framework.

So, I mentioned Docker consists of two parts. One is the execution framework, again. The second is Docker Hub. Docker Hub is essentially my way of managing different containers. In particular, the public Docker Hub, the public Docker repo, allows me to quickly pull containers that are preconfigured and vetted by the community to be secure. I can quickly pull those containers and get them up and running in my environment. That's very useful as a developer. I gave the example before of bringing up a CDC web server and discovering, oh, this uses Django, and I don't have that installed. Or say somebody gives me something to pentest and it runs MySQL: I don't want to have to worry about getting a MySQL database set up. And I'm a security person, so I tend to be anal: if I'm going to bring up MySQL on a particular box, I want to make sure it's secure, even though I'm just going to be poking around their web app as part of this pen test. And when I'm done with it, since I'm not using it anymore, I want it out of there. I don't want to have to worry about any of that, and that's what Docker container management has made exceptionally easy. What I do as a developer today, when someone gives me new requirements: I quickly, within my development environment, go out and pull a container image from the Docker repo. And the best part is that it's really not unlike apt-get, in the sense that I can pull something and it will figure out the dependencies based on the image I'm pulling. I can even pull entire base operating system images as container environments. You can pull, for example, a virtualized Ubuntu

image, or a virtualized Fedora image, all from this environment, and be up and running right away. Once I do that as a developer, I can start to add my applications on top. So, to return to my earlier example of the CDC website: it's running Django, so I'm going to pull probably two containers from the Docker public registry. The first is going to be the Django container. Django, if you're not familiar with it, is really just a framework for doing web development in conjunction with a SQL server, so I'm probably going to have to pull a SQL server container as well. I pull both of those, and now I can start adding my proprietary content to those containers. In my example, I just want to run the CDC web server, so I put that on top. I can add some metadata around it: in other words, how I want that website to auto-run each time I launch the container. Maybe I set up some configuration settings around it. At that point, if I wanted to share it, I could push it to our own internal Docker registry. You'll notice here I'm using the terms push and pull. When I say Docker registry, what I'm really referring to is essentially a Git-style repo of container images. Once I have that internal repository, any of the environments I operate in can easily pull from it, and as long as that environment is running Linux, or has some variant of Docker (you can run this on OS X, for example), it's able to get that container up and running quickly. It's as easy as a git pull, assuming the Docker daemon is running. So this gives me a lot of flexibility now; I don't have to worry so much about dependencies. I can remember developing web applications in the late '90s, when web hosting providers just weren't ready for dynamic content on the web yet. You'd spend a lot of time on the help desk with different individuals, because the only thing they were configured for at the time was hosting static HTML. Trying to get web hosts at the time to run things like Perl on a website was a difficult issue, and you had to walk people through all these dependencies. Now, this is very simple. And I can do the same thing in a cloud environment like Amazon: I can pull directly from my container registry into the Amazon cloud and get that up and running very quickly.

So it's worth talking a little bit about how to build containers. Containers are composed of a baseline image. This can be large or small. For example, say I just wanted to be able to run PHP. I can pull a small image that simply gives me the ability to execute PHP, and there's probably at least some base around that.
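The build recipe for a container is a Dockerfile, which captures exactly this layering: a baseline image, then services, then your own application. A minimal sketch, with illustrative image, package, and path names rather than the speaker's actual demo:

```dockerfile
# Layer 1: baseline image pulled from the public Docker registry
FROM ubuntu:14.04

# Layer 2: service on top of the baseline (Apache with PHP here)
RUN apt-get update && apt-get install -y apache2 libapache2-mod-php5

# Layer 3: my own application content on top of the service
ADD ./my-webapp /var/www/html

# Expose the service's port and define the container's start command
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Each instruction adds a layer, which is what makes the revision-control story discussed next so cheap: only changed layers need to be pushed or pulled.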

Sometimes that base is a bit of a baseline file system. I can also pull complete baseline operating system images; for example, in my sample here I have Red Hat Enterprise Linux operating as my baseline image. I can then add services on top of that container; for example, I may want to run Apache. And finally, I add my application on top of that. What's nice about this is that I can pull these standardized, vetted containers from the public Docker repo, and that gets me up and running very quickly.

The other thing I really like about Docker, from a configuration management perspective, is the ability to do quick revision control. In my earlier example, where I have everything hosted as VMs: once I'm done with a VM, I take it down, image it again, put it back on my hardware at home, and start making changes. Come the next integration event, now I've got to take that whole VM with me again. I'm always concerned with revisions over snapshots, so I tend to be hauling around these large images. Whereas Docker has a nice environment where you're really only pushing the modifications to these containers. You'll see in the graphic here: I have a container in the top left corner. I make a modification or two to that container and push it to whatever my container repository is. Out in the field, within my development environments, I can then quickly pull just those small modifications and get them running, without having to move around a whole lot of container data.

So, advantages of Docker, particularly from a security perspective. One, it's very minimal. These containers, the baseline images, run exclusively what you want them to. If I pull an image today, say something like Apache, I can define the version of Apache that I want, but when I pull it, it's going to run just Apache. If I want to run something additional, I have to define that as part of my metadata. If there's an Apache module I want to run, I have to define that as part of my metadata. They're very portable and easy to share. What I really like is that they have a strong separation of privileges. This is relatively new, but all of the images are now being vetted by the community to make sure they don't carry any sort of malicious content and that they're set up as secure by default. Within these environments, as I've pointed out before, you have strong isolation as well: if I take the time to make sure my containers are set up with minimal resources, any compromise of a particular application tends to stay limited to that containerized environment. It's also very fast to deploy within the Docker environment; I'm not so worried about dependencies here, since Docker takes care of all of that automatically for me. There are complete environments now, new systems based on Docker, that focus on this deployment problem. For example, Go CD (Go continuous deployment) is a complete framework for quickly testing images within a development framework and then, once you've gotten them through security and regression testing, quickly pushing them out to production. There are other frameworks that manage containers for you, let you see images over time, and make sure your baseline image contains everything you need.

So let's get a little more detailed so we can do the demo here. Remember, I mentioned before that any container contains a baseline

image. These are baseline, vetted images that I pull from Docker Hub. So in this case I have FROM dockerfile/tomcat, a Tomcat app server. That's the baseline image I want to pull: I want to operate a Tomcat server within this environment. I then add the web apps on top of that that I want to execute. So, right: I start with a baseline image that I'm assured has some security, and now I add my application on top of it; in this case, I say ADD my web app. And now I can start exposing things from that container. By default, none of the ports on the container are exposed: it just brings a Tomcat server up, running within that virtualized file system, within that container environment. But ideally I'd like to start exposing some things. A web server is not very useful to me unless I can actually access it over a TCP port. So here I expose port 8080. What that essentially means is: for this container, expose port 8080. In fact, I don't show it here, but you can then map that to a port that's open on the host operating system as well. In this case, what I might do is an expose 8080:8080, meaning I expose port 8080 on the container and have it open on my base operating system as well. This idea of exposing lets me quickly configure how individual containers will talk to one another over ports, and I can give the container some initial commands to start processes. In my case, I have a simple service kickoff, and I start the logging process.

Now, as I mentioned before, when Docker first arrived on the scene, it wasn't particularly user-friendly. It had an environment that was largely command-line driven. You could do a lot within that environment, but if you go through the Docker documentation (it all still exists today), there are a ton of command-line options. It's sort of like tackling Nmap for the first time: Nmap has a ton of command-line options, and it's hard to wrap your mind around all of them quickly. But fortunately, there are environments today that make this very easy. Today, Fig has been integrated into Docker; we call it Docker Compose now. Essentially, what we do

is set up an environment, define what we want our environment to look like within a YAML file, and use that to quickly deploy multiple containers for Docker, all using this Fig environment, or Docker Compose. This makes things very easy. So let me give you an example of a YAML file for Docker Compose. You'll notice here I have two particular images. Can we make this bigger? Is that helpful? So, I have two images. The first is the database image, which I define as being named db. The second one is the web image; I call it, of course,

web. My database image is going to be based on the baseline image for Postgres. Notice in this case I could give it a version, but I don't, so by default it's going to be the most current. What I'm saying here is: hey Docker, go out to Docker Hub, your public Docker repository, and if I don't already have it, grab the most up-to-date image for Postgres. And within this database environment I can set some environment variables. I set two silly ones here, just a username of postgres and a password of blah. I'm showing a very simplistic

example; I could set lots of environment variables here, for example the default database name and so on. My second container is going to be a web environment. The important thing to note here is that I set up a virtual file system. In this case I'm putting everything in /web. If you're familiar with jails and how we limit file systems within jails: I mount things on /web, and I'm limited to this environment, this mount for /web. I want my web environment to operate a Django server. If you're familiar with Django, it's based in Python, so I call python manage.py. I'm

going to run a Django server on the local host on port 8000. That means I need to open port 8000 on the container as well. And here I map port 8000 to port 8000 on the local host, so anyone connecting to port 8000 on the local host is passed directly to that container. Here I'm starting to build on top of this concept of namespaces: I want to be able to link these containers in some way. I've isolated my web container; it operates within one Docker container. But it's a Django environment, so it needs to be able to pull from a SQL database. I

want it to be linked to, and dependent upon, a different container. In this case, I'm describing this link to be my database container, the database defined here. So now if I launch the web container, it will automatically launch the database container, so it can actually operate within this framework. Again, the important thing to remember here from a security perspective is that each of these operates within an individual container. If there's a zero-day discovered tomorrow for Postgres, and someone... well, I haven't really exposed anything here, so maybe

this is a bad example, but if someone were to, say, compromise Postgres because I exposed something more than I should have, their attack is limited to that environment. Breaking out of that container is going to require, potentially, a low-level exploit.
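The Compose file being walked through here would look roughly like this in the fig.yml / docker-compose.yml syntax of the time. The credentials and the volume path are placeholders reconstructed from the description, not verbatim from the slide:

```yaml
db:
  image: postgres            # no tag given, so the latest image is pulled
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: blah  # placeholder credentials, as in the talk

web:
  build: .                   # built from the Dockerfile in this directory
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/web                 # the jail-like mount: the app lives under /web
  ports:
    - "8000:8000"            # host port 8000 maps to container port 8000
  links:
    - db                     # launching web brings up db first
```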

Oops. So, let's switch over to a demo. What I want to do in this demo is really demonstrate two features of Docker. In one case, I'm going to do a simple pull from the Docker public repository to get a full image up and running. It's going to be a simple image: I'm just going to run the hello world application, what we're all familiar with as software developers, but on top of an Ubuntu baseline image. My second case is the one I've mentioned before: I want to operate Django and SQL within both

container environments. So, let's kick off. I wonder if I can make this bigger and easier to see.

All right. So, what I'm going to do here... actually, let's do this. If I say docker images, this will show the images that I presently have installed. The first one here is a Python image. Notice I'm not running the latest instance of Python; truthfully, I don't know many people that actually program in Python 3.x, so I'm running Python 2.7. I have a Postgres container as well. So here are my Docker images; these are things that I've pulled from the repository. There's one more at the top: that's one that I've built, docker-test web, for this particular demo. And you can see their virtual size there on the right,

and the time when they were created. One thing to point out: this isn't the time I created them on my system; it's the time they were created on the Docker Hub repository. So the Python image was last updated with system updates two weeks ago; same goes for Postgres. Now, I want to be able to run hello world. To do that, if I switch over to my other tab here, I'm going to create a command: docker run, in this case. And this is what I really

love about Docker: I haven't done anything else yet. I'm just going to say docker run and tell it what I want it to run. In this case, I want a containerized image of baseline Ubuntu 14.04, and once it brings up that baseline image, what I want it to do is just echo hello world. So I kick this off. Remember, though, I don't have an image for Ubuntu 14.04. So what happens? It says, "Well, I can't find it, so I'm going to go out and get it." At this point, it's downloading an image, and it's a big image. So, let's
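The command in this part of the demo reduces to a one-liner; it assumes a working Docker daemon and network access to Docker Hub:

```shell
# First run: ubuntu:14.04 isn't cached locally, so Docker pulls it,
# then runs the command inside a fresh container and exits.
docker run ubuntu:14.04 echo "hello world"
```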

switch gears; we'll come back and check on that in a minute. [Audience question] All right, so: this is running on my laptop, with the Docker service installed. Yep. And that's important to note. This laptop I bought used off of eBay five years ago. Normally the laptops I use are ones from work, but this is a presentation I'm personally doing, so I brought my personal laptop. An old laptop. It's a

Core 2 Duo, not much RAM. The thought of running a VM on this laptop would scare me. At this point, the laptop's probably on the order of seven or eight years old; it's not very beefy. And you'll notice with any of the speakers today when they switch to VMs. I'll pick on Andrew this morning: he tried to switch from his base operating system, OS X, to a Fedora image, and his demo, particularly early on, was very slow. Because really, even a new laptop doesn't have a whole lot of processing power for a complete virtual machine

infrastructure. Did I switch? Oh yeah, I did. Let's see how this is doing. Oh, it actually did download. So that downloaded Ubuntu and did its thing; it ran hello world. All within a few minutes, I brought up an entire environment, a baseline Ubuntu image, without having to go through the process of spinning up a VM and waiting for it to boot, then installing from the baseline image. Say I'm booting up the VM off a standard ISO disc: then I have to go through the installation process, and eventually run apt-get update. All of that's been done for me.

I've pulled the most recent, apt-get-updated image from the Docker repository online, and I've run my command. And I can make this more interactive. So now if I do docker images, all right, I see that I have Ubuntu installed now. It's tagged as a number of different things, such as 14.04, latest, and trusty. So they've tagged it as a number of different things, but it's really all the same image: you'll notice if you look at the image ID, all of these point essentially to the same thing. And it's relatively small. This is useful to me when I'm

debugging applications that are running on top of a Docker container: I can make this more interactive as well. So what I'm going to do now is say docker run, the same command as before, only in this case I want this to be interactive; I'm essentially going to run a terminal. So instead of running echo hello world, I'm going to open up a bash prompt. And in this case I've set this up with the commands such that we open up, oops, a root prompt. So notice there: boom, I'm up and running right away. I don't have to pay that penalty anymore of downloading an

image. I'm up and running within that environment, in a shell, right away. And now I can start to poke around, and since I'm running a baseline Ubuntu image, I have essentially a full containerized image of Ubuntu. So let's exit out of that. All right, so let's do one last thing, and that is: can you remove things easily? Yes, very easily, exceptionally easily. So in this case now, I have four images: my Ubuntu image, my Postgres image, my Python image, and docker-test web. Let's get rid of docker-test web and build it from scratch. So, I'm going to remove... what is it, D6? You
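The interactive variant demonstrated here swaps the echo for a shell; again, this assumes a running Docker daemon:

```shell
# -i keeps STDIN open, -t allocates a pseudo-TTY.
# The image is already cached, so the root prompt comes up immediately.
docker run -i -t ubuntu:14.04 /bin/bash
```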

can also just give the name. Oops, I'm probably still running something. Let's see, I probably have it in my history somewhere. Yeah, there we go. I'll stop the images I'm running. Come on.

Oops. If I spell it

correctly. All right, let's get rid of stuff. And I'm going to get rid of that one container. All right. So now if I do a docker images, I should see I'm just down to Postgres, Python, and Ubuntu. And the question, for those of you on the video recording, is: can I quickly layer things on top of these? In this case, I've already pulled two images. One is just a Python image; all I can do there is run Python code. The other is just a Postgres database; all I can do there is operate a database. But I want to be able to quickly layer things on top of this. So enter my YAML file again. This is
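The cleanup being fumbled through here is a common sequence: a running container blocks image removal, so containers get stopped and removed first. The IDs are placeholders for the prefixes typed in the demo:

```shell
docker ps                     # list running containers
docker stop <container-id>    # stop anything still running
docker rm <container-id>      # remove the stopped container
docker rmi <image-id-or-name> # now the image itself can be removed
docker images                 # confirm what's left
```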

the one I had earlier. Let's start out with the YAML file, then we'll look at the Dockerfile and my requirements file. In my YAML file, I create an image here called database. It's going to be based on Postgres, and I set up some environment variables. Again, I have a second image, called web. This is the one that I'm building; it's linked to this database image. And notice here I call out a command: python. So Docker now knows that we're essentially going to need Python within this environment. I can contrast that with my Dockerfile. Maybe

I should have started with this one. Within my Dockerfile, notice that I'm going to create my new image from the Python 2.7 image. I can set environment variables, but what I really want to do is set up my virtual file system within that image. So, I make a directory called web and set that as my virtual file system, and I add a file called requirements.txt into that web directory. The run command essentially allows me to run particular Linux commands within that container. So notice the first thing I did was make a folder, set that

as my work folder, essentially my root file system. I add requirements.txt within this container, and then I do a pip install of requirements.txt there. So let's look at requirements.txt. My baseline Python image is just Python: I don't have any of the PyPI modules that I might want as part of my baseline Python install. But I'm running Django, so I'm going to need Django, and in this case I add one other module. So what I have at this point is
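Put together, the Dockerfile and requirements file being described would look roughly like this (module versions omitted, as in the talk):

```dockerfile
FROM python:2.7

RUN mkdir /web                 # create the directory that holds the app
WORKDIR /web                   # make it the working directory from here on

ADD requirements.txt /web/     # copy the dependency list into the image
RUN pip install -r requirements.txt   # install the PyPI modules below
```

with requirements.txt containing just the two modules mentioned, Django and the Postgres driver:

```
Django
psycopg2
```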

a Dockerfile that says: create an image from Python, create this virtual file system, and install these additional Python modules within that environment. My Fig environment now starts to build on top of that image. Here I set my build environment as dot; that's just going to look at that Dockerfile. Here's the command I want it to run once that container is launched: I'm going to actually open up my Django interface. Here's the volume that I want to mount; I point to the same thing we had before, /web. I'm going to open up one port, and I want this to be

linked to the database image as well. So now with Compose (I've been using "Docker Compose" and "Fig" interchangeably; Docker Compose is my Fig environment), what I can do is kick this off. I want it to run the web image, so it's going to have to build that first. And what I initially want to do is just kick off that baseline Django framework. If you're familiar with Django, basically what I'm doing here is saying: create a default website within whatever volume I mount you

in. In this case, /web. So, I'll kick this off. Notice the first thing it did here: it said I need Python. Python was already there, so it didn't have to wait for a download. I set up an environment variable within Python, so I modified the image slightly. It created a directory called web and set that as the default working directory. Now it added that requirements file, and now it's installing all of those Python modules within that environment. It's installed one; now it's installed Django. All right, I've successfully built this. Oh, I forgot to delete something first. That's probably okay, though. Let's do that again. I forgot to clean up. You know, you

always do these things where it's like: I'm going to test this so many times. And I did; I tested it. But what I forgot to do is clean up my prior test. So let's remove that, and we'll remove manage. Yes. So all I just did was remove the default Django website that I told it to install, because it started to build, and the last thing it did was start to run that command to build the default Django website, and it said: I can't do it. The other thing we should do here, oops, is go ahead and pull that container.

Oops. We'll build it from scratch again, since it's so fast. If I can type...

So, I had been typing the ID before; you can also just type the name, docker-test web. That'll pull it. All right, let's try again. So: grab that environment, start building on top of the environment. Now, I'm installing those Python modules. The first one it downloads is Django; the second is psycopg2, and it's running the setup for psycopg2 now. So I'm building that container. And you'll notice, I continue to stress this, this is what I really love about it: I'm up and running within a framework that I want for my penetration test very quickly. So now

I've successfully built my container. Docker Compose, or Fig, makes launching that container exceptionally easy. If you look in my environment now, I have that Dockerfile and docker-compose.yml. Docker Compose is sort of like a Makefile: if I call docker-compose up, it will essentially run the Compose file within this directory. So I'll say docker-compose up. Oops, what did I do wrong? It's

starting up; I always type it wrong. All right. So the first thing that does is bring up my database; remember, the database is what I base my link on. Then it starts the other container, web. My database comes up (you'll notice it's just called database 1 at this point) and it started automatically. Now my web application starts. Remember, I mapped my container to port 8000 on my laptop, so if I go to localhost port 8000, I should, yep, there's my default Django web page. Now, it used to be that when I did a pen test,
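The launch sequence shown here reduces to a couple of commands; the container naming follows Compose's project_service_1 convention:

```shell
docker-compose build   # build the web image from the Dockerfile
docker-compose up      # start db first (web links to it), then web
# then browse to http://localhost:8000 for the default Django page
```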

I would spend a lot of time creating baseline operating systems, installing VMs, trying to match my target environment. If I had source, I'd want to get it up and running so that I could play with it internally. You don't have to do that anymore. There's no reason to spend as much time as we do building up these large virtual operating system infrastructures when things are so neatly containerized with Docker. That's why this technology is catching on so fast. Again, I've spent the past few months learning about Docker and using it for projects at work. When I

meet people today and talk about Docker, they've either used it, in which case they start to see the benefits and fully latch on to it, or they just haven't yet, and so they haven't drunk the Kool-Aid. Docker has been something that I find, especially with Fig or Docker Compose, very easy to use. There are lots of online tutorials. In fact, if you go to docker.io, the main website, they have an interactive command-line utility with little assignments that you can step through as a tutorial to get up and running with Docker right away. Depending on the

operating system you use, it can be easy or hard to get up and running with the Docker daemon. For example, I run Mint on my laptop. Mint is Debian-based, so you can do an apt-get install of Docker, but that package is a little old since it's Debian-based, so I'd recommend pulling directly from GitHub. If you're on Fedora, it's relatively easy; you just do a yum install. If you're on Mac OS, you can actually run Docker within that environment as well. I'm not a Mac fan, so I haven't played with that, and I can't attest to its maturity there.
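The installs being described are roughly as follows; package names varied across distributions in this era (on Debian-based systems the package was named docker.io to avoid a clash with an unrelated package):

```shell
# Debian / Mint (the distro package may lag upstream, hence the
# recommendation to pull directly from the upstream repository)
sudo apt-get install docker.io

# Fedora
sudo yum install docker
```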

The only thing I haven't found it useful for: so, last night at dinner, when I was telling someone about this talk, they asked me, have you spun up containers at all to try to do malware analysis? Everything's containerized, and you can actually spin up environments like Cuckoo within Docker containers. Typically, though, with malware analysis it's generally better to run with a complete guest virtual operating system: you want to be able to mirror that environment as much as possible as you're trying to reverse engineer a particular

piece of new malware. So that's the only thing I haven't found it useful for as a developer. Questions at all? [Audience: I see it's got a whole bunch of controls around connections to the Docker daemon. What about egress?] Oh, good question. I don't know, because I've been primarily using it within development environments at this point. I haven't used it in the case of a full deployment so far, where I was concerned with egress. I don't know; I'd have to do some research on that. That's a good

question. [Audience: And what about installing something like intrusion detection inside the container?] Yeah, so if you go out and look around, there are a ton of projects on Docker Hub, including projects where they start to look at doing things like host-based intrusion detection within these containers. I should also mention the other projects to check out around Docker. Shipyard is a complete deployment management system. I mentioned GoCD before. CoreOS is a complete infrastructure built on top of Docker; they really have a lot riding on it. But there are a ton. I

mean, this is a project that came out two years ago, and if you look on GitHub, it's one of the top ten most popular projects today. The last time I checked, it has something like 4,100 forks now. There are a lot of people starting to use it. That's why I mentioned: you talk to people today and they either have no idea what it is, or they're definitely all on board. I've been using it myself for only a few months, and I would hardly consider myself a Docker expert yet, but in that time it's been so easy and

convenient to use for development. The maturity they've been able to achieve in a short period of time really gives me satisfaction. [Audience: Have you used Vagrant at all?] I have used Vagrant. Chef would be another alternative to Vagrant, and Vagrant, Chef, and the other provisioning environments all work seamlessly with Docker today. A number of people have mentioned Jenkins in the other talks today; Jenkins now has complete support for quickly deploying Docker containers as well. It's very easy, you know.

So, for example, with something like Chef or Vagrant, I want to be able to spin up small virtual operating systems and provision things within them. Those technologies are completely amenable to Docker as well; I'm just not virtualizing an entire operating system. It's closer to virtualizing, as in my example here, just a Python environment with Django. That's all. And so those environments are very amenable to this as well. [Audience: I noticed you didn't open any ports for Postgres, so it's probably running through a socket, right? I want to say local host, but I know it's not local host. So is

that socket on your laptop, or is it in the system?] Yes, it's on my laptop. And to be clear, that is a completely separate container; I was only using it for the hello world and bash example, so I could blow that container away. What's it called? Docker... oh, right. So I could blow this container away: 09, Python. That's right. But see, especially as a new Docker user, I would get confused by that as well. It's like, well, I can't blow that away, that's my baseline operating system. So, let's do Docker Compose.

So, I've blown away Ubuntu now. If I do docker images, I should only see the three: oops, docker images... docker-test web (that's the one we built), Python, and Postgres. Now I'll go and do a docker-compose up again, and I can easily launch those containers. Good question. Other questions? Yeah.

Yes, and I haven't really seen a strong road map around that, per se. I mean, it's still so new, and there are so many people building technologies on top of it, that I would hesitate to comment on a road map. But they make it very easy for you to understand: what should my baseline image look like, how can I quickly deploy that to a development environment, set up an entire framework around testing it, and then push that to a deployment environment as well. For example, GoCD, which I mentioned before, really tries to

focus on tackling that challenge, but that's a separate project from Docker. If you look at any of the presentations in this area, it's clear that people realize this technology is going to take off. The number of forks on Docker, for example, is a clear indication: there are a lot of people trying to hitch themselves to that wagon. So what really takes hold and becomes a standard across the community, I don't think anyone can comment on yet. Yeah. [Audience: You said earlier that you use this in your exploration, trying to exploit those systems?]

So, I mentioned early on that I work in the area of embedded systems and the Internet of Things. Often, as I look within those environments, you get to a point where you want to be able to create something very quickly that sort of emulates a particular system you've encountered. So I'm largely looking at this from a defensive perspective. I may do a penetration test, but it's largely because I want to make sure that something's configured correctly, or I

just want to understand what this operating environment looks like, because I might not be familiar with it. I'll also say that I use Docker a lot personally now, as in the example I gave before of the CDC. I used to work for a long time as a penetration tester, so people will come to me and ask questions regarding a particular vulnerability analysis. I may not know much about that area, but I'm willing to tackle it; I just need to be able to quickly provision that environment and get it up and running right away. That's where I've used it more

offensively. Other questions?

[Audience: As far as the question of it using some shared resources: would you be able to monitor some of that locally? Because that's one of the challenges of a virtual environment. If you spin up 10 or 15 machines to simulate an application, like one of the talks mentioned earlier, you've got to buy licenses for all of those things. Since I think it's opening up sockets locally, could you theoretically just monitor on your host and watch those sockets? Could you

probably do that?] I haven't explored that area much, but you should be able to, yeah.

[Audience: I know that sometimes some applications will just straight-up break when you try to do that, but I'm just curious, thinking out loud.] I've never explored that. Primarily, as I mentioned, I'm a researcher, so there are plenty of other people that do the daily fight here. What I would do in terms of intrusion detection technology is research new types of analytic techniques; I'm not necessarily concerned with how I'm going to deploy this. So I haven't explored that, but my suspicion is that it's very possible. [Audience: Well, I'd be curious too, because one of the

biggest challenges of things like AWS is that you can't do NSM in those virtualized environments.

Yeah, I'd be curious if that would be a way to have containerized systems that you can still monitor centrally.] You could, theoretically. Ben, did you have a question before we break up? [Audience: Okay, so I've built up my bundle with everything I share. Do I just share the scripts to build it, or do I have to share the whole thing?] It really depends on your environment. You could push all the way back to Docker Hub and share it that way, or you could just push your image to your internal Docker registry. In the case of work, that's usually what we do. It's like,

okay, I've got this all up and running here; pull it and you'll get your image. Or you could share the scripts. Typically you just share the image, because as I mentioned before, it's very easy to do revisions on the image. So when I do another pull, it's just like Git: I'm just pulling the changes. It's not like I have to snapshot a VM and share the whole snapshot with another person; I just want to be able to quickly share the revision. [Audience: All the modifications you make: can you fire up an interactive session to make customizations?]
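Sharing via a registry is just a tag and a push; the registry hostname here is a placeholder for an internal registry:

```shell
# Tag the local image for an internal registry (hostname is illustrative)
docker tag myimage registry.internal:5000/myimage

# Push it; colleagues then pull, and only changed layers transfer,
# much like pulling changes in Git
docker push registry.internal:5000/myimage
docker pull registry.internal:5000/myimage
```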

Good question. So you can; to your point, it doesn't have to be scripted out, and in fact, as I mentioned earlier, I will typically put the container into an interactive mode when I'm debugging. If I have the script, if I'm trying to get an initial environment up and running with everything that I need, and something keeps breaking and I don't know why, I'll say: well, just give it to me interactively. I'll debug it and figure out what's wrong. But once I want to start putting on my applications, I want to make sure that, okay, let me figure this out, let me start to add things over time.

Right, the file system is permanent. So as I make changes, I can get this to a point where I have my updated container, and now I want to do a push and share it with other people. The scripting makes it easy, from my perspective, to get a new environment up and running right away if you're pulling from standardized images. [Audience: Last question. A lot of the virtual operating systems have proprietary or open-source standards for how their file systems work. I take it that's one of the differences:

those shared resources, like the file system, tend to be abstracted away?] Yes, and I'm trying to remember, is it AUFS that Docker uses? That's the underlying file system. But yes, that's right. So, I really appreciate it. These were very engaging questions. Thanks. [Applause]