
i am kelly i'm one of the staff here but i'm not as important as david and matt behind me who are about to give a talk on encryption keys it's a secret to everybody i hope you all get that reference and aren't too young for it anyhow we are recording this there is an av stream so if you miss anything or if you want to see the slides later we're recording it and there's a stream so don't worry too much about it we'd also like to thank our sponsors all of our sponsors but especially our diamond sponsors lastpass and paulo today and some of our gold sponsors including google intel and bluecat you know we're all important you're all important but sponsors definitely help us out here one more thing is of course we all have cell phones we don't want to hear each other's cell phones for the next hour so just take a moment make sure they're on vibrate silent etc and of course we are recording we'd prefer you not to be recording and obviously besides we don't allow recording or pictures unless everybody in those pictures or recordings gives consent i've done my job and now the guys are going to do theirs enjoy
all right ready yeah uh hey everyone uh thank you very much for taking the time to come out to our talk we're really excited to be here at b-sides and we're even more excited to share our presentation with you today um if for some reason you can't hear in the back please let me know this is actually my first time using a microphone so uh my posture might be a little off so just give me give me a hand up or something if if it's not good um as we continue to go um so the title of our talk is actually driven by a single question which is whose encryption key is this we admit this is a terrifying question
to be asking as a security professional but today we're going to share a story with you about how log services and encryption configurations interact together with each other by the end of this talk we hope you have a better understanding of some of the nuances of how aws log services and encryption configurations interact with each other we'll also show you some tangible action items that you can take inside of your own environment as well as remediation paths if you do if you are facing any of these issues before we get started into the story we wanted to quickly introduce ourselves my name is david i'm currently a security engineer at benchling focusing on detection response
prior to this i was at apple doing security for three years and i've been heavily involved in the aws security space for about the last five this is a shameless self-promotion but i recently started a blog on cloud security if that's your cup of tea would love for you to check it out it's called simply cloud sec it's just on medium if you have any feedback we'd love to hear it hi my name is matt i am on a sibling team to david's on the cloud security team at benchling prior to that i was at workiva for nine-ish years i think and i've been around aws and security in aws for around the past decade
to give you an idea of what today's talk is going to look like matt and i are going to set the stage for the problem we were trying to solve before our adventure started we're then going to walk you through exactly what our story was what we found what the problems were how we fixed it we're going to share some remediation paths for you as well in case you run into these same issues inside of your environment and at the very end we'll share some kind of closing thoughts and have some time for q a so first i want you to put yourself in my position as a cloud security engineer at benchling you're doing whatever tenured
cloud security engineers do it's probably like reading hacker news finding who's wrong on the internet and correcting them and david comes along and sends you a message and says can you sanity check me on this and what you're hoping you get from david is something simple like one plus three equals four that's still true right but what you actually get from david is do you know if it's possible for objects to still be encrypted with a key that no longer exists and now you're like ah like i'm not going to do what i said i was going to do at stand up and this is going to be a whole day and you start to feel a little bit like
our favorite guy here world's starting to burn down and we found ourselves in an interesting situation where we were unable to access network logs in an account that we needed to but before we get into that we need to give you a little bit of background on how some of these aws services interact so we're going to do a real quick refresher on how logging works inside of aws so there's a lot of different places where you can get visibility inside of your operations in aws so you can get things like network traffic through your vpc flow logs s3 server access logs control plane logs audit logs pretty much anything you need this is really awesome and one more
benefit of this usability for customers is it's all managed by the vendor it's all managed by aws you don't need to run your own logging services you just kind of specify configuration which involves a destination where your logs should go and then things just kind of magically work on the back end for you so this is really really nice especially coming from a blue team side of things where you want to make sure you have the appropriate telemetry a common workflow for shipping these logs away is to put them in s3 for long-term storage from there you can ingest them in your siem of choice or just kind of keep them around for
compliance reasons a very common workflow for these logs when you store them is to ensure that they're encrypted at rest some of these logs might be a little bit more sensitive such as your network traffic logs you may also have company requirements or compliance requirements to ensure that all of your data is securely stored at rest so very common workflow is to encrypt data when it lands in s3 especially for these logging services there's a couple of different ways you can encrypt your data when it gets into s3 of course you can probably do it on the client side inside of your application but this can be very time consuming and a little bit hard to
manage so the common workflows and patterns we've seen actually involve just leveraging amazon services to do server-side encryption in s3 and there's three different configurations that we're going to look at today the first is sse-s3 sse stands for server-side encryption and this is essentially kind of like your most basic setup you have a key that is managed by aws and managed by the aws s3 service you can't really do much in terms of controlling this key just kind of a lot of magic on the back end which is great if you want to get up and running and get started the second mode of operation is kms which is amazon's key management service
in this scenario it's pretty similar to the first one but you do get a couple of benefits notably you can specify key rotation so you can set up automatic key rotation and you can also adjust your access policies so for instance if i create a custom kms key i can control who can access it whether it's inside of my account outside of my account whatever i want to do with it it's up to me the last one is sse-c that's kind of a mouthful we're not going to go into the details on this one in fact for this talk the only one we really need to focus on is the second configuration which is sse-kms
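to make these modes a little more concrete here's a rough sketch of what each one looks like as a default-encryption payload the dict shapes follow the s3 put-bucket-encryption api but the function names are ours and sse-c is omitted since it's supplied per request rather than configured on the bucket:

```python
def sse_s3_config():
    # SSE-S3: the key is fully managed by the S3 service itself.
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    }


def sse_kms_config(key_arn=None):
    # SSE-KMS: optionally name a customer-managed key.
    # Omitting KMSMasterKeyID is a valid configuration and implicitly
    # means the AWS-managed aws/s3 key, which is the configuration at
    # the heart of this talk.
    default = {"SSEAlgorithm": "aws:kms"}
    if key_arn is not None:
        default["KMSMasterKeyID"] = key_arn
    return {"Rules": [{"ApplyServerSideEncryptionByDefault": default}]}
```

note how the only difference between a customer-controlled key and the implicit aws/s3 key is whether that one optional field is present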
taking a closer look at this configuration you can specify which key you would like to use when you're configuring your bucket and there's really two paths forward here the first one is using a special key that aws provides for you inside of a customer's account this is amazon's aws slash s3 key that's the alias that is used for the key and what it's literally called the thing about this key is it's fully managed by aws so there's an access policy that allows anything inside of your account to access this key you cannot modify this access policy so that means you cannot really scope it to certain applications inside of your account more importantly for the
purposes of this talk you also can't share this key externally outside of your account the second option is to specify a custom key inside of kms so as we just alluded to on the previous slide this is where both your key and your access policy are controllable by the customer we showed a screenshot there for exactly what it looks like if you were to try to configure this inside of the console you can see there's literally a key called aws managed key or you can provide your own and specify your configuration putting it all together we just wanted to show a visualization for what a reference architecture would look like i'm very sorry
as a security practitioner to be happy just once there you go you can be happy okay thank you thank you very much i really appreciate it thank you i uh really appreciate you guys coming out thanks appreciate it what just happened and the show will go on um we have two different sides that we look at for these logging services so as with a shared responsibility model and leveraging cloud providers there's the cloud provider side of things and the customer side of things so with these logging services which are run by third party amazon they run in their own account and they have their own workflows and on the customer side on our right side we
simply have the s3 bucket containing the destination and the encryption key that we want to specify so i've spent some time telling you about logs and i can tell i bored someone because they literally came up and gave me a mask to tell me to stop talking about it so let's get back to the actual story of what you're here to listen to today so the problem that matt and i were originally tasked with is the joy of any security professional or in fact anyone who interacts with tech which is an integration plan so we ran into a situation where we were now the new custodians of a brand new aws organization which came with some
child accounts and our job was to integrate this new infrastructure into our existing infrastructure of course nothing could ever go wrong in the situation so matt and i were really excited to get started as with most companies and security organizations out there you have a set of standards that you want your infrastructure to adhere to so something that was really important to us was to ensure that this new infrastructure we were now responsible for was meeting the bar for what we considered to be a high security posture for us internally we came up with a pretty straightforward integration plan the first thing that we wanted to do was ensure we could immediately understand what was happening inside of that
account and for that we wanted to start collecting security telemetry and shipping it to our threat detection pipeline the second thing we wanted to do is just some gap analysis let's understand the new infrastructure that we're looking at and figure out the gaps for where we'd like that new infrastructure to be and the last step would just be working and coming together with a plan to resolve any findings and close the gaps over time as with any multi-step plan that you confidently present to your manager or to your teammates we got stuck on step one which was finding and ingesting the logs so the first log source that i went after was the network telemetry
specifically this was the vpc flow logs and i acquired a role with all the appropriate permissions i had s3 read access to the scoped bucket i could do pretty much whatever i needed to in that account there were really no restrictions and so i poked around i found where the data was being stored in s3 and i was like all right let's carve some network logs and figure out what's going on inside what happened at that point was the least favorite error message for me of all time which is an access denied and i was pretty confused but access denies are fairly common inside of aws usually you're missing a permission there's something you're not seeing it's
totally fine but i double and i triple checked and i couldn't really figure out what was going wrong i had the permissions there were no explicit denies on the bucket resource policy i should be able to read the data that's inside of this bucket so i started kind of poking around because what else was suspicious is that i had access to list all of the objects and view all of their metadata i just couldn't fetch the actual objects myself so i noticed that the objects were all encrypted which when i initially saw this i thought was fantastic hey we are inheriting this new infrastructure clearly there's some pretty good access policies that are in place there's some
good security practices that are being followed this is awesome um again it was a new account for me so i wasn't really familiar with the aws account ids at that point so when i tried clicking on that key i got another error message and it's kind of when i knew i was going to have a bad day so what's happening here is that if you click on that blue link that's hyperlinked in the second screenshot it will take you to the kms console and show you all the details for that relevant key however what was actually happening is that in the console it was taking the uuid from the key in the blue hyperlink appending that to the account that i was
currently operating in and then issuing a describe key operation but when this error popped up and i actually took some time to read the error message i noticed that the account id which was specified on the object in the blue hyperlink was different from the account id that was specified in the red error message and this is when i started to get a little bit concerned rather than confidently moving on to my second step which i had put in writing would be gap analysis i pivoted pretty quickly and went over into mild panic so the stage that i found myself at this time was objects are encrypted and for some reason i cannot access the key to
decrypt them also the key arn and the account id inside of the arn was for an account that i had no idea you know where it was or who owned it so i started thinking of worst case scenarios and what could be happening here first i started off pretty rationally and i said hey this is an integration there is some leftover infrastructure somebody forgot to mention there's a really good reason for this we just have to do some more digging and find it then i went to the other side of the spectrum and said wait what if this is some brand new half-baked cloud-based ransomware where someone had compromised this account and decided to just start encrypting all of
the data inside of it whatever it was audit logs network logs etc and then i also worried that maybe the bucket acls were just misconfigured so someone was having a joke or writing into the wrong bucket and just encrypting with their key instead of anything that we owned it was at this point in time where i decided it was time to ruin matt's day as well and i sent the slack message which kicked off our investigation so matt and i spent some time going back and forth i really wanted him to sanity check me and make sure we were all on the same page of what was going on so we came up with a couple of questions to
kind of drive our investigation the first obviously whose key is being used to encrypt our logs we also wanted to know whose account this key was present in we really wanted to figure out how these logs were being encrypted because clearly there's some sort of process which consistently over time has purposefully been writing these logs into this s3 bucket and also taking the time to encrypt them and last but definitely not least we really need this network telemetry to figure out what's going on inside of this account how do we decrypt this data our plan of attack consists of a couple of steps first pretty basic what are these account ids the second thing we really wanted to
understand was tearing apart the bucket configuration how is the bucket configured let's double and triple check the encryption settings let's double and triple check the bucket acls we really wanted to get an idea for what this bucket was being used for and how it could be getting into this state we wanted to be able to reproduce this behavior so we could confidently state hey this is what's happening and this is how we can prevent it from happening any longer if these first three steps didn't get us to where we wanted to go we were going to open a formal security investigation and start tearing apart the cloudtrail logs for evidence of any suspicious activity
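that first step of what are these account ids really boils down to pulling the account id field out of the key arn and comparing it against the accounts you know about here's a minimal sketch of that check the arns and account numbers below are made up for illustration:

```python
def arn_account_id(arn: str) -> str:
    # An ARN has the shape arn:partition:service:region:account-id:resource,
    # so the account id is the fifth colon-separated field.
    parts = arn.split(":", 5)
    if len(parts) < 6 or parts[0] != "arn":
        raise ValueError(f"not an ARN: {arn!r}")
    return parts[4]


def key_is_foreign(key_arn: str, known_accounts: set) -> bool:
    # True when the KMS key lives in an account we don't recognize,
    # which is exactly the situation that kicked off this investigation.
    return arn_account_id(key_arn) not in known_accounts
```

running this over every encryption key arn you see in object metadata is a quick way to surface keys that don't belong to any account in your organization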
so as part of step one whose key is this you do what you can with any string you've never seen before like david already talked about we'd already checked all the places that we expect aws account numbers to be this wasn't in any of them so i googled the thing and i got the best result you ever get out of google which is zero results so this only pushed us further down the well if it's unknown maybe it's still kind of our thing maybe it's something that's not really well known which you know doesn't give me a good feeling so you feel like oh no guy there
um because i didn't get anything good out of this i moved on to all right well let's just click on it let's just check the bucket in the console and see what it says and it looked like this and if anybody has ever spent time in the aws console looking at encryption configurations for buckets this is what they look like and the thing to see is that there is not a string present in the aws kms key arn space if things are configured on the bucket and it's quote unquote proper you will see a key populated there saying what it's going to encrypt things with there's some nuance in that but that's the gist of it um
so we found this and we're like oh the console's showing something weird like is it buggy is there is there something wrong like this puts us more on the side of like what what do we do okay so we go to the cli let's pull that so we hit the cli with a get bucket encryption and it kicks back this json which might be a little hard to read in the back but the thing to point out here is that you can see that aws kms is the algorithm that's set but there is no line for what the key arn should be to encrypt things with and this is a completely valid bucket configuration it just implicitly means
what david said which is the aws/s3 key alias and we actually flipped this and said okay maybe we can morph this data that we just got into reproducing that configuration so we just took this changed it to a put bucket encryption and gave it this json and it applied just fine which confirmed what we already thought which was this is a valid bucket configuration to be in so then i moved on to let's see if i can fully reproduce this so this is just a screenshot showing my madness at david because i was up until like three in the morning waiting for cloudtrail logs to write twisting all the knobs and
pushing all the buttons that you have between those two services just to see what would happen and as part of that the aws service spit out or we got in a log somewhere a new arn that we had never found before and i just hoped that this one would show up somewhere so i googled this one and it did in one result which is infinitely better or however that maths out from zero and it was somebody on reddit asking a question completely unrelated to encryption keys but what shook out was that this account id is owned by amazon and used to run their logging services out of so that's what we were seeing so this is the first
point for us where we were able to drop the did we screw something up badly do we have shadow infrastructure worry and started to move on to all right maybe this is amazon's services interacting in a weird way
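the bucket state matt kept reproducing can also be flagged programmatically from the get-bucket-encryption output here's a sketch assuming the response shape that api returns the function name is ours:

```python
def uses_implicit_aws_s3_key(response: dict) -> bool:
    # Takes the dict returned by get-bucket-encryption (the same JSON
    # matt pulled from the CLI) and flags the risky combination from
    # this story: SSE-KMS configured with no KMSMasterKeyID, which
    # implicitly means the unshareable aws/s3 key.
    config = response.get("ServerSideEncryptionConfiguration", {})
    for rule in config.get("Rules", []):
        default = rule.get("ApplyServerSideEncryptionByDefault", {})
        if default.get("SSEAlgorithm") == "aws:kms" and not default.get("KMSMasterKeyID"):
            return True
    return False
```

pointed at every bucket in an account this gives you a quick inventory of the buckets that could land you in the situation we were in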
so thanks to matt's 3am adventure which i felt really really bad about because i was fast asleep but also he didn't quite tell me he was going to do that so i didn't feel too bad but more than anything i felt really really excited because now we had a working path forward we were able to figure out that hey this account id that was being used probably belongs to amazon but most importantly matt through his hard work was able to reproduce the exact conditions in order to trigger this behavior where you have data inside of your bucket encrypted with a key that you did not configure and that you don't have access to so the conditions are three-fold the
first is leveraging sse-kms encryption again this is the second encryption mechanism that we outlined at the start of this presentation and the one that we focused on a little bit more the second correlates to the output of the get bucket encryption calls that matt showed you where the kms key was missing so the second condition is the kms key is not specified this means that it implicitly defaults to leveraging that special aws/s3 key which we outlined at the beginning of this presentation if you recall this is a key that is fully owned by amazon and it comes with a trust policy that you cannot control as a customer again this trust policy does not let you
access this key from outside of your own account on its own so far these first two conditions are still okay in fact if you're running services inside of your account things will just kind of work as is because if you have a log writer inside of your account it's able to access this key everything looks okay the thing that tips the scales is when you configure an aws logging service or an external writer to start writing to your bucket so really highlighting the fact that the first two conditions are completely valid now granted it's probably a weird scenario where you're specifying a kms encryption configuration and not specifying a kms key but it's still
accepted and there's no api errors so these two things on their own are completely fine but you start to have a bad day like matt and i did when you combine that third facet which is the log writer so we've kind of understood how these conditions come into place but let's walk through a step-by-step example so we can see how this works so one precursor we want to mention is matt and i do not work for aws we don't actually know how these things work on the back end we know the s3 encryption and decryption operations are a lot more complicated than what's on this diagram however for the purposes of this presentation to kind of illustrate what
was happening we're really simplifying things and showing you this diagram so the flow is the same but this is not an accurate technical diagram so the first thing that happens is again we go back to our original diagram for the aws log writer on the left side you have the aws managed service and on the right side you have the customer configuration where you have a log bucket to store your logs you have an encryption configuration and you have an encryption key the first thing that happens is your log writer generates a log maybe you have some network traffic that came in it dutifully does its job and decides that it's going to write it to your bucket
so the first thing is that the writer reads the encryption configuration to figure out how this data should be encrypted now recall that with this sse-kms configuration that is missing a key this defaults to using that special aws/s3 key which is not accessible outside of the customer's account at this point in time is when we start to get into a scenario that's a little bit unexpected in my experience and matt's experience inside of aws usually when you have log writers or operations that interact with encryption or decryption or even if there's just an issue with permissions the failure mode is that the operation aborts so what we expected to see here was that
this call would error out because the encryption configuration was a little off and that no logs would end up being in the bucket however at this point in time the log writer actually ends up fetching its own encryption key inside of its own account and note that this key as illustrated in the diagram is not owned by the customer is not specified by the customer and is not accessible by the customer account either after this happens this key is used to encrypt the object and it ends up getting written to the bucket so at this point in time after these four steps again you're left in a scenario where your log data is encrypted with a key
that you cannot control and you do not have access to so let's take a couple steps back and go back to the initial questions we had at the start of our investigation we've learned a little bit more as we've gone along so we are now able to answer whose key was being used to encrypt our logs as well as whose account it was it was an aws thing that was happening we were able to figure out how these logs were being encrypted through the four-step diagram we just went through together but we're still faced with that last but not least question of how do we decrypt our logs so at this point in time we became fully
dependent on our provider so again the key doesn't live in our account we don't know how to get access to this key the only thing we could do was engage our appropriate channels through support and work with aws to try to find a resolution we were able to succeed with aws's help and they were very responsive in helping us get our data back which was great but a question that matt and i wanted to kind of ask is what if we were in the middle of an incident or a critical security investigation usually when you're running some sort of security investigation you have questions that you need answers to as soon as you can possibly get them
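to make the four-step flow from earlier a bit more concrete here's a toy simulation of the key-selection behavior again matt and i don't work for aws so the function name and the logic are our guess at what's happening not amazon's actual implementation and the account ids are placeholders:

```python
def select_encryption_key(bucket_config: dict,
                          writer_account: str,
                          bucket_account: str) -> str:
    # Steps 1-2: the writer reads the bucket's encryption configuration.
    algo = bucket_config.get("SSEAlgorithm")
    key = bucket_config.get("KMSMasterKeyID")
    if algo == "aws:kms" and key:
        # Customer-specified key: usable by an external writer if the
        # key's policy allows it.
        return key
    if algo == "aws:kms":
        # No key specified -> implicit aws/s3 key, only usable from
        # inside the bucket owner's own account.
        if writer_account == bucket_account:
            return f"aws/s3 key in {bucket_account}"
        # Step 3: an external log writer can't use that key, and instead
        # of aborting it falls back to a key in ITS OWN account, one the
        # customer cannot see, control, or decrypt with.
        return f"aws/s3 key in {writer_account}"
    return "AES256 (SSE-S3)"
```

the surprising branch is the external-writer fallback where we expected an error and no logs but instead got logs encrypted with someone else's key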
but in this scenario if you're dependent on someone else to get your data for you you put yourself in a situation where you can't really control your own destiny anymore and you're subject to the slas of whatever hosting provider you're using so matt and i spent a lot of time on this more than we probably care to admit and we decided that we didn't really want to deal with this anymore so we put together some things on how you can look for this inside of your environment and how you can prevent it from happening so it doesn't happen to us or you again all right so as part of like how do we find this in our environment one of the
first things that we wanted to do was just inventory how the cli the aws api terraform and cloudformation all treat this the big takeaway is that the aws kms key arn is optional everywhere it's not required on anything at most the docs will tell you that it will use an implicit aws/s3 key if you don't configure anything the point is it's optional so i want to show you how easy this is to miss in prs that might be coming through if you're a possible security reviewer here's an example of terraform this is configuring a bucket
for kms encryption and the thing to point out is right here kms encryption is configured but there's no key arn specified and if you're just glossing through stuff how many prs do you possibly look at in a day as a security reviewer you come to this and you go that looks great they used encryption that's better than we could often hope for right pass it on and it looks the same in cloudformation it's very very similar and if you end up looking at the cdk it's even more abstracted so it would be even harder to find unless you're specifically looking for this scenario
or you have like policies on the books that say kms keys must be custom and like they must be specified and you do some advanced checks which we're going to get into point being this is all very easy to miss so if you can't trust your eyes to look at it how do you actually find this stuff and it depends on the state of your aws inventory so when we were in that panic mode of like let's find everything we have that looks this way what we ended up doing was just i think because i had a github window open i just searched for the syntax i know that i was looking for and found all the
buckets that were configured that way and listed them out but you could do stuff like aws config data cloudtrail you could even brute force it with get bucket encryption calls but if your environment's large it's not going to scale very well but that's a bunch of ad hoc stuff let's do something more advanced with some more tooling so you could do something with your static analysis infrastructure-as-code tool of choice and this is actually an example of that down there this is rego for opa if you've ever written this language it's both fun and terrible this will search for a bucket that is configured this way and then you have
to go do your own investigation on like do i have a logging service interacting with it you could probably hydrate it some other way another thing that could have prevented this or helped you discover it is you could actually put a bucket policy on your bucket that restricts where keys are coming from i'm not going to try to say that entire header that's ridiculous but you can do a condition key on that header and you can check things like did it come from my account like i want all encryption keys to come from this account which technically if you're enforcing server-side encryption on an s3 bucket you should be doing a policy like that anyway because anyone
can put whatever configuration they want on a put object request so that's a good thing to do anyway so as part of preventing this something reactive you could do is an aws config rule if your organization lives in a state where it's able to make mandates like if i see something configured that i don't like aws config can come along and actually change the state of things for you that's a pretty advanced thing to be doing other tools can do it too you could use cloudtrail to try to find things and then react or a lot of people will build custom pipelines for this
kind of thing we obviously can't speak to those
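the preventive bucket policy matt described that condition key on the encryption header can be sketched like this the bucket name and key arn are placeholders and one caveat to test for yourself with StringNotEquals a put-object request that omits the header entirely is also denied which may or may not be what you want:

```python
import json


def deny_foreign_kms_keys_policy(bucket: str, key_arn: str) -> str:
    # Deny PutObject unless the request's SSE-KMS key header names the
    # key we expect, so external writers can't quietly encrypt our logs
    # with a key from some other account.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyForeignKmsKeys",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": key_arn
                }
            }
        }]
    }
    return json.dumps(policy, indent=2)
```

attach the output as the bucket policy and writes encrypted with any other key get rejected at put time instead of surprising you at read time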
so to bring it all back together we kind of gathered you here to share a story so let's look over what we've covered so we had an integration exercise which turned into a quote unquote fun adventure and i'm sure everyone in this room has a story about how an integration has gone wrong in the past what caught us a little off guard was how the folks who configured the infrastructure were trying to do the right thing they were trying to configure encryption things were seemingly working inside of their account but unfortunately it had major unintended downstream consequences which required involving aws support to get our data decrypted lastly matt just walked you through a couple of different ways you can
look for this statically either in an ad hoc fashion leveraging an iac tool of choice or just parsing cloudtrail and config data to keep track of resources that are inside of your environment but we also had a couple of takeaways that we want to discuss with the audience and kind of share some food for thought so i alluded to the shared responsibility model earlier and the concept of the customer being accountable for their side and the vendor being accountable for their side so in matt's and my experience cloud services work pretty darn well if something is going wrong inside of your aws account it's usually your fault 99% of the time so the mentality that we
were operating with in this scenario is what did we do wrong how can we figure out how things were broken and putting us in this state however this was a very rare instance which still won't change our mentality on dealing with aws issues but where involving aws support and putting the blame on the vendor earlier could have resulted in a faster resolution the second thing we wanted to note on that initial first point is you know was this behavior expected again we were in a situation where the cloud provider we were using chose a key of its own accord that we did not have access to we did not configure and we did not request to
encrypt our log data that we were unable to access without their help the second bullet point will read like common sense but we wanted to put it here anyway similar to testing your backups ensure your logs are actually accessible before you need them and it's too late a common definition of done when configuring logging services is you turn on your logging service you check the bucket where the logs are supposed to go and if you see the logs you're probably good however this was not the case in this scenario and this could be a common situation because again if you're turning on logging and you're just keeping it for
compliance reasons or you plan to ingest the data into a siem at a later date you could find yourself in a pretty bad scenario where you need to answer questions on these logs but you've never actually tested whether you can access them which is the scenario we found ourselves in so ensure your logs are accessible and make sure your definition of done includes every step of that workflow lastly matt and i combined have over 15 years of experience working with aws and we feel very confident about our abilities however due to the speed at which aws gcp azure and all the major cloud providers roll out features you can never really be 100% confident about how these services
are going to interact with each other there are just too many features rolling out if you look at re:invent for example i'm pretty sure half of my workflows get deprecated every single december so no matter how well you know how these services interact with each other really make sure you go through all the extra steps to make sure that your flow works exactly how you expect especially when you're using managed services and with that being said we would like to thank you very much for coming to our talk and open up the floor to any questions [Applause]
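The static check the speakers recap above can be sketched in code. This is a minimal illustration, not their tool: the dict shape mirrors what boto3's `get_bucket_encryption` returns for a bucket's `ServerSideEncryptionConfiguration`, and the example configurations are made up.

```python
# sketch: flag buckets whose default encryption requests SSE-KMS without
# pinning an explicit key ARN, which (per the talk) makes S3 fall back to
# the aws/s3 managed key that external log writer services cannot use.

def is_risky_sse_config(encryption_config: dict) -> bool:
    """True if any rule asks for aws:kms with no explicit KMSMasterKeyID."""
    for rule in encryption_config.get("Rules", []):
        default = rule.get("ApplyServerSideEncryptionByDefault", {})
        if default.get("SSEAlgorithm") == "aws:kms" and not default.get("KMSMasterKeyID"):
            return True
    return False

# SSE-KMS with no key ARN -> falls back to the aws/s3 key: risky for log writers
risky = {"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]}

# SSE-KMS pinned to a customer managed key in your own account: fine
safe = {"Rules": [{"ApplyServerSideEncryptionByDefault": {
    "SSEAlgorithm": "aws:kms",
    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example"}}]}

print(is_risky_sse_config(risky))  # True
print(is_risky_sse_config(safe))   # False
```

In a real environment you would feed this the output of `get_bucket_encryption` per bucket; the pure function above just makes the pattern being hunted explicit.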
uh i believe the question was did amazon decrypt our logs and then transfer them in plain text is that right i think the question was on the details of how amazon decrypted our data um it was not transferred in plain text and everything was up to the standards we would like the data never left our account
sorry you might have to come up and speak into the mic sorry my question was that usually for encrypting data at rest the way that gcp or aws work is that they use a method called envelope encryption so the key and the information around the key is stored along with the object i'm wondering whether that information could help you to see what the actual key was and stuff like that can you repeat the question i didn't catch all of this
um so the question was about envelope encryption and how aws and gcp and these services do that under the hood and like david and i said we're not employees of aws or privy to exactly how that works and whether or not it's envelope encryption i don't know off the top of my head right now what they're doing on the back end um but i don't think any sort of enveloping would have helped in this case especially because the crux of the problem was that we couldn't access the key anyway so um it might have been easier for them and not us if they needed to do
something like rotation but this didn't involve any rotation what they needed to do was get the data re-encrypted with a key that we own not one that they own so whether they're doing any enveloping or anything like that on the back end i don't think would have impacted us at all
so the object showed what the key was so we knew what the key was we just couldn't access it we're guessing that this key is the actual aws slash s3 alias key in the amazon account so we think that the log writer and s3 interact in a weird way where it goes and pulls that key which as david has said a bunch of times now until we're blue in the face has a policy that does not allow outside account access so any of the nuances of how they're doing any envelope encryption at all just don't matter
because we can't access the key anyway and they're rightfully not going to change a key policy to let us access that right so does that answer your question
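To make the envelope-encryption point in this exchange concrete, here is a deliberately simplified toy sketch of the pattern the questioner describes: a per-object data key encrypts the object, and the data key is itself wrapped by the KMS master key and stored alongside the object. The XOR keystream below is NOT real cryptography; it only stands in for AES to show the structure, and it illustrates why enveloping doesn't help here: unwrapping still requires access to the master key.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR data with a SHA-256 counter stream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    data_key = os.urandom(32)                          # per-object data key
    ciphertext = keystream_xor(data_key, plaintext)    # object encrypted with data key
    wrapped_key = keystream_xor(master_key, data_key)  # data key wrapped by master key
    return ciphertext, wrapped_key                     # both stored with the object

def envelope_decrypt(master_key: bytes, ciphertext: bytes, wrapped_key: bytes):
    data_key = keystream_xor(master_key, wrapped_key)  # requires master key access
    return keystream_xor(data_key, ciphertext)

master = os.urandom(32)
ct, wrapped = envelope_encrypt(master, b"vpc flow log line")
assert envelope_decrypt(master, ct, wrapped) == b"vpc flow log line"
# without access to `master` (here, the key in amazon's account) the wrapped
# data key sitting next to the object is useless, which matches the answer above
```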
uh so the question is does this happen only if you use the default key and it happens if you specifically specify sse-kms encryption and you do not specify a kms key so i guess to put it more explicitly if you specify sse-kms but the key that you ask for is that special aws slash s3 key um you will be put in this condition with a log writer yes it's the default key that amazon provides that works fine as long as all of the requests are coming from inside your own account because it's interacting with a key um that you can actually go see like
you can go click on that key and see the key policy for it in the console ui it'll be grayed out because you can't edit it it's a completely managed policy um so when things from the outside come in it fails and it failed in this weird way we saw um i kind of glossed over this in a slide i don't think i actually said it um you actually cannot get yourself into this configuration via the console it holds your hand the console does weird stuff sometimes it's not one-to-one with api calls or cli calls so it would actually be easier to enter this scenario if you are using iac like
terraform or cloudformation so does that answer your question at all yeah okay i think you're going to have to come up to the mic
so the question is could you leverage something like organizational scps or service control policies to kind of prevent a situation like this so pretty much using a centralized set of rules that if you are using aws organizations will apply to all api calls inside of your child accounts um matt i think we looked at this yeah that's something that we explored um but due to how specific that header was we weren't 100% positive on whether we would be able to write an scp to specifically block that kind of behavior we also were worried about the general ramifications of something like that happening to the entirety of every child
account in the org because this use case is a little bit specific right you have this configuration which could technically be valid but it only starts to fall apart when you have that third party log writer outside of your own account so i think something that we would push for instead would be leveraging those kind of static controls at pr time and detective controls at deployment time because it could be perfectly acceptable for someone to configure a bucket like this and if it's just operations inside of my account the data will still be encrypted at rest and everything will be okay i can't hear you i'm really sorry
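The "static controls at pr time" idea can be sketched as a crude scan of terraform source for default-encryption blocks that request aws:kms without a key ARN. A real check should parse HCL properly (for example with python-hcl2, or by inspecting a terraform plan); this regex version is only illustrative and the sample resource is made up.

```python
import re

# matches the body of each apply_server_side_encryption_by_default { ... } block
BLOCK = re.compile(r"apply_server_side_encryption_by_default\s*\{([^}]*)\}", re.S)

def risky_sse_blocks(tf_source: str) -> int:
    """Count default-encryption blocks that will fall back to the aws/s3 key."""
    count = 0
    for body in BLOCK.findall(tf_source):
        if '"aws:kms"' in body and "kms_master_key_id" not in body:
            count += 1
    return count

sample = '''
resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"   # no key arn -> falls back to aws/s3 managed key
    }
  }
}
'''
print(risky_sse_blocks(sample))  # 1
```

A check like this can run in CI on every pull request so the configuration is flagged before it ever reaches an account, which is exactly the static-control point being made above.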
i'm sorry you never have access to the data again without talking to support right yeah correct so the question was would you ever be able to access the data again without talking to support and we needed to involve support in order to get this data decrypted because again that encryption key that was being used was outside of any of the accounts that we owned yeah so like we did everything we tried to access the key we did get object calls specifying that exact key but it was all the same access denied error because when you get down to it it's a key in their account and the key
policy says no so you can't right
the question was because it was reproducible did amazon treat this as a bug um technically matt and i ourselves don't consider it a bug we think it's just unexpected behavior if you dive really deep into the documentation you'll notice references saying that hey this kind of configuration for the log writers isn't really supported the key point we wanted to call out in this presentation is how the operation fails it's a silent failure where your data is encrypted versus the operation just stopping and there are no logs in there so correct you would never know until you go back which is why it's really important to make sure the logs
that you are ingesting are logs you can access which sounds silly but is a step we failed to take here uh but to answer your original question we don't even think it's a bug um we obviously worked with aws support to kind of recover the data but we're not privy to aws's thinking yeah
oh can it happen with rds is that the question um the question is can it happen with other services that support using kms for backing their encryption like rds or i think ec2 like lots of things do right um and in our experience we saw it only on the log writer services so vpc flow logs is not the only thing we saw it for i tested it for s3 server access logs i tested it for firewall logging which means although i didn't test it with everything it appears to be a generalized problem with the logging services themselves anything that uses that abstracted service to log stuff to your s3 buckets will have this
problem so i can't be sure but i don't think it would have that issue because typically anything rds is going to be using something from your account like rds using the default aliased aws key that's going to work fine unless the documentation says it doesn't yeah i know i hedge everything um but it's definitely going to work with custom kms keys right like i would expect rds to interact fine i actually tested setting that json configuration on the bucket encryption where there was no arn specified and we never saw it fail on any other service interactions except for the log writers that doesn't mean there isn't something out there but i didn't comprehensively test
you know 300 amazon services that might go do something with kms right so i would expect it to work fine the one takeaway of what you should go look for in your environment on what might be broken right now is that configuration where no key arn is present and any of the logging services are utilizing it so i guess there might be some nuance there in how rds does some of its logging like you can configure postgres and all that stuff to go to certain places i don't know if you can configure s3 for that it's definitely something to look at but i'm not sure sorry it's kind of a weak answer
would you be able to audit for the config rather than like setting the config policy to stop it from going forward is there an easy way to audit that config going backwards
so that would come down to the slide that said it depends on your aws inventory like if you just want to check things that are configured that way it's wherever is convenient for you wherever that information is um if you continuously collect something like cloudtrail though i'm almost certain that's going to be in there and the policy that gets set is likely going to be in there so if you've got that collected somewhere that's probably going to work but again it comes down to tooling what do you have what data can you look at thank you very much again to everyone who stayed and asked questions we really
appreciate it uh we'll be around for the rest of the conference and if you have additional follow-ups please feel free to reach out on linkedin thank you
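The backwards-looking audit discussed in that last question can be sketched as a pass over collected CloudTrail records: find `PutBucketEncryption` calls that set aws:kms without a key ARN. Treat the exact field nesting of `requestParameters` below as an assumption to verify against your own logs, and the sample record is invented for illustration.

```python
def flag_risky_put_bucket_encryption(records):
    """Return bucket names from PutBucketEncryption events that request
    aws:kms with no explicit KMSMasterKeyID (falling back to aws/s3)."""
    flagged = []
    for rec in records:
        if rec.get("eventName") != "PutBucketEncryption":
            continue
        params = rec.get("requestParameters") or {}
        sse = params.get("ServerSideEncryptionConfiguration", {})
        rules = sse.get("Rule", [])
        if isinstance(rules, dict):   # a single rule may appear as a dict
            rules = [rules]
        for rule in rules:
            default = rule.get("ApplyServerSideEncryptionByDefault", {})
            if default.get("SSEAlgorithm") == "aws:kms" and not default.get("KMSMasterKeyID"):
                flagged.append(params.get("bucketName", "<unknown>"))
    return flagged

# made-up example record for illustration
records = [{
    "eventName": "PutBucketEncryption",
    "requestParameters": {
        "bucketName": "example-log-bucket",
        "ServerSideEncryptionConfiguration": {
            "Rule": {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        },
    },
}]
print(flag_risky_put_bucket_encryption(records))  # ['example-log-bucket']
```

Pointed at wherever you already aggregate CloudTrail (Athena, a SIEM, flat files in S3), a scan like this answers the "can I audit going backwards" question without needing to touch every bucket live.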