
AMAZON FORENSIC PLATFORM: SCALING YOUR DIGITAL FORENSICS - Pratik Mehta

BSides Sydney · 40:32 · 447 views · Published 2019-09 · Watch on YouTube ↗
About this talk
Pratik Mehta is a Security Engineer on Amazon's Forensic and Malware team. The scale and diversity of Amazon's infrastructure intensify common challenges in incident response and digital forensics. Heterogeneity of device type and location, coupled with tooling fragmentation across new and legacy systems, created inefficiencies and frustration alike. To address this, the team built a central platform for forensic analysis that is resilient against these challenges.
Transcript [en]

Good afternoon everyone, and thanks for staying back until 4 or 5 p.m. on a Saturday afternoon; I appreciate that. I'm going to talk about the Amazon forensic platform and how we scale our digital forensics. A bit about me: I work as a security engineer in the Forensic and Malware team, which is part of the Amazon Security Operations Center (ASOC). Prior to Amazon I worked at Cisco, and before that I had roles spanning network engineering, and somehow I've ended up in information security and in forensics and malware.

The way we've laid out the agenda is basically an introduction to what this talk is about, and then three eras of our tooling: how we did things in the old days, what we are doing now, and what the future looks like. At Amazon it's always "Day One", because we still think of ourselves as a big startup, so even what we are doing now is framed that way, and that's pretty much the story.

So what is ASOC and what does it comprise? We have three teams: incident response, forensics and malware, and detection and monitoring. All three teams are spread across three regions, Seattle, Sydney and Dublin, to cover a global follow-the-sun model.

We take care of all the incident response activities for the Amazon retail website. Our focus is not the AWS side of things, in the sense of the AWS services that power the AWS cloud; our function is mainly the Amazon consumer or retail business. The scope of this talk is the response side of things, and to be precise, forensic acquisition and analysis. We won't be talking about detection and how something makes it to forensics, and although response generally also covers remediation, we won't be talking about remediation either, because that's an entirely different story. Having said that, it is our team's goal to own remediation too, but this talk is about scaling: how we acquire things at Amazonian scale and how we analyze what we have acquired. Remediation we are keeping out of the workflow for now.

So why this talk, what does scale mean for Amazon, and what is the challenge? We have offices and fulfillment centers, which are responsible for shipping things, in many, many countries. Across those offices and fulfillment centers we have hundreds of thousands of hosts on the corporate network: laptops and desktops running Windows, Mac, Linux and whatnot.

The more challenging thing is that the amazon.com website itself runs in the AWS cloud, so we have millions of servers powering amazon.com and everything around it. Our other subsidiaries, the companies we acquire, are all moving into the cloud too, so it's another responsibility for us to give them the capability to acquire either from a corporate network or from the AWS cloud.

So what is Day Zero? Day Zero is long, long ago, when we did everything manually. It was mainly cold forensics, not really live forensics at that point. I'll talk about some of the tooling we had and how we moved from manual analysis into automation.

Day Zero, many years ago: everything was manual. Someone would take a dd image, copy it onto an external drive and upload it to a central location; that would be somebody's server, an FTP server managed by someone, or something in a particular cloud. So it would take a long time: first get the dd image, then upload it, then copy it onto an analysis machine. Before we could actually do any analysis it would take about 20-25 hours to get everything in line, and if our IT team or our forensic team was not in that particular geographic location, people would ship the laptops or hard drives across regions, so we might wait something like four weeks just to get hands on a drive and then start the upload. It was all very manual, using whatever tools were in our arsenal: if an engineer wanted to use dd, fine, go for it; if they had another commercial tool, freeware or paid, they would use that. It was not standardized, and this was many years ago, so there were a lot of pains: everything manual, no standard process, and sometimes we didn't even have write blockers.

we didn't have right blockers or anything like that so we got together uh we as in like you know people who work before me at amazon they got together and came up with uh you know developing a particular tool and a process to acquire this in a much more simpler way and in in a more unified way so this is one of the tool which was developed uh in in python uh and it would what it would do is it will upload directly to an s3 bucket so it will not store anything locally or into any external drive it would straight start reading from the disk which is attached via a right blocker and upload it straight into

s3 bucket so the entire process was basically you know you get a drive or a laptop you unplug the drive put it into a right blocker and you basically run this software with some configuration upload to s3 why s3 because one of the greatest thing we found when you know this feature of multi-part was rolled out into s3 is that you could chunk up your object or a file into certain parts and you could upload those parts only so for example you could chunk up uh physically saying that you don't split a file or logically so for example if you have a drive uh if let's say it's 250 gig so you would first chunk up let's say 5gb

chunks if you want to do so maybe 2000 3000 chunks and you start reading those chunks and straight away uploaded to s3 so given this opportunity what would happen is that if for some reason [Music] that the upload was failing we've had the opportunity to resume it from very left top so if it had uploaded 50 parts and after that it failed the local machine where it was copying and running polaroid would have a have an update saying that we have already uploaded 50 parts and i need to resume from the 51st part onwards so it was failure protection as well as we could pause it for some reason we want to pause and run

something different on that particular machine we could pause that copy and resume it after after an hour and we will begin from the 51st part so this is basically a massive uh feature from aws s3 bucket uh and those are not aware of you know what are the quirks around this basically the minimum uh multi-part like one chunk you can upload is 5gb and i think it goes up to probably one terabyte chunk so how this is all works uh is basically this is the flow diagram of polaroid so when we start reading a disk we first create a chunk map as such and every chunk is hashed and end of the entire entire disk it is
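
To make the mechanism concrete, here is a minimal sketch of the kind of chunked, resumable multipart upload described above. This is not Amazon's code: the bucket, key, chunk size and state file are assumptions for illustration, and the real tool tracks considerably more (the full chunk map, per-chunk hashes, configuration, and so on).

```python
import hashlib
import json
import os

import boto3

s3 = boto3.client("s3")

BUCKET = "forensic-acquisitions"      # assumed bucket name
KEY = "case-1234/host-disk.dd"        # assumed object key
CHUNK_SIZE = 64 * 1024 * 1024         # 64 MiB parts (>= 5 MiB except the last part)
STATE_FILE = "upload_state.json"      # local record used to resume after a failure


def load_state():
    """Return saved upload state (upload id + completed parts), or start a new upload."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    upload = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)
    return {"upload_id": upload["UploadId"], "parts": []}


def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)


def upload_disk(device_path):
    state = load_state()
    done = {p["PartNumber"] for p in state["parts"]}
    whole = hashlib.sha256()

    with open(device_path, "rb") as disk:
        part_number = 0
        while True:
            chunk = disk.read(CHUNK_SIZE)
            if not chunk:
                break
            part_number += 1
            whole.update(chunk)
            if part_number in done:
                continue  # already uploaded before the interruption; skip it
            resp = s3.upload_part(
                Bucket=BUCKET, Key=KEY, UploadId=state["upload_id"],
                PartNumber=part_number, Body=chunk,
            )
            state["parts"].append({"PartNumber": part_number, "ETag": resp["ETag"]})
            save_state(state)  # persist progress so a restart resumes from part N+1

    s3.complete_multipart_upload(
        Bucket=BUCKET, Key=KEY, UploadId=state["upload_id"],
        MultipartUpload={"Parts": sorted(state["parts"], key=lambda p: p["PartNumber"])},
    )
    return whole.hexdigest()  # hash of the entire disk, computed alongside the upload


if __name__ == "__main__":
    print(upload_disk("/dev/sdb"))  # device attached behind a write blocker
```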

Now the interesting part: once everything has been uploaded to S3 with multipart, we also attach metadata to the object. This is another S3 feature; you can attach whatever key-value metadata you want to an object. So we attach the hash of the entire drive, our incident IDs, ticket numbers and any other useful information, and store it all in S3. Another nice property of S3 is durability: S3 gives you something like eleven nines of durability, so we can keep storing data without worrying about expanding disks after we've uploaded hundreds of drives. With an FTP or SFTP server, after a point you either have to delete data or migrate it off to other drives, RAID, NFS, whatever; with S3 we don't have to worry about that, AWS manages it. So the multipart capability and the managed storage are the things that really helped Polaroid, and the acquisition time came down from 25-30 hours to about 10 hours.
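
As a small illustration of the metadata idea, here is a hedged sketch of attaching a drive hash and incident identifiers as S3 object metadata at upload time and reading them back later; the key names and values are made up for the example.

```python
import boto3

s3 = boto3.client("s3")

# Attach case metadata when the multipart upload is created; the same
# Metadata argument also works for a plain put_object call.
upload = s3.create_multipart_upload(
    Bucket="forensic-acquisitions",            # assumed bucket
    Key="case-1234/host-disk.dd",              # assumed key
    Metadata={
        "sha256": "9f2c...e1",                 # hash of the entire drive
        "incident-id": "INC-1234",             # internal ticket / incident number
        "acquired-by": "it-support-sydney",
    },
)

# Once the upload has been completed, the metadata comes back with a HEAD request.
head = s3.head_object(Bucket="forensic-acquisitions", Key="case-1234/host-disk.dd")
print(head["Metadata"])   # {'sha256': '9f2c...e1', 'incident-id': 'INC-1234', ...}
```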

In practice we would cut a ticket to our IT support saying, hey, you need to do this for this particular host; they would attach a write blocker and start the process, and once it completed we would get a notification and could go further with our analysis. But as with every tool and every piece of automation, there are pains. This solved the first problem of being less manual than before, but it was still semi-manual, and there was a big dependency on IT support: we had to wait for their resources to be free and make sure the right write blockers and adapters were available to them. And 10 hours was still quite a lot; we wanted to reduce it further. Write blockers and SSD adapters also started becoming expensive: with simpler SATA drives it was all fine, but then came the NVMe PCIe SSDs, and a write blocker for those ran something like five or six hundred dollars a piece, so that was getting costly as well.

The other problem was that some laptop vendors, Apple for example, released newer systems where you cannot pull the drive out at all, and if you can't pull the drive out, how do you image it? You have to do something else entirely. So that was the acquisition part of Day Zero, the cold forensics side. The other side is how we analyzed those acquired disks. At any point we might have 15-20 disk images a month coming in, and with a skeleton crew it's very difficult to analyze all of that manually; even with all the awesome commercial software it is still time-consuming. We needed something that would let us at least triage quickly: once an image was acquired, what if a workflow did the basic things — registry parsing, a timeline, prefetch, browsing history — so that whether you're looking at malware or some e-discovery kind of request, you have a timeline and the basic artifacts to triage with? For most NTFS or Windows-related analysis you would start with the timeline, browsing history or registry anyway, and most malware can be triaged from those three or four artifacts; you don't have to do everything.

That was the goal of Accidental Tourist, a piece of software written many years ago. It was basically a bunch of bash scripts and Perl scripts stitched together, and it worked for us. Now, we like automation and we like to make things simpler, so how do Polaroid-acquired images and Accidental Tourist work together as automation? If you look at the diagram, you have an Amazon office which has a drive, and they are using Polaroid to upload it into S3.

What happens is that we tell the IT support folks exactly which bucket to upload into. For example, we keep data from the EU, the US and a couple of other regions separate where data privacy laws are strict; data copied in the EU never leaves the EU, it stays there, and similarly any analysis of it does not leave the EU. So from the Amazon office the image goes to the bucket in whichever region we have configured. As soon as the upload lands, there is an SNS notification or trigger — SNS is an AWS service that just sends notifications saying, hey, something happened — and the S3 bucket notifies another AWS service called Lambda. Lambda is basically code you have written that executes when it is invoked or triggered, without you having to maintain any infrastructure to run it. As the drive hits the bucket we also add the BitLocker recovery key, because across Amazon everything — Windows, Linux, Mac — is full-disk encrypted, so we need the key. Once that is done, the Lambda kicks off, starts an EC2 instance and transfers the data from the S3 bucket onto the analysis instance.
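
A minimal sketch of that trigger, assuming a hypothetical analysis AMI and bucket layout; the real pipeline obviously also handles decryption keys, tagging and error handling that are out of scope here.

```python
import boto3

ec2 = boto3.client("ec2")

ANALYSIS_AMI = "ami-0123456789abcdef0"   # assumed AMI pre-baked with the analysis scripts
INSTANCE_TYPE = "m5.2xlarge"             # assumed instance size


def handler(event, context):
    """Invoked by the S3 event notification when a new disk image lands."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # User data pulls the image down and kicks off the triage scripts on boot.
    user_data = "\n".join([
        "#!/bin/bash",
        f"aws s3 cp s3://{bucket}/{key} /forensics/disk.dd",
        "/opt/accidental-tourist/run_triage.sh /forensics/disk.dd",  # hypothetical path
    ])

    ec2.run_instances(
        ImageId=ANALYSIS_AMI,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,              # one instance per drive, so cases stay isolated
        UserData=user_data,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "forensic-image", "Value": key}],
        }],
    )
```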

Because everything is inside AWS, the data transfer is very quick; in about an hour or less the 250 GB drive is copied across to the EC2 instance. As soon as it is copied, the Accidental Tourist scripts kick off the analysis for the timeline, registry, browsing history and prefetch, and once all those artifacts are parsed they are compressed into a zip and uploaded back to another S3 bucket, from which you can download everything offline and review it for your triage. This reduced the time we spent copying data from a bucket or any other source onto an analysis instance, and it also meant one instance per drive, so we didn't contaminate or mix analysis data. Once all of that is finished, people can run another Accidental Tourist script that backs everything up: you say, okay, I'm ready to close this case out, it hashes all the artifacts and uploads everything to another S3 bucket for long-term storage.
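
The original Accidental Tourist was bash and Perl; purely as an illustration of the triage step it describes, here is a rough Python equivalent that builds a filesystem timeline with The Sleuth Kit, zips the results and pushes them back to S3. Paths, bucket names and the choice of artifacts are assumptions.

```python
import subprocess
import zipfile
from pathlib import Path

import boto3

IMAGE = "/forensics/disk.dd"                  # image copied down from S3
OUTDIR = Path("/forensics/triage")            # where parsed artifacts land
RESULTS_BUCKET = "forensic-triage-results"    # assumed bucket


def build_timeline():
    """Filesystem timeline via Sleuth Kit: fls bodyfile -> mactime CSV."""
    OUTDIR.mkdir(parents=True, exist_ok=True)
    body = OUTDIR / "bodyfile.txt"
    with open(body, "w") as f:
        subprocess.run(["fls", "-r", "-m", "/", IMAGE], stdout=f, check=True)
    with open(OUTDIR / "timeline.csv", "w") as f:
        subprocess.run(["mactime", "-b", str(body), "-d"], stdout=f, check=True)


def package_and_upload(case_id):
    """Zip the triage output and stage it for offline review."""
    archive = Path(f"/forensics/{case_id}-triage.zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
        for path in OUTDIR.rglob("*"):
            z.write(path, path.relative_to(OUTDIR))
    boto3.client("s3").upload_file(str(archive), RESULTS_BUCKET, archive.name)


if __name__ == "__main__":
    build_timeline()
    # registry, browsing history and prefetch parsing would slot in here
    package_and_upload("case-1234")
```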

There we can store data relatively cheaply for up to a year easily, without worrying much about cost. But there are pains with this style of Accidental Tourist analysis too. First of all, it supported only NTFS drives, because Windows was the primary focus at that point; a few years ago Windows was the primary OS on our laptops and desktops. Over the years we also learned that Windows matured in terms of providing more artifacts with forensic value, so we reached a point where registry and browsing history parsing felt limited: we wanted more, like parsing EVTX logs, looking through the recycle bin and making sense of what was deleted. The timeline we created came from the Sleuth Kit's fls, so the other artifacts we parsed were not folded into the timeline the way log2timeline would do it. And the majority of the code was written in bash; as much as people love long bash one-liners over fancier languages, bash has its limitations and it's slow. On the analysis side, one of the major things we lacked, specifically for e-discovery, was full keyword search: the ability to index the entire drive and search across it. As more and more incidents happened and more people wanted things analyzed, the cost kept increasing. One of the leadership principles at Amazon is frugality, so as much as possible we don't want instances running for a year when we don't need them. We had this one-disk-one-EC2-instance model and we wanted to change it, to do things more efficiently while staying frugal.

Those were the main pain points. Now we're at the present day — although I can't really say this is "now", because it's something we have been doing for many years. Day 0.5 is mainly about live response: how we evolved from simple tooling to endpoint agents, and how we built automation around them. What is live response for us? Very simply put, the ability to acquire either raw artifacts or certain parsed artifacts. The things Mike mentioned about Velociraptor all resonate with us; this is what we want to do, and I wish we'd had Velociraptor three or four years ago, it would have solved a lot of problems for us. So what were the first steps towards live response? As I said, we had mainly been grabbing drives and analyzing full disks. If we had a piece of malware we were not sure about, acquiring the entire drive just to pull one file was not a good return on investment, whether in the cost of spinning up resources or in engineer hours spent on that kind of analysis.

So to reduce the time spent specifically on malware analysis, and to focus on PE files, a tool called Golden Retriever was written, with a UI where people can specify the hostname and the full path of the file they want, and it will be acquired. The files were limited to PE files only, because at that point we were just starting with live response, working through it with legal, and we had to be a little cautious not to pull unnecessary personal data files. So it was limited to PE files, and it used SMB to grab them. It was a very simple service, but it cut out maybe ten percent of the full-disk acquisitions we had been doing. What were the pains? SMB, Windows only, PE files only, and no API: every time you wanted something you had to go to the portal, enter the path, wait for the host to upload the file back to the portal, and then download it from the portal.
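
Purely to make the idea concrete, here is a hedged sketch of remote file collection over SMB using the third-party smbprotocol package; the admin-share path, credential handling and upload target are assumptions, not a description of Golden Retriever's internals.

```python
import boto3
import smbclient   # high-level client from the third-party "smbprotocol" package

EVIDENCE_BUCKET = "malware-collection"     # assumed bucket


def grab_file(hostname, remote_path, username, password):
    """Pull one file from a Windows host over SMB and stage it in S3."""
    smbclient.register_session(hostname, username=username, password=password)

    # e.g. r"C:\Users\Public\suspect.exe" -> \\host\C$\Users\Public\suspect.exe
    drive, rest = remote_path.split(":", 1)
    unc = rf"\\{hostname}\{drive}$" + rest

    with smbclient.open_file(unc, mode="rb") as f:
        data = f.read()

    key_path = remote_path.replace("\\", "/").lstrip("/")
    boto3.client("s3").put_object(
        Bucket=EVIDENCE_BUCKET,
        Key=f"{hostname}/{key_path}",
        Body=data,
    )
```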

It did have niceties, like compressing files into password-protected archives, but there were still pains, the lack of an API in particular. So we went on a journey to see if we could find an endpoint agent that would solve these problems across Windows, Linux and macOS. One of the things we wanted from our live response tooling was minimum infrastructure: we did not want a heavy server-and-client deployment, because to support the Amazon corporate network, with hundreds of thousands of hosts, we would have needed a lot of servers deployed across geographies, plus CDNs and so on to make it work reliably. So, minimal infrastructure maintenance. The other main capability we wanted was executing custom scripts and custom binaries: if we want to write a PowerShell script on the fly, deploy it to a host and do something with it, we should be able to. And, very important, API support: we don't want to live in UIs, as many vendors would have you do; we wanted an API so we could tie the agent into whatever workflow we wanted.

To give you a sense of the scale we wanted: if I want to pull, say, an MFT file from 10,000 hosts that are online, I should be able to do that and have them uploaded straight to an S3 bucket; we don't want any of it going through FTP servers, everything should go through S3 buckets. I don't know if you can read this slide, but these were some of the detailed requirements we put together: the ability to query data on the host, run a query and grab artifacts; get raw artifacts like registry hives, browsing history files, MFT tables and so on; customize and run scripts; do memory forensics, both full memory dumps and, Rekall-style, acquiring a particular process rather than the full dump; plus data processing, minimal architecture, authentication, and so on.

We looked at a few open source and commercial products. At that point there were many nice commercial products, but most of the endpoint agents were moving into the detection space — what has since become EDR, XDR or whatever fancy name — and our problem was response, not detection: getting the artifacts. We found one commercial endpoint solution; unfortunately I can't name them, though from some of the things I mention during the talk you might be able to guess who we're talking about. They give us an API; minimal server maintenance, so we run our whole fleet of endpoint agents with a really small number of servers; script execution; and scheduled deployments, so if a host is offline you can deploy a package or an action saying "grab this particular thing when the host comes online", and when it comes back 24 hours later you get the data. That was a really good evaluation result for us. I've mostly mentioned commercial solutions; at that point forensic-focused solutions were very few.

One was GRR (Google Rapid Response), which really was focused on response and forensics; however, at that point GRR was very much tied to a client-server architecture with your own servers, rather than giving us the ability to change things around, deploy at AWS scale and build on S3 or some other microservices architecture, which is why we didn't move forward with it. So now we have this endpoint fleet and live response tooling deployed; it lets you query hosts and deploy scripts and your own binaries. On top of it we wanted certain capabilities, and one of them was: how about grabbing an entire partition while the host is online, through live response?

Inspired by the good parts of Polaroid, like multipart upload, we wrote a tool in Go — Go is much faster — which acquires the entire partition or disk, depending on the OS, and uploads it to S3. This Goldroid binary works across Windows, Linux and Mac, and the initial benchmark was a 250 GB drive uploaded in three hours; going from eight to ten hours down to three is a pretty big win. Another good thing, or bad thing depending on how you look at it: because we are acquiring from the live system, the drives are all decrypted. That is a difficult discussion, because we are grabbing unencrypted drives and storing them somewhere, so there is a risk there, but it has worked wonders for us: at a site or a subsidiary where we don't have dedicated IT support, we just deploy this and grab the entire partition or disk. Of course live response has limitations, and Goldroid itself has limitations: you are reliant on the quality of the internet connection and on the host staying available.

You might have 200 GB of data uploaded and suddenly the user closes the lid; you've lost it and you can't do anything about that. That is one of the pains. Resume capability is built into Goldroid as it is in Polaroid, but because we are running it live we don't always have good access to what has been uploaded so far, so the resume capability is limited. And the way the commercial endpoint agent works, it deploys an action and kicks off the Goldroid process, so we can see that a package was deployed and Goldroid was executed, but if it failed, paused or ran into errors we don't know, so we had limited visibility into what was actually uploaded. Moving forward, full drives were useful, but we also wanted something specifically for pulling a particular set of files, so we wrote File Retriever, taking the good parts of Golden Retriever, like specifying a path, but again written in Go. A user can specify a full path, it is not limited to PE files — any file path can be retrieved — it gets uploaded to an S3 bucket, and it works on Windows, Linux and Mac.
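
As a sketch of that idea only — not the actual tool, which is written in Go — retrieving an arbitrary local path and uploading it to S3 can be as simple as:

```python
import socket
from pathlib import Path

import boto3

EVIDENCE_BUCKET = "live-response-artifacts"   # assumed bucket


def retrieve(path):
    """Upload one local file to S3 under a host-scoped key."""
    src = Path(path)
    hostname = socket.gethostname()
    key = f"{hostname}/{src.as_posix().lstrip('/')}"

    # upload_file switches to multipart automatically for large objects
    boto3.client("s3").upload_file(str(src), EVIDENCE_BUCKET, key)
    return key


if __name__ == "__main__":
    print(retrieve("/var/log/auth.log"))   # any path, not just PE files
```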

So now we have the capability to get a full image if we want, and the ability to get a particular file from a host, whether it's Mac, Linux or Windows. The third capability came mainly as part of the commercial endpoint solution's live response: you specify configuration, or collector files, where you declare what you want as a bundle — I want the registry hives, I want browsing history, I want the MFT — and it grabs all of those at once and uploads them to an S3 bucket. Again, we like S3 buckets for the flexibility; you'll notice through the talk that everything ends up in an S3 bucket. This is an example of what the collector configuration looks like: it supports environment variables, it supports regex, and you can say how far to recurse down into a particular folder. So it's not just single files, we can collect files and whole folders, and you can see here the maximum number of files we want is three; if there are more matches than that we're not interested, so once the regex has picked the first three files it's done.
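
The vendor's actual format isn't something I can reproduce here, but a hypothetical collector entry with the fields described above might look roughly like this (all field names invented for illustration):

```python
# Hypothetical collector entry: grab up to three prefetch files from the host.
collector_entry = {
    "name": "prefetch",
    "path": "%SYSTEMROOT%\\Prefetch",   # environment variables are expanded on the host
    "pattern": r".*\.pf$",              # regex applied to file names
    "recurse_depth": 1,                 # how far to descend below the base folder
    "max_files": 3,                     # stop after the first three matches
    "destination": "s3://live-response-artifacts/{hostname}/prefetch/",
}
```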

You can change those fields. Another good thing about this live response agent is that it supports an API, and we wrote a wrapper around that API. The wrapper takes care of deploying Golden Retriever, File Retriever or the artifact collectors, so people don't have to go through the portal and fill in parameters; most things are specified on the CLI, the CLI gathers the credentials it needs and deploys to the host, telling it to go upload this particular file via Goldroid or whatever, and it gets uploaded to S3. So again, moving towards more automation with the live response toolset.

So how did we do live response analysis? What did we do with the raw artifacts captured into the S3 bucket, and how did we analyze them? Basically, for whatever is uploaded into S3 we run AWS Lambda jobs. Just as with Polaroid, where a file hitting the S3 bucket triggered some code, the same applies here: for example, if a registry file lands, the Lambda job looks at it and starts the analysis. The lessons we learned: the flexibility of executing scripts live is powerful — we have used it for things like disabling a particular host, where a PowerShell script is written, deployed, and we're done — and a microservices-style architecture worked really well for us; when I say microservices, we're still using EC2 instances for some of the analysis rather than having everything broken down into small units of code. The pains: microservices have limits, for example Lambda can only run for 15 minutes, so if you have to do something compute-intensive Lambda won't work and you have to reach for something else.
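
A hedged sketch of that per-artifact trigger, with the key prefixes and parser functions invented for illustration:

```python
# Map a key prefix to the analysis we want to run on that artifact type.
def parse_registry(bucket, key): ...
def parse_browser_history(bucket, key): ...
def parse_mft(bucket, key): ...

ROUTES = {
    "registry/": parse_registry,
    "browsing/": parse_browser_history,
    "mft/": parse_mft,
}


def handler(event, context):
    """Triggered by S3 when a live-response artifact lands in the bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        for prefix, parse in ROUTES.items():
            if key.startswith(prefix):
                parse(bucket, key)   # must finish within Lambda's 15-minute limit
                break
```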

We also needed better end-to-end tracking, from acquisition through analysis, and people started to say, hey, we have too many tools, which one do we use to acquire which thing? That got us to Day One, which is what we are doing now: the actual forensic platform. Right now we have the ability to extract individual files through File Retriever, artifact bundles through the collectors, full disks through Goldroid and Polaroid, and for analysis we have Accidental Tourist and the AWS Lambdas we have written. Before I go further into the corporate side of things, the laptops and all of that,

I want to talk about how we analyze EC2 snapshots. This is a service called Kookaburra, inspired by an existing open source project from AWS for EC2 clean-room forensics. The idea is that you have an API Gateway — another AWS service — and you just pass it JSON, which kicks off workflows using Lambdas, attaches your snapshot and does the forensic analysis on it. For the analysis we are currently focused on timelines for Linux drives, ext4, ext3, XFS and so on, and extracting certain artifacts: run log2timeline to create timelines and file stat information, and pull certain configs, because in Linux everything is a file — you want to see what's in /etc/hosts, what modules are loaded, and so on. We want to try and make this service open source; I'm not promising anything, we're working on that part, but we should be able to open source it at some point. This is a diagram of the APIs; there are three listed here. One is snap: you pass in your incident ID, the snapshot ID and the region,
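
As an illustration only — the endpoint, field names and auth are assumptions, not the real Kookaburra contract — the kind of call this describes looks like:

```python
import requests

# Hypothetical API Gateway endpoint for the "snap" API.
API = "https://forensics.example.com/v1"

resp = requests.post(
    f"{API}/snap",
    json={
        "incident_id": "INC-1234",
        "snapshot_id": "snap-0abc1234def567890",
        "region": "ap-southeast-2",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())   # e.g. an automation / execution id to poll with the status API
```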

and the analysis starts. This uses Step Functions: first it creates a ticket internally, then it creates an EC2 instance, creates a volume from the snapshot, attaches it, and starts the forensic analysis. The analysis itself uses another AWS service called SSM. SSM is an agent that lives on your AMI, on your instance; I think from around November 2017 onwards pretty much all the AMIs, whether Ubuntu or Amazon Linux, have the SSM agent installed, it's just a matter of how you configure and enable it. So it's effectively a live response agent already baked into your AMIs. In it we have configured steps which run through log2timeline, psort and a couple of other things, and once that analysis is done the output goes back to an S3 bucket. The second API is status: as the steps progress, a user or an automation tool can call it and grab the status, which includes the region it is running in, the incident ID, the forensic instance ID that was spun up, and the automation execution ID.
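
A minimal sketch of driving that analysis step over SSM with boto3; the instance ID, commands and output bucket are assumptions, and the real service wraps this in a Step Functions automation rather than a single call.

```python
import boto3

ssm = boto3.client("ssm", region_name="ap-southeast-2")

resp = ssm.send_command(
    InstanceIds=["i-0abc1234def567890"],          # the forensic analysis instance
    DocumentName="AWS-RunShellScript",            # stock SSM document for shell commands
    Parameters={
        "commands": [
            # Volume created from the snapshot assumed attached as /dev/xvdf1.
            "mount -o ro,noexec /dev/xvdf1 /mnt/evidence",
            "log2timeline.py --storage-file /tmp/evidence.plaso /mnt/evidence",
            "psort.py -o l2tcsv -w /tmp/timeline.csv /tmp/evidence.plaso",
            "aws s3 cp /tmp/timeline.csv s3://forensic-triage-results/INC-1234/",
        ]
    },
)
print(resp["Command"]["CommandId"])   # poll with get_command_invocation for status
```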

This gives us the ability to track status. And the third API lets you get whatever has been uploaded into the S3 bucket: for the artifacts that have been analyzed you can request a pre-signed S3 link, which allows you to simply curl them down. So now we have Kookaburra added to the list as well, alongside the EC2 analysis and all the different live response tools.
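
Generating that kind of link is a standard S3 feature; a small sketch, with the bucket and key assumed:

```python
import boto3

url = boto3.client("s3").generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "forensic-triage-results",       # assumed bucket
        "Key": "INC-1234/timeline.csv",            # assumed artifact key
    },
    ExpiresIn=3600,   # link stays valid for one hour
)
print(url)   # hand this to the requester; they can fetch it with curl
```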

With all of these pieces working, we wanted one platform, the Amazon Forensic Platform: a single API which can do everything, plus a self-service portal for, say, legal teams or other teams that are not well versed in API calls or automation. We wanted a modular architecture, so that if tomorrow we want to add some other analysis we can, along with case management and end-to-end tracking. All of this is written as CloudFormation templates, so if we release this service at some point it will pretty much be a CloudFormation template you can drop into your AWS account and all the resources will be spun up for you.

We are trying to make this open source as well once we are further into development. When we started planning, this was the mock-up UI we wanted: a self-service portal where a user can specify the username, what sort of host they want to target, and what artifacts to grab. And this is the main architecture for the forensic platform: you have an API and a UI which call API Gateway, and it logs certain things into a database; down in the pink blob are the workflows, so you can have Step Functions workflows, for example one to do registry analysis, one to do MFT parsing, one to run log2timeline. We won't be limited to Lambdas; we will use ECS containers so that compute-heavy applications that Lambda can't handle run in containers, and again the output goes into S3 buckets. That's the contact information for our team: Carlos is located in Dublin, Hugo, Pedro and Mary are in Seattle, and I am in Sydney. [Applause]

Q: Thank you very much for your presentation showing us the evolution. Just to clarify, these are all your internal tools, right,

but are you releasing some of them? Because I saw they're marked Amazon confidential.

A: We are trying to release the tools as soon as we can. Being at Amazon, we have certain guidelines to follow; we have to go through legal, PR and a bunch of open-source-related reviews. As much as possible, we will try to release them.

Q: Hi, thank you for your talk. About four or five slides back there was an acronym, L-A-A-A-S; it just wasn't on the bingo card.

A: That's basically infrastructure as a service.

Q: Cool, thank you. Just on the capabilities: right now do you have the ability to extract memory from servers running on, say, EC2 instances?

A: Yes, we sort of have that. I can't give you more details, but yes, we have the capability to get memory from Linux machines in the AWS cloud. Mac is not in the cloud — you can't run macOS on the AWS cloud — but for Windows and Linux we have memory capabilities. What we've seen, though, is that the Windows footprint for our services in the AWS cloud is very, very small.

Q: Do you use your own or open source tools?

A: Open source, wrapped into our workflow.

Q: Okay, fair enough, thank you.

A: All right, thank you everyone. [Applause]