
Facilitating Fluffy Forensics

BSides Boston · 2016 · 48:42 · Published 2016-07
Category: Technical
Style: Talk
About this talk
Cloud environments present unique challenges for forensic investigations and incident response, from unclear responsibility boundaries across SaaS/PaaS/IaaS to the limitations of traditional tools. Andrew Hay explores the forensic and IR challenges of cloud investigations, reviews existing tools and techniques—including AMI snapshots, hypervisor introspection, NBD, F-Response, and GRR—and discusses how cloud architectures can actually accelerate forensic workflows when the right tools are deployed.
Original YouTube description
Cloud computing enables the rapid deployment of servers and applications, dynamic scalability of system resources, and helps businesses get products to market faster than ever before. Most organizations are aware of the benefits of adopting cloud architectures and many are becoming aware of the potential security risks. The majority of organizations, however, don’t realize the numerous challenges of conducting incident response (IR) activities and forensic investigations across public, private, and hybrid cloud environments. It’s not all doom and gloom, however. The consumption model of cloud architectures actually lends itself to helping investigators conduct forensic and IR exercises faster and more efficiently than on a single workstation. For this to happen, however, the tools and techniques employed must evolve. In this session, DataGravity CISO Andrew Hay will revisit the forensic and IR challenges of investigating servers and applications in cloud environments in addition to the opportunities that cloud presents to help expedite forensic investigations. Andrew Hay is the CISO at DataGravity where he advocates for the company’s total information security needs and is responsible for the development and delivery of the company’s comprehensive information security strategy. Prior to that, he was the Director of Research at OpenDNS (acquired by Cisco) and was the Director of Applied Security Research and Chief Evangelist at CloudPassage, Inc.
Transcript [en]

All right — hello, my name is Andrew Hay, I'm the CISO at DataGravity; we're in Nashua, New Hampshire. Anyone here from New Hampshire? Five people. So, this talk: this is the second iteration of a talk that I created, I think, about three years ago now, and the reason I had to do a 2.0 is because I don't think many people listened to my 1.0 and implemented some of the suggestions, or fixed some of the glaring things that I pointed out. So this talk is really to highlight what has changed, what hasn't changed, what can be better,

and hopefully people listen to me this time. Just to give you a little bit of background on me: I am the CISO at DataGravity — I've been doing the job since January fourth; I stalled there because I had to remember what month it was. Now, I've been on the road since last Friday, so this is my fifth conference — sorry, fifth talk at my fourth conference, in three states, over the last seven days, so I'm kind of burned out; remember that when we're buying drinks tonight. I used to be the Director of Research at OpenDNS — I built the research team there up from around four people to 12 when we got acquired by Cisco. I was

with Cisco for three months — infer what you want from that. I was also the Chief Evangelist and Director of Research at CloudPassage, and that's really where I started doing this talk, because I found myself doing a lot of cloud education, especially around security. There were so many people who would say, "oh, my CFO went to this conference and he heard about cloud — have you heard about this thing yet? We can save all of this money, all of these bags of money, because people are only paying pennies per hour." Pennies — of course, that was as much research as they had done. So I did a little bit more. I

was also an industry analyst at 451 Research, where I helped a lot of companies raise money — none of which I've ever seen — and get acquired — which I didn't either; I got a thank-you email every once in a while. I've also worked in higher education at a university in Western Canada — yes, I'm Canadian, but do remember how big and tall I am before casting aspersions — and I also worked at a bank in Bermuda, which sounds a lot cooler than it actually was. I was employee 34 at Q1 Labs, a Massachusetts company; I am not a millionaire like the people who stayed with Q1 Labs after the acquisition. I've written a bunch of books that, judging by my income

statements, none of you have read. I blog occasionally, and I spend an awful lot of time on planes. So that's just a little bit about me; you can learn more — especially the Bermuda-style stuff — tonight, well, after drinks. What I want to talk about today is a lot of the challenges around cloud — and all jokes aside, cloud security is pretty cloudy, or fuzzy at best. I want to talk about some of the existing tools, how they can help, what they could do better — if anyone's looking for start-up ideas, there's a whole bunch at the end of this session — and how you can use cloud to really do forensic investigation and — sorry — incident response

activities in cloud environments, and how it could be made so much better if we just had the right tools to do the job; that's not always the case. Some of the problems with cloud start with just calling it "cloud" — there are so many different definitions of cloud. There's private cloud, hybrid cloud; there's different nomenclature across the different cloud platforms: is it an instance, is it a guest? Are we talking about SaaS, PaaS, infrastructure as a service, security as a service — which is a stupid term, because it's still SaaS? Is it on premises or off site? Is it hosted somewhere? Is it multi-tenant, single-tenant? All of these things are cloud in some way, shape, or

form, so it becomes very complicated. I'm sure you've seen slides like this before: the delineation of responsibilities between the different types of cloud platforms. This becomes kind of a problem — not from a deployment perspective, but definitely from a forensics and incident response capability. If you look at SaaS, you're responsible for the data. It's very hard to do anything at the system level in a SaaS environment, because it's not your environment: you're provided with what is essentially a web app and a method to put information in, get information out, and present it to the end user. There's not a lot you can do if the guest or instance becomes compromised, or the hypervisor gets

compromised — you just don't have that visibility. You have a little bit more in a platform as a service environment, but really it comes down to configuration options and code. Most times you're not going to have kernel-level, root-level access; you're not going to be able to look down into the frameworks used, because you're provided with, say, Apache and Postgres — you're provided with the interface to them, but you're not really given much other than some of the configuration options. Now, if we look at infrastructure as a service, here's where we're starting to get a little bit more comfortable, or more familiar, with all of this access. So

you're responsible for the virtual machine or container all the way up to the presentation layer; the cloud provider takes the hypervisor all the way down to the plugs in the data center — that's their responsibility. But if you start looking at the terms of service for the various cloud providers, you will see very quickly that security is not their main business. This is from the AWS shared responsibility model. What I have highlighted is the part where it says the customer should assume responsibility and management of the guest operating system. You can enhance security if you so choose — which they recommend — with the addition of host-based firewalls, host-based intrusion detection and prevention, encryption,

key management, and so on — things that, back on prem... sorry, I will say "on prem," and when I say "on prem" I mean on-premises, not "on-premise," just in case anyone is, like, a grammar nazi, friend. These are things that we would have done on prem before migrating to cloud environments, right? Or at least we should have done. This is not something that the CFO was told when it's "pennies on the dollar" to move infrastructure into cloud environments. Now let's take a look at Microsoft. They tell you that data classification and accountability, and client and endpoint protection, are responsibilities that are solely in the domain of the customer. So you're probably understanding very quickly that

these infrastructure as a service providers — these cloud providers — are not in the business of providing you with a safe and secure infrastructure within which to operate. Just like VMware provides you with virtualization infrastructure, they're providing you with a method to run machines in a virtualized environment — which in some cases is off prem, in a public cloud multi-tenant environment, or on prem, where it's a private cloud infrastructure on top of a big ESX or vSphere deployment, whatever. Does anyone here have experience with running virtualization infrastructure? Yeah. What about conducting forensics and incident response activities in those environments? It's not fun. So the delineation is fairly straightforward, but again, when we're talking about cloud

forensics and incident response, it can be many different things. Some of these vendors span multiple types of delivery systems, whether it's infrastructure as a service, platform as a service, or software as a service. So if you look at Microsoft: Office 365 is the SaaS offering; for platform as a service, I always forget what it's called — see, it's not just me, that's poor marketing — and then they have Azure, which is the infrastructure as a service model. AWS, same thing. You can run all of these things very, very easily with a click of a few buttons. Doesn't necessarily mean you should, but it means you can. So there are all sorts of

devices that are interfacing and interacting with these various levels in this pyramid — mobile devices, physical media, attached storage (whether it's virtualized or a physical client shipped to a data center), laptops. But because we don't have eight hours to talk about this, we're just going to focus on infrastructure as a service, because it's really the most comparable to having a physical machine in your data center on which to perform forensic analysis and incident response activities. For the other ones, you're going to be calling the provider and asking for help. So, the five major challenges — the things that we're going to highlight: data residency, which is a big issue, especially if you perform a lot of international business,

especially with Europe; physical acquisition of these infrastructure as a service devices and instances/guests; how to isolate an instance properly; hypervisor introspection — I'll talk a little bit about that, and I should say that you're probably never going to get hypervisor introspection; it would be awesome, but you're not going to get it — and data integrity; and then cloud service provider collaboration and support. And then I'll talk a little bit about some future tech. So — has anyone here ever served in an expert witness capacity, or testified to the validity of data? I have — not for a great case; it was for a child exploitation case. Luckily the guy was insane — you probably got that from the earlier statement. But knowing where

the data is kept along the way adds validity and confidence to your analysis and your presentation of the findings — to a court, via a report, to someone — just your findings. So where is your data stored in the various cloud providers? That's really the big question; you can't just say "yes, it's in the cloud... somewhere... cloud." There's been an evolution in how the various cloud providers have stated whether — or where — they'll tell you your data is stored. Back in 2008, Amazon was saying you can specify where you want to store your data when you create your S3 bucket. Okay — well, that just tells me that at that

point, that's where I want it; I can specify where it should be. Fast forward to 2013: "within that region, your objects are redundantly stored on multiple devices across multiple facilities." Okay. If my job is incident response and forensics, that's not a good thing to see. If my job is operational IT, this is great — redundancy, resiliency, these are things that I want with my data; I want my customers or my constituents to be able to access this information when they need it. As an incident responder, I see this and — I actually did get goosebumps right there; that's kind of weird — now it's "somewhere." And this is where you're like,

"yeah, cloud! To the cloud! It's somewhere in this region, in the cloud." Where's the region? Well, based on the map, it's showing the northeast of the United States, so we'll focus our investigation on the Northeast, I guess. Most recently it's been changed to say "within that region" — and when they say region, they mean like AWS US East. Does anyone know what big provider runs stuff in us-east-1 — or, actually, does anyone know where US East is located? Virginia. Do you know what big provider of video content uses AWS? Netflix. You know, when Netflix goes down, there's a whole bunch of other vendors that seem to go down at the same time. I advise you not to run in us-east-1.
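On AWS, one small thing you can do is ask S3 where a bucket was created (`aws s3api get-bucket-location`). A toy sketch of handling that response — the function name is mine, not from the talk — including the long-standing quirk that us-east-1 buckets report a null `LocationConstraint`:

```python
import json

def bucket_region(get_bucket_location_response: str) -> str:
    """Map raw `aws s3api get-bucket-location` JSON to a region name.

    AWS quirk: buckets created in us-east-1 report a null
    LocationConstraint, so treat null/missing as us-east-1.
    """
    constraint = json.loads(get_bucket_location_response).get("LocationConstraint")
    return constraint or "us-east-1"

print(bucket_region('{"LocationConstraint": null}'))         # us-east-1
print(bucket_region('{"LocationConstraint": "eu-west-1"}'))  # eu-west-1
```

That tells you the region a bucket lives in — not, as the talk stresses, which facility within the region holds your objects.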

If people can't get to Amazon, the odds are they're probably not going to get to you either — because Netflix is a very resilient and fault-tolerant organization; it's usually a data center problem if they go down. Keep that in mind. So: "within that region, your objects are redundantly stored on multiple devices across multiple facilities — but please refer to the regional products and services for details on Amazon S3 service availability by region." Does anyone know why this last little bit was added? No? Did anything happen recently — like Safe Harbor? Yeah. You'll see a theme here in a moment. Azure, 2012 — I actually like this quite a bit: they will replicate between two sub-regions within the same

major region for enhanced durability. That sounds awesome — I want my data to be super durable... unless I'm doing incident response. 2013: hey, you know what, customers can choose to disable that replication feature — and it's not in the documentation anymore. Hmm. So in 2016: "each Microsoft cloud service has its own location policies for customer data." What about Google? Back in 2013: "at this time, selection of data center will make no guarantee that project data at rest is kept only in that region." I've got to hand it to them — at least they're honest. They're not saying "well, it might be moved around here and there"; they're saying: look, we're trying to

replicate this, we're trying to get stable, we're still relatively new — it could be anywhere, deal with it; we're not doing evil, we have a slogan. And so, 2014: hey, that data at rest — it's not just in the region anymore. 2016: specific regional information — again, Safe Harbor. So now, finding data within a particular region within a particular provider is difficult, and so is being able to point your finger and say "I know my data is there." With a physical server, you can walk into the data center and say "I'm ninety-nine percent sure it is on this Dell server right here, or on this storage array." You can't really do that with cloud with any great certainty.

And what happens if you're using multiple clouds? Again — data resiliency and data durability. Four or five years ago there was this idea that people would someday start moving their on-prem workloads to multiple clouds for resiliency. So you would have data shared between Rackspace, AWS, Google; a whole bunch of companies spun up and said "we will help you manage this migration and keep track of the data, and if one goes down we'll spin it up over here" — and DevOps! DevOps! That is horrible from an incident response perspective, because now you're relying on other tools to figure out where the data may have been moved, and

then, within that region, it could have been replicated somewhere else based on the type of instance that was used. It just snowballs out of control. So, catching a cloud instance is very hard; catching a specific cloud is extremely hard; finding your data on a specific cloud instance — I'll leave it there. Did anyone see Waldo? Wow, not bad — eagle eyes. I've used that slide for about six years and no one's ever found him — or they're not paying attention — so thanks on both fronts. So: physical acquisition of data in cloud infrastructures — of a physical image — is extremely difficult, unless you own the data center, or unless you are running a private cloud infrastructure. If your

name rhymes with Netflix, you may be able to work something out, but you're probably going to be stuck with logical snapshots. If you call up Amazon and say, "hey, you know what, I pay ten dollars a month, I demand support — I'm one of your biggest customers, I spend a hundred and twenty dollars a year" — they're going to say "okay, yeah, sure" — and not help you. If you're spending hundreds of millions of dollars a year, they may help you a little bit. So, the three ways that I know of for getting data from AWS — and some of this actually does intersect with some of the other cloud providers — are: creating a snapshot of the

volume, mounting it, and copying it; having AWS ship you the data — you'll see why that sounds better than it actually is; and using software tools to compress, encrypt, sign, and download — which sounds pretty awesome, but again, we're talking logical here. I'll go through this very quickly; you can reference the slides. This is just like performing acquisition on a live system, with the exception that you're not moving physical hard drives around, plugging cables in, getting write blockers, etc. Launch an AMI; stop your target instance; detach the volume; create a snapshot of that volume; attach the volume to the new AMI; create an EBS

volume of the same size; attach the volume; execute your standard file system commands; use dd to make an image. Seems pretty similar, right? Has anyone had to do this in the past, ever? These commands — this process — has kind of been around for the last 15, I'd say even 20, years; that might be pushing it a bit. But these steps actually came from Lance at AWS in the user forums: someone asked, "hey, how can I get this?" and he said, "well, here are all the steps that I've done — this isn't an official policy, but this is what I suggest that you try and do."
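To make the order of operations concrete, the walkthrough above can be strung together as an AWS CLI command plan. This is a sketch only: the instance/volume IDs, device names, and the helper function are hypothetical, the `<snap-id>`/`<vol-id>` placeholders would come from earlier steps' output, and nothing here actually talks to AWS:

```python
def ebs_acquisition_plan(target_instance: str, target_volume: str,
                         analysis_instance: str, az: str) -> list[str]:
    """Ordered AWS CLI commands loosely mirroring the forum walkthrough.

    Illustrative only -- nothing is executed; it just shows the sequence.
    """
    return [
        # freeze the target so the evidence volume stops changing
        f"aws ec2 stop-instances --instance-ids {target_instance}",
        # snapshot the evidence volume
        f"aws ec2 create-snapshot --volume-id {target_volume} --description 'IR copy'",
        # materialize the snapshot as a fresh volume of the same size
        f"aws ec2 create-volume --snapshot-id <snap-id> --availability-zone {az}",
        # attach the copy to the analysis instance, never back to the target
        f"aws ec2 attach-volume --volume-id <vol-id> "
        f"--instance-id {analysis_instance} --device /dev/sdf",
        # then, on the analysis instance, image the attached device
        "dd if=/dev/xvdf of=/evidence/target.dd bs=1M conv=noerror,sync",
    ]

for step in ebs_acquisition_plan("i-target", "vol-0evidence", "i-analyst", "us-east-1a"):
    print(step)
```

The point is that you image the snapshot-derived copy on your own instance; the target's original volume is never mounted read-write on the analysis side.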

Very helpful — good work, Amazon. S3: if you are storing your data in an S3 bucket, you can physically ship a USB drive to Amazon for an eighty-dollar per-device handling fee, plus $2.49 per data-loading hour — so I advise not using a USB 1.0 hard drive, because it could take a while. They will copy the information and physically ship the drive back to you. So at what point does chain of custody become a problem? In my apartment complex, you always have the shipping people who will say, "okay, well, it requires a signature, we won't deliver it unless it requires a

signature" — but then it just kind of ends up thrown on the floor, so no one's signing for it. It's really hard to do chain of custody when you're handing it to a shipper, unless they are physically delivering it to you by hand. So I guess the moral of the story is: don't ship evidence hard drives to your apartment complex — or my apartment complex, at least. And we'll actually get back to this slide in a minute. The AMI tools — this is where you can compress, encrypt, sign, and download. This is probably your best bet. It could take a very long time depending on the size of the data, but at least you have compression, encryption,

and signing of the data; you can pull it down locally and analyze the snapshot as you would any other system. So — has anyone heard of the Dykstra–Sherman experiment? Josiah Dykstra — and I can't remember what Sherman's first name is, or what the T stands for. As part of, I think, Josiah's PhD thesis, they went and tested all of the tools that are used for forensics and incident response in cloud environments, to see how they'd fare. If you take a look, you can see the standard tools — EnCase, FTK, FastDump, Memoryze, FTK Imager — so those are remote imaging things; agent injection; and an

AWS export. So this bottom one is the shipping of the physical drive, and as you can see, it took 120 hours. So if you are in the midst of an investigation — and it's a sensitive investigation — you have nothing but time on your hands, right? You can just sit there, twiddle your thumbs, and wait 120 hours for the data to get to you. It should also be noted that they didn't use a file, or collection of files, of substantial or significant size to conduct this experiment — we're not talking about terabytes or petabytes of data — so that 120 hours could go through the roof very easily, and when people want answers, 120 hours of waiting is

still way too long. So I encourage you to read the analysis — it's very, very interesting. Now, hypervisor introspection. [Audience question] Ah — yes, you have to do a live capture, so you could use Memoryze or Rekall... no, your data could all be there, but those are huge memory pools, and because it's a multi-tenant environment, they couldn't explicitly carve out just your data. So, hypervisor introspection: this has been around since the days of IDS, where you can passively watch all the traffic without anyone really knowing. In terms of a hypervisor this is cool, because you could look at files, you could run all of your tools, and the end user would not be aware. On the negative

side, you can access these files, run all these tools, and the end user would never be aware — it's very covert, it's very low level, and you can access pretty much anything. But on the plus side, it has to be enabled, and you're not going to get this enabled by default — I think even for ESX you have to dig really, really deep to find out how to turn on hypervisor introspection. If this were implemented by the major cloud providers — which, from an incident response perspective, would be amazing — you would probably have a ridiculous series of legal battles and class-action lawsuits coming as a result. One of the challenges of introspection is proving data integrity. So if you're

an end user and you find out that hypervisor introspection is enabled, that means someone could have altered your data, and you can just claim, "well, I've been told that introspection was enabled — you can't really prove it was me; it could have been the person who has direct access to the introspection engine, or visibility into it; they could have modified things; they put all that illicit content on my share, it wasn't me." So it's very hard to prove integrity in that respect — but you're probably not going to run into that, because introspection is hard to get. So, in terms of forensic image capture: with a physical machine that you're investigating, we have always been told that the first thing you do not do is pull the plug — or,

rather, pull the ethernet cable — and the second thing is you don't power it off. In cloud, we have to throw that completely out the window. Because if you power something off, you lose all the volatile memory, execution points, and anything that's been touched on the system that is deemed volatile; but if you do not isolate the instance, you can't guarantee that it isn't going to continue to be monkeyed with as you're performing your investigation. So what I usually recommend in this case is moving it to a secure — in AWS they're called, ah... not server groups... someone said it for me: yes, security groups, thank you. So if you move it to a

security group that only allows your analyst station access, then you can testify, if called, that you were the only one who had the ability to access that machine at that time. You have to be careful, though: if you're moving these into — or behind — security groups that other systems are running in, it's very hard to prove that there wasn't some sort of contamination. So if you isolate it, you can say: at the time that I conducted my investigation, the instance was located in this region; the data was stored in this region, and possibly sub-regions, depending on the wording of the documentation; and you can conclude that, for incoming and outgoing communications, you should be the only

one initiating new communications into the machine, because it's firewalled off — and outgoing, you're blocking, so if it's part of a command-and-control session, or data is trying to be exfiltrated, you've essentially blocked that. So by isolating it, you make it easier to collect without contaminating it; you separate it from your production workloads, so you're not going to taint it. It's great — this is an ideal situation, and it's not hard. I actually wrote a tool that will do this in AWS, but nobody uses it because it's written in Ruby — most forensics people are hardcore Python zealots, and they make fun of me when I write Ruby — so I'm slowly transitioning the tool from Ruby over to Python.
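As a rough sketch of what such a quarantine tool does under the hood — group names, IDs, and the helper are made up for illustration, the AWS CLI strings are built but never run, and this is not the speaker's actual Ruby tool — the isolation boils down to a handful of calls:

```python
def isolation_commands(instance_id: str, vpc_id: str, analyst_cidr: str) -> list[str]:
    """Ordered AWS CLI commands to quarantine an instance behind a
    security group only the analyst workstation can reach.

    Illustrative only: <sg-id> would come from the create call's output.
    Stripping the default allow-all egress rule blocks C2/exfil traffic.
    """
    return [
        # dedicated group so no other workloads share the rules
        f"aws ec2 create-security-group --group-name ir-quarantine "
        f"--description 'IR isolation' --vpc-id {vpc_id}",
        # allow SSH in from the analyst workstation only
        f"aws ec2 authorize-security-group-ingress --group-id <sg-id> "
        f"--protocol tcp --port 22 --cidr {analyst_cidr}",
        # remove the default allow-all egress rule so nothing can leave
        "aws ec2 revoke-security-group-egress --group-id <sg-id> "
        "--protocol all --cidr 0.0.0.0/0",
        # swap the target instance onto the quarantine group
        f"aws ec2 modify-instance-attribute --instance-id {instance_id} "
        f"--groups <sg-id>",
    ]

for cmd in isolation_commands("i-0badc0de", "vpc-1234", "203.0.113.7/32"):
    print(cmd)
```

Using a fresh, single-purpose group (rather than reusing an existing one) is what makes the "only I could reach it" testimony defensible.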

I'm also creating a UI, so it's kind of point-and-click: move things around, isolate instances. It's kind of like a poor man's CloudPassage — there's a bunch of other tools that do it now, too — but stay tuned for that. So, getting support from your cloud provider: it's not always that easy — again, unless your company name rhymes with Netflix. A lot of these cloud providers have very, very smart people who are willing to help, but if you are looking to undertake a new cloud project, or trying to figure out where you should throw your money from a cloud infrastructure perspective, you need to find out from them what level of effort they are

going to exert, how far you are on your own after they help you, and at what point they're just going to throw their hands up and say, "look, we can't help you anymore, this is beyond our knowledge." The last thing you want is to be told to email into support and get some first-level person who, I'm sure, is very good at helping with certain things — but forensic image capture and incident response is likely not one of their specialties. So, if you have the clout, you can ask for samples of past investigations — reports, which could be obfuscated. It's kind of like when you're doing a pen test, or you're shopping around for a pen test:

you don't want to take the firm at face value; you want to see what they've done in the past, to see if they're any good — you want to see if they're employing methodologies, if they have a documented procedure, or if it's just "hey, we're going to see what happens here." You laugh, but — a side note — I was on a puddle-jumper plane once, a twin prop, and we were flying around the airport for about an hour, and the captain came on and said, "well, we've been trying to figure out how to fix this, but the flaps aren't working; we don't think we need them to land, so we're going to give it a shot." Pretty sure that wasn't a

documented procedure — that was kind of "yeah, let's see, let's see what happens." That's what you want to hear from your pilot: "we're gonna give it a shot — what's the worst that could happen?" Yeah. Asking for the credentials of staff — I don't think that's too much to ask for. And if — and this is a huge if; this is likely never going to happen — you can have an interview with some of the cloud service provider team members, that would be great, because then you can get confidence as to whether they actually know the methodologies that they should be following, and whether they have been involved in any past investigations that they can talk about,

so you can get a feel for them — it's an interview. So, it's not all doom and gloom: there are a lot of existing tools that can help with cloud incident response and forensic activities, and they're evolving more and more. But one of the big challenges isn't just technical — it's wanting, and understanding the need, to conduct investigations in cloud environments outside of our comfort zone. We have been told that we need physical systems to perform forensic activities on, or a properly sequestered image, whether it be memory or drive, and that only then can we perform our activities. You're probably not going to get that — you're not going to get physical drives in cloud

environments; it's just not possible. You may have to get very comfortable, going forward, with storing your investigative images in cloud environments. There are a lot of very good ways to sign something, encrypt it, compress it, and then send it up to long-term storage in Amazon Glacier or Azure blob storage, so that you can certify it has been untouched, because you put it up there following this methodology. Processing things off site: we are going to see more and more forensics and IR vendors providing SaaS portals to perform the tasks that they only ever did with packaged software before, and launching off-site analysis consoles.
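That compress-then-certify discipline can be sketched with nothing but the standard library. This is a toy of the idea — a SHA-256 digest stands in where a real workflow would also add a cryptographic signature (e.g. GPG) and encryption, and the function names are mine, not a named tool:

```python
import gzip
import hashlib
import pathlib
import tempfile

def seal_image(image_path: pathlib.Path) -> tuple[pathlib.Path, str]:
    """Compress a disk image and record the SHA-256 of the original bytes.

    The digest travels with the archive so anyone can later certify
    the evidence was untouched while in long-term storage.
    """
    data = image_path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    archive = image_path.parent / (image_path.name + ".gz")
    archive.write_bytes(gzip.compress(data))
    return archive, digest

def verify_image(archive: pathlib.Path, expected_digest: str) -> bool:
    """Decompress and confirm the evidence still matches its recorded hash."""
    data = gzip.decompress(archive.read_bytes())
    return hashlib.sha256(data).hexdigest() == expected_digest

# toy round-trip on a fake "disk image"
with tempfile.TemporaryDirectory() as tmp:
    img = pathlib.Path(tmp) / "target.dd"
    img.write_bytes(b"\x00" * 4096 + b"evidence")
    archive, digest = seal_image(img)
    print(verify_image(archive, digest))  # True
```

Record the digest at acquisition time and store it separately from the archive; re-running the verify step after retrieval is what backs up the "it has been untouched" claim.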

As much as the big vendors would hate this — because they want to sell per-seat licensing, you know, in perpetuity — eventually they're going to be forced to offer something like elastic time ranges in which to conduct activities, because let's be honest: these tools are expensive most times, and they become shelfware after a while if you don't have a lot of incidents, or if you don't have the expertise to conduct the investigations. So I'll give you some free stuff. NBD server — has anyone used this before? It is a fantastic way to remotely interface with a Windows device and mount a partition as a block device; you can also use Volatility and Rekall to image the memory. It's very, very easy: you run the server

on the host that you want to investigate; on your client — and there are various clients for Linux and Windows — you just connect to it, and then you can run commands like fls to start generating a timeline. Simple. There's active development on these projects — I actually updated this about 10 minutes ago... well, sorry, 50 minutes ago. The top one is the one that I was referencing over here: 16 days ago was the last commit. This one here — if you are using OpenStack, hey, there's an OpenStack object NBD version for Swift, or whatever they're calling it this week. Then there's this one here, pure Python; the latest commit was 29 days ago — though I will warn you that the latest commit was actually just fixing the case of a letter.
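Once fls is running against the mounted block device, turning its output into a timeline is mostly parsing: fls can emit the Sleuth Kit's pipe-delimited body-file format, which sorts trivially by timestamp. A small illustration — the sample lines are invented, and the field order follows the TSK 3.x body format:

```python
from dataclasses import dataclass

@dataclass
class BodyEntry:
    name: str
    size: int
    mtime: int  # unix timestamp

def parse_bodyfile(text: str) -> list[BodyEntry]:
    """Parse TSK 3.x body-file lines, sorted oldest-first by mtime.

    Field order: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    """
    entries = []
    for line in text.strip().splitlines():
        fields = line.split("|")
        if len(fields) != 11:
            continue  # skip malformed lines
        entries.append(BodyEntry(name=fields[1], size=int(fields[6]),
                                 mtime=int(fields[8])))
    return sorted(entries, key=lambda e: e.mtime)

sample = (
    "0|/etc/passwd|1234|r/rrw-r--r--|0|0|2048|1460000000|1459990000|1459990000|0\n"
    "0|/tmp/dropper.sh|5678|r/rrwxr-xr-x|0|0|512|1460100000|1460099999|1460099999|0\n"
)
for e in parse_bodyfile(sample):
    print(e.mtime, e.name)
```

In practice you would feed this the real body file (e.g. from `fls -r -m /`) or just hand it to mactime; the point is that the remote NBD mount gives you the same artifacts a local disk would.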

But hey, at least someone cares enough to keep it going. F-Response — anyone use F-Response? In an older version, 4.04, we see the first inclusion of any sort of cloud capability: Amazon S3 buckets, HP and Rackspace cloud containers, Azure blob storage — letting you mount remote systems or remote file objects. In the latest version, as of March, you can do Amazon S3, blob storage again, Box, Dropbox, Gmail, Google Apps, HP Helion, Rackspace Cloud, Office 365. So at least these guys are making progress. There's not a lot of money to entice them to do this right now — cloud isn't as lucrative

as people would tell you, because there are so many data centers still around — so they're kind of betting on the long game, where this is going to be relevant. Granted, it's relevant now, but it's not as relevant as data center based investigations or workstation investigations. And you can even spin up an F-Response instance in the cloud to do this collection, so you don't have to do it on your own system, which is pretty cool. Does anyone here know Chad Tilbury? He does a lot of SANS stuff. He was a big fan of F-Response, and he kept talking about how it's a genius idea — but it's really just using the iSCSI protocol and

mounting things, so why can't we all do this? So I had this idea. I stood in front of the room and said: you know what, I'll give you a link to the open-iscsi code base, I'll give you the link to the open-iscsi.org site, go out and build me a tool so I don't have to pay for F-Response. And then it was just crickets; nobody really cared to help. Screw those guys. But on Windows Server 2008 R2 and later you can now create an iSCSI target, which is pretty cool, because now it works with any client. And, this is great, this walkthrough gives you

all the PowerShell scripts to set it up on a remote host, so there's really no manual clicking around to do this either. So what you can do is download one of the various iSCSI initiator clients, connect to that remote Windows system, and do pretty much everything F-Response did, but with free and open-source tools. Yay. I had no part in any of this, by the way; this is just the result of me waiting long enough for something cool to happen. Has anyone used GRR? Yeah. It's great now; in 2011, not so great. It was by developers, for developers. I was told that it was deployed within Google at enormous scale.
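(A rough sketch of the initiator side on Linux, not from the talk: the open-iscsi client needs two commands to reach that Windows target, composed here as argv lists. The portal IP and IQN are hypothetical placeholders.)

```python
# Sketch: open-iscsi initiator commands for mounting a remote Windows
# iSCSI target, the free alternative to F-Response described above.

def iscsi_discover_cmd(portal):
    """Discover targets exposed by the portal (sendtargets discovery)."""
    return ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal]

def iscsi_login_cmd(iqn, portal):
    """Log in to a discovered target; the LUN then shows up as /dev/sdX,
    ready for read-only imaging or mounting."""
    return ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"]
```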

Hundreds of thousands of machines, I think I was told, or some similarly artificial-sounding number. It lets you run commands for image capture and grab information across all of those systems by kicking off an activity. The latest version is a lot more friendly for the non-developer. It's still very Python-based, and it has plugins that let users run Rekall for memory analysis. There's nothing saying you can't run these GRR endpoints on your cloud infrastructure; it might just take a little bit longer to pull the information down, but that isn't too bad. So I encourage you to take a look. By the way, this is developed by the guys at Google, but the G does not stand

for Google, for legal reasons. A couple of other tools people have pointed out to me: this one used to be called Wirespeed and is now called Evimetry. You can tell by the way "analyse" is spelled that this is not an American company. There's also Turbinia, made by, well, Cory and Johan. They presented on it. I asked Cory for slides; he said, we don't have any slides. Do you have any code? No, I mean, no code that's public. Okay, so I'll just mention it and you can watch that space. BriMor Labs has a lot of really cool live forensic capture and incident response tools; you should definitely check those out. So, I did say there are

challenges, and I gave you some tools; now this is the future tech. If anyone has played Civilization, you know what that means. If anyone has wasted years of their life playing Civilization, you really know what that means. So, the advantages. Here's where the startup ideas come in. You can do automated instance isolation; you can do that with a lot of DevOps tools now, there's just a little back-end work you have to do to actually make it happen. So if you're going to build a startup, don't do that one. On-demand forensic workbenches: those are starting to evolve, where you click a button and say, I need an incident response dashboard to do something, and

then I want it to go away, and I want to pay on an hourly basis, not a yearly basis. We're getting there; don't do that startup either. Automated timeline generation: this is great. I'm a big fan of timelines when it comes to forensics and incident response, because you can narrow the scope of your investigation to a certain time period, based on when an incident first happened and when it stopped happening, and then just focus on that little window. Take some of the tools like GRR, plus Plaso, which used to be log2timeline, and build some wrapper around them.
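(A minimal sketch of that narrowing idea, not from the talk: once you have timeline events, scoping is just a filter over the incident window. The event fields and timestamps are illustrative.)

```python
# Sketch: keep only timeline events that fall inside the incident window.
from datetime import datetime

def narrow(events, start, end):
    """Return timeline events whose timestamp lies within [start, end]."""
    return [e for e in events if start <= e["ts"] <= end]

events = [
    {"ts": datetime(2016, 5, 1, 9, 0), "msg": "service installed"},
    {"ts": datetime(2016, 5, 3, 2, 0), "msg": "outbound beacon"},
    {"ts": datetime(2016, 5, 9, 8, 0), "msg": "routine patch"},
]
# Scope the investigation to May 1-4; the routine patch drops out.
window = narrow(events, datetime(2016, 5, 1), datetime(2016, 5, 4))
```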

If you could deploy that wrapper to all of your running instances, that would be cool; I'd use that. Dynamic analysis workers: one of the biggest forensic analysis challenges was described to me by a former NYPD detective. He said he would have a stack of hard drives on his desk this high; he'd get through three of them, go home, come back, and there'd be six more. It was just a never-ending battle, and he couldn't retire. He'd set it in his head: I'm not retiring until I get through these drives, but they've got to stop giving me new drives, otherwise I can never retire. He's

since retired, but a lot of this analysis happens in serial; it's not a parallel activity. So wouldn't it be cool if you could take multiple drive images, push them up to an engine that starts analyzing them in parallel and tells you when each is done, so you can farm that work out, maybe to junior people, or spend time where you need to based on how important a particular investigation is? Same thing with distributed file carving. A lot of times you'll look at a forensic image and say: give me all the Microsoft documents. Okay, got those. Now give me all the image files. Now give me this, and this, and this. It adds up.
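(A rough sketch of the parallel-engine idea, not from the talk: fan the drive images out to a pool of workers instead of grinding through them in serial. `analyze()` is a hypothetical stand-in for the real per-image processing.)

```python
# Sketch: analyze multiple drive images concurrently and report as they
# finish, rather than one after another on a single workstation.
from concurrent.futures import ThreadPoolExecutor

def analyze(image_path):
    """Placeholder for real per-image work (hashing, carving, parsing)."""
    return (image_path, "done")

images = ["case01.dd", "case02.dd", "case03.dd"]  # hypothetical names
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(analyze, images))
```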

It becomes very computationally intensive to get that information, because you're just trying to extract bits and bytes, turn them into files, and dump them somewhere. So why not create an orchestration layer that pushes everything up to the cloud and says: okay, this cloud instance does Word docs, this one does Excel, this one does pictures, and so on? It's going to cost you a lot less than a whole bunch of physical systems. And multi-cloud analysis kind of goes along with the on-demand workbenches and the workers: if you want to push things to multiple clouds for resiliency, more power to you.
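(A minimal sketch of that orchestration layer, not from the talk: route carved artifacts to a queue per file type so a separate cloud worker can handle each. The extension-to-worker mapping is illustrative, not a real service.)

```python
# Sketch: group carved artifact paths by the worker responsible for
# their file type, the "this instance does Word docs" dispatch above.
import os
from collections import defaultdict

ROUTES = {".docx": "word-worker", ".xlsx": "excel-worker", ".jpg": "image-worker"}

def dispatch(paths):
    """Map each artifact to a per-type work queue; unknowns go to misc."""
    queues = defaultdict(list)
    for p in paths:
        ext = os.path.splitext(p)[1]
        queues[ROUTES.get(ext, "misc-worker")].append(p)
    return dict(queues)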

I have a couple of slides left; if the people in them look like they're having fun, they're really not. So, this is the basis for a lot of forensics and incident response training and knowledge. It's a very good document, a little bit dated, but there's only so much you can do. The Cloud Computing Forensic Science Working Group: I was on that for a while, but as with most working groups, they move at the speed of NIST, so the stuff we worked on when it first started will probably be published in the next two or three years, by which point I think it'll already be irrelevant. That's how working groups work. Here are some links. This is a fantastic resource; it's not updated as often as it should be, but there are a lot of papers

like scientific research papers, at that link, where you can learn more about the challenges and opportunities in cloud security. Also watch this space, because it's where I'm going to be posting the tool suite for isolating your incidents at some point. The bottom line is that it has to be done before DFRWS, because I submitted a talk there, and if it gets accepted I'm screwed if the code's not complete, so I've got a forcing function. So, just to summarize cloud forensics and IR: you have to have an open mind when you're talking about cloud. You can't think in terms of brick-and-mortar data centers where you walk up to the physical server and have the

device in your hand; that's just not the case anymore. Use the advantages offered by cloud to conduct investigations, and the tools, though they're evolving slowly, need to continue to evolve. And again, there's not a lot of money in cloud from a forensics point of view. Does anyone know the proportion of revenue for, say, a Guidance Software or an AccessData, how much of it goes to e-discovery versus forensics? It's about a 90/10 split. E-discovery pays the bills; the ten percent that's forensics is the cool stuff I care about, and they don't make a lot of money on it. So, with that: if you want to, email me or follow me on Twitter. I tweet a lot of

inane stuff. I'm going to be around for the rest of the day. All right, thank you very much.