
all right welcome back for this next talk we've got Alyssa Miller and that is the coolest most awesome stylized [Music] beginning to a slide deck that I've seen that is awesome I love how it looks embossed like it looks raised like it was stamped out like a license plate that is super cool can I say stupid PowerPoint tricks you know [Laughter] yes so I think this is a really interesting one I'm glad you submitted something like this you know I think we've seen a lot of fraudulent stuff over the years but very few that like viscerally physically give you chills when you see
them in action and deep fakes are are super creepy I think most people will agree so yeah if you're well we're one minute away I think we should wait until 3 o'clock just uh just to be fair okay you know we've been bang on starting right on time throughout this so I don't I don't want to start one early and somebody that might miss something but yeah thanks for coming to talk at I believe you you're not oh man look at the guitar collection in the background alright we if we were super early we'd have all kinds of stuff to talk about I've I've actually been playing guitar and bass in between presenting people here well it
is three o'clock and if you're all ready to go I will turn it over I am all set take it away Alyssa
all right well welcome everybody my name is Alyssa Miller and you have joined me and the rest of the folks here for losing our reality deep fakes changing the face of attacks so if we've not met before let me tell you a little bit about myself first and foremost I am a hacker and a researcher I have been a hacker all my life I bought my first computer when I was 12 taught myself how to program in BASIC quickly learned asynchronous modem communications and started playing around with a service that unless you're old like me you might not have even heard of, Prodigy, so yeah I've been doing this pretty much my whole
life I'm also an application security advocate for a company called Snyk if you haven't heard of us I can talk to you about that some other time you'll have all sorts of contact information for me later I'm also an author and a blogger so I do have my website I'll give you the link to that where I keep my blog I'm currently working on a book that will be released in preview form fairly shortly a guide on helping people start their career in information security so it kind of goes along with our last talk a little bit and then finally speaking of our last talk you've met one of my co-hosts from The Uncommon Journey and Chloé Messdaghi is
another so we do host that podcast on ITSP Magazine so today I want to take you guys on a journey and it's a journey of sight and sound and I want to begin by asking you to take just a moment and look around you you might be sitting at a desk watching this you might be on a couch just notice the things that you see around you take a deep breath in what do you smell what scents are floating in the air right now what do you hear what sounds are around when we perceive the world around us we use our five senses everything that we know everything we've based science off of everything that we understand about this
world is based on our five senses but what I'm here to talk to you about today is the fact that attackers now have the ability to attack those very senses that we use to perceive the world around us and this is the new attack paradigm of deep fakes so when I talk about deep fakes let's discuss really briefly what it is that I'm actually referring to when I talk about deep fakes I'm talking about any form of artificial media that's created using deep learning neural networks now it might come in the form of still images as you see in the upper right it could come in the form of audio or as most of us who have ever heard of deep fakes before have probably seen it it comes as videos like this one of Steve Buscemi at the Golden Globe Awards only that's not Steve Buscemi wearing a dress I'm sorry so you may have seen some of this media before ultra-realistic very difficult to tell that it's fake other than clearly that's not Steve Buscemi's body but these are all created using deep learning neural networks now deep fakes began as a research project back in 2017 a paper was released detailing how researchers were able to create this type of media using these deep learning neural networks and you know where most promising technology seems to take its foothold first the porn industry jumped all over this they realized that they
could use this technology to take the faces of popular celebrities and place them on the bodies of the actresses in their porn videos so you see images here of women like Natalie Portman or gal gadot or Emma Watson and Natalie Dormer who are just some of the many celebrities who have been subject to this in fact there are websites now where you can go and for a fee you can actually request videos with your favorite celebrities in them but of course that wasn't where things stopped and it wasn't too long after that point that people started to realize we could leverage this in politics and we could start to manipulate the political discourse in countries around the globe
by making phony videos sure some of them are satires some are meant to be a little more damaging you see a few of them here another example you may be aware of from 2018 was a video that surfaced of Barack Obama giving an address from his office and saying some less than complimentary things about then-President Trump the problem was it wasn't Barack Obama saying those things it was actually comedian Jordan Peele who had created this video and had deep faked Barack Obama and so these are some of the things that have been talked about in the media and indeed amongst politicians about the threat of deep fakes but deep fakes
go further and the threats that they present are far more wide-ranging one of the areas that we don't hear as much about is where deep fakes can be leveraged for social engineering a terrific example of this comes to us from the UK where the CEO of a large energy corporation was duped into transferring 200 million euros to attackers now how did they do it well they created deep fake audio of the president of the parent company instructing the CEO to make this wire transfer and the CEO fell for it in fact the attackers came back a second time and tried to do it again thankfully the CEO had learned his lesson and the second time it came around at least he picked up
on it pretty quick and realized OK this is not the president that I'm talking to and this is abnormal behavior for him to be requesting but the fact of the matter is the attackers were successful and they made off with over 200 million euros we talked about pornography well there's also this idea of revenge porn and if you're not familiar with this term typically where this term is leveraged is when we talk about people who have shared nude photos selfies whatever with a partner or they allowed their partner to take such photos or video of themselves and then after the relationship ends for whatever reason that partner decides to get revenge on them and releases those videos or images to the web well now we have the case where that video or those still images don't even have to exist where potentially a disgruntled partner or ex-partner could create this type of media and release it to the Internet and while it's not real it could still be just as embarrassing just as damning or even worse because it could present scenarios that never actually occurred but what about the business world I mentioned that social engineering approach before what about some other threats against business what about using deep fakes to extort high-profile business leaders now I show the example here OK Jeff Bezos has not denied the real nature of any of the images and videos that were used to
extort him but do those videos do those images does that media have to be real in order to pose a potential threat to a high-ranking CEO it only has to be believed long enough for it to have a negative impact on that organization or on that CEO or long enough for it to have a damaging and lasting impact on their marriage this is a real threat this is something that we face today with high-profile business leaders around the globe you may have already seen videos of Elon Musk's face being put on the body of a baby now yeah that's pretty innocuous but it wouldn't take much additional effort to create more damning video of
him in other compromising situations speaking of Elon Musk what about this concept I like to introduce called outsider trading so if you're familiar with insider trading it's when people with insider knowledge of the inner workings of an organization use that sensitive information to profit and to bolster their stock portfolios they may know that there's bad news on the horizon and so they sell off their stock in advance or they know that an upcoming development is going to cause a significant stock increase and they might buy up that stock knowing that it's about to increase in price well consider this scenario since I have my friend Elon up here at the moment consider Tesla is getting ready to launch
their next model call it the Model Q and the night before Elon is all set to go on stage and have his big launch and he's gonna you know smash windows with sledgehammers or whatever he chooses to do this time video surfaces of him in a meeting with stockholders sharing information about problems that exist with the line or you know something that's going to cause a delay in the launch or cause problems delays or backlogs in delivery of these vehicles well what's going to happen to that stock naturally it's going to plummet if that's a deep fake video released by an attacker they could make use of this situation they could plan this situation
short that stock or buy it up while it's depressed and then when it recovers after news of the deep fake surfaces and the launch goes as expected they sell off those shares this is a real threat that exists today that could easily be exploited by attackers so if I can manipulate a large organization like Tesla and manipulate their stock price to my own benefit what about entire markets could I manipulate the entire automotive market with just a few well-placed deep fake videos and if I could manipulate a market what about the entire economy of a small nation so as I said before everything we know about the world around us comes from what we're able to perceive with our
five senses but how do we define reality when we can't trust those five senses this is the paradigm of attack that deep fakes have opened for attackers now how do we create these deep fakes how are attackers creating such convincing video well I told you it all begins with neural networks deep learning neural networks that we call GANs GAN stands for generative adversarial network and if you break down the name it actually describes exactly what they do they're generative they're going to create something they're adversarial in the sense that there are actually two neural networks that operate within a GAN there is what we call the generator which is responsible for creating content and then there's the discriminator which is
responsible for assessing that content as to how real it appears now we train both sides of the GAN with a training set and this training set typically is a set of still images a large number oftentimes but not always and we're seeing an increasingly smaller number of still images needed in order to properly train these GANs we then feed the generator the target media so the target media is the video we create that has an actor that we want to replace in that video with our subject and the generator's going to take that video and frame by frame it's going to attempt to replace that actor's face with that of our subject the discriminator then takes a look at those images and
says yeah either these are a very good representation or no I can detect that that's fake when the discriminator detects that they're fake it sends that back and it updates the generator and we're gonna see more of how that works in just a moment the first step in creating a deep fake however is that training that I mentioned and in order to do the training we first need that set of training images and to get those training images I have to normalize them through a process we call extraction and the extraction is literally taking each image and they could be still images like this pulled from a video they could be still images
found on the web and using facial detection algorithms it detects and maps out the face so you see 68 facial landmark points being mapped out here those landmarks are studied by the GAN and you also see the green box which is the alignment that is this particular algorithm aligning the face determining what direction the face is facing if you will how it's oriented to the screen but it's not enough just to identify it we then have to normalize it in terms of size and resolution so what we see is that these algorithms will then take that image take a fixed square around that face region that it's recognized and reduce that to a specific resolution or in some
cases if it has to increase it in this case you see 256 by 256 pixels and that is the standard training set image that we're going to work from so the entire training is based off of that standardized image size and shape and we'll see a little bit later why that becomes important but so now when we go into training what we're going to do is we are ultimately feeding those images to a two part system within the generator the first part is the encoder and what the encoder attempts to do is take those images and represent them in what we call a model and that model is nothing more than a numeric way to
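To make that extraction step concrete, here is a toy version of the crop-and-normalize operation in Python. It uses plain NumPy with nearest-neighbour resizing, and a made-up bounding box standing in for what a real face detector would return (real tools like Faceswap use proper detection, landmark, and alignment libraries):

```python
import numpy as np

def extract_face(frame: np.ndarray, box: tuple, size: int = 256) -> np.ndarray:
    """Crop a square face region from a frame and resize it to the
    standard training resolution (nearest-neighbour, for illustration).
    Real pipelines warp the crop so the 68 landmarks line up too."""
    top, left, side = box                       # square region a detector found
    face = frame[top:top + side, left:left + side]
    # map each output pixel back to a source pixel (nearest neighbour)
    idx = np.arange(size) * side // size
    return face[idx][:, idx]

# toy "frame": 480x640 RGB noise, with a pretend face at (100, 200), 128px square
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
aligned = extract_face(frame, (100, 200, 128))
print(aligned.shape)   # (256, 256, 3)
```

Every crop comes out the same shape and resolution, which is exactly the standardization the talk describes.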
represent all the aspects of that face that we need to capture in order to recreate a face so the encoder writes that to our model it then passes it to a decoder and what the decoder is going to attempt to do is recreate that face from no information other than what's encompassed in that model so it creates this new image that's an attempt to recreate the original image and then it passes it to the discriminator the discriminator looks at that and says how good or bad a job it's done in recreating that face and it expresses that value in terms of what we call loss so loss is really just how
well did the decoder do in creating that image it then updates the model that the encoder is using it updates what we call the weights and I'm not going to get into all the inner workings of how neural network models work but just know that the weights are how the model adjusts itself and continues to get better and better so the discriminator updates those weights constantly making the encoder better and better and so this process runs in what we call the training loop and in the training now this is a process that can take 24 to 48 hours even up to a week and in that process we
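That training loop, plus the decoder swap that the conversion step described a bit later relies on, can be compressed into a toy sketch. This is a deliberately simplified linear stand-in written with NumPy just to show the structure (one shared encoder, one decoder per face, loss-driven weight updates); it is nothing like Faceswap's real multi-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "faces": flattened 64-dim vectors standing in for aligned face crops
faces_a = rng.normal(size=(100, 64))           # training set for face A (target)
faces_b = rng.normal(size=(100, 64)) + 0.5     # training set for face B (subject)

# one shared encoder, two decoders, each a single linear layer here
W_enc = rng.normal(scale=0.1, size=(64, 16))    # face -> generalized "model"
W_dec_a = rng.normal(scale=0.1, size=(16, 64))  # model -> reconstruction of A
W_dec_b = rng.normal(scale=0.1, size=(16, 64))  # model -> reconstruction of B

def loss(faces, W_dec):
    """Mean squared error: how badly this decoder recreated the faces."""
    return (((faces @ W_enc) @ W_dec - faces) ** 2).mean()

lr = 0.01
loss_before = loss(faces_a, W_dec_a) + loss(faces_b, W_dec_b)
for step in range(300):
    for faces, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        code = faces @ W_enc              # encoder compresses into the model
        err = code @ W_dec - faces        # decoder's reconstruction error
        # update the weights so the next reconstruction is a little better
        W_dec -= lr * code.T @ err / len(faces)
        W_enc -= lr * faces.T @ (err @ W_dec.T) / len(faces)
loss_after = loss(faces_a, W_dec_a) + loss(faces_b, W_dec_b)

# conversion: encode face A, but decode with B's decoder, giving the deep fake
fake_b = (faces_a @ W_enc) @ W_dec_b
print(loss_after < loss_before)           # True: training reduced the loss
```

The key design point mirrors the talk: both face sets pass through the same encoder, so the model it builds is generalized across the two faces, and swapping in the other decoder at conversion time is what produces the fake.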
were actually training two sets of images the first is our target the face that we want to replace in the video the second is the subject the person whose face we want to insert into the video so here you're seeing images from a project that I worked on where I was simply replacing myself with Alyssa Milano now I take those two sets of training images and I actually pass them to the same encoder so now that encoder is trying to build a model that not only mathematically represents my face but also represents Alyssa Milano's face so it's more generalized and it's able to represent the aspects of either of our faces then that model it creates
is used by two decoders decoder A is going to try to recreate my face decoder B is going to try to recreate Alyssa Milano's face and in that process we're constantly improving that model and generalizing it to recognize both our faces and to be able to recreate them reliably when that training is done we now move into conversion this is where we actually create the video frame by frame I'm going to take that video or those images of me and I'm going to pass them to the encoder so the encoder is now going to take those images and represent them in the model but now we're going to pass them to decoder B so decoder B just knows it
has this generalized model and it's got to try to create Alyssa Milano from that and that's exactly what it does and the end result is my head with Alyssa Milano's face this is the traditional process for creating deep fakes now 48 hours 72 hours a week to do training may sound a little onerous it can be and indeed it takes some powerful GPUs to be able to do that if you're trying to do it on a CPU it can take even longer but there are apps out there that help with this the one that I've been using and is most commonly seen these days is one called Faceswap it's a GUI application it's super easy to use and the fact that it's built in
Python makes the whole process a lot simpler but it can be made even simpler still with this website called Deepfakes Web you just upload some training images and a training video as well and it will generate a video for you now I can tell you I've tried that I paid for it there is a cost associated with it and the results weren't great I gave them very good training material and it took I think six or eight hours to get the video back and it wasn't very convincing but it's there and you can create something if you're not looking to fool somebody if you're looking more
to do something satirical it's great for that you know another application that was built a while back called FakeApp is the blue icon here that one's kind of disappeared from the market you can still find GitHub repos that claim to have the original source code although I wouldn't trust them because most have been found to contain different forms of malware and other nastiness you probably don't want to mess with so I really wouldn't trust that one the final one I mention here is this app that was released last year called DeepNude DeepNude's whole purpose was to take still images of women and supposedly show you what they looked like undressed now the reality is
those images it was creating were just deep fakes where they took the face from the image you provided and they deep faked it on top of a picture of a naked woman thankfully that app only lasted a week as you can imagine there was a lot of backlash to something like that and the author pulled it from availability very quickly some people again claim to still have the original APKs for it I would be really careful if you go pursuing those because again they're typically filled with malware and other nasty things you probably don't want to put on your phone but deep fake technology is getting better some of you may have seen a recent article
where people are talking about how somebody created basically a real-time deep fake of Elon Musk within their Zoom session and indeed there is a Zoom plugin now that can create almost real-time deep fake images you can replace your face in your Zoom meeting with pretty much anybody you want how is it doing this well it's using a different approach so instead of mapping out all these landmarks and doing all this training and trying to create a model specific to a particular face it's taking that idea of a generalized model up another level and it's really trying to create this very generalized model that you can see third from the left here of just how to detect a face in general and create a
deep fake from it now the results right now are not terribly convincing there's a lot of visual artifacts that tell you it was deep faked lots of distortions and things but it is bringing us one step closer to the point where we can create real time deep fakes so how do we even tell what's real and fake anymore well researchers have been doing a lot of work on how to detect deep fakes and I'm going to share with you just some of the methodologies that they've been using the first thing that was looked at quite a bit was just the very obvious visual artifacts that were part of a deep fake so remember I mentioned that whole idea of using these
normalized images that are sized to a specific size and shape early on GANs weren't really good at applying those images when the face shapes didn't match and as you can see here that red arrow on the right hand side points to some of the visual distortion that occurs near the eyes when Nicolas Cage's face is put on top of Elon Musk's head so this is a case where simply the GAN and some of the image processing behind the scenes that was used just made some errors and they resulted in visual distortions and these artifacts were easily detected by detection algorithms and of course even your own eyes could detect this sort of thing but it didn't take long GANs have
gotten better and better their use of different imaging techniques to place faces and mask them off and to blur transitions so that things fit better has gotten considerably better over time and in a very short amount of time I might add so researchers moved on in June of 2018 researchers at the University at Albany State University of New York released a research paper discussing how they had done studies where they found that blinking was not appropriately replicated in deep fake videos and so by analyzing the rate at which people blinked the number of blinks per minute they could accurately identify deep fakes and there are a lot of reasons for this in part because
training sets typically didn't include pictures of people with their eyes closed but within two months of the release of that paper we started seeing deep fakes coming out where blinking was being addressed more realistically within the videos researchers continued to look at it they're now studying the rate of blinking both the velocity of how fast the eyelids move and the direction in which the eyelids move things like that to still detect fakes but of course every time our detection techniques get better so do the GANs that produce these images another issue that comes up and this again is related to that standardized imaging is this idea of warping and again those same researchers at SUNY
released another paper this time in May of 2019 where they started looking at the problem of warping as it pertains to physical facial characteristics as well as the issues of resolution so remember I talked about having images that are 256 pixels square so in a 1080p frame like this it works well if the subject is far away so we see Sharon Stone here with again Steve Buscemi's face deep faked in I don't know why he's so popular to be inserted into these videos but he is and from far away it looks quite realistic but later in this same scene when the camera zooms in on her face and suddenly that GAN has to apply a small image to
a much larger region we see the issues you can see the difference in resolution between the facial region and other areas of this frame her hair her shoulders her blouse you also see if you look at the left cheek some very obvious distortion in the shadows and the shape of that cheek these are things that detection algorithms are very easily able to pick up and so researchers have been able to do this with fair accuracy but again as these models improve they use larger and larger training sets I've seen training plugins now that go up to 512 pixels by 512 pixels and if you think about that in terms of an HD frame at 1920 by 1080 that's a
considerable size for a facial region if we're talking 512 pixels so we're going to continue to see that get better and better researchers know this so they've continued looking at things they're now looking at this idea of behavioral analytics and in this case they're not looking for visual technical issues or artifacts instead they're actually looking at the facial expressions and different facial characteristics of the speaker in the context of the content that's being delivered so researchers at Berkeley and USC released this research in the latter half of 2019 and if you look at what they're examining it's this first row of pictures of Barack Obama still images from a talk that he
gave this was a case where he was delivering a very gentle and caring message he was trying to appear very empathetic and as a result you see his eyebrows are raised his eyes are very open he has a very soft overall appearance he seems very caring the next row down however was Barack Obama delivering a stern warning to North Korea and there you see his eyebrows furrow a lot his eyes are more focused and angry looking overall on his face you see more of the lines around his mouth because of the serious message that he's delivering initially when they released this research the researchers reported a 95 percent accuracy rate they expect that to be 99 percent accurate by
the time the political season rolls around this year in the US so that's great when we're talking about high profile individuals but what about somebody for whom we can't build that big behavioral analytics model because there's simply not that much material it doesn't work so some other researchers at Cornell University are looking at how we can prevent deep fakes from being created at all and they've leveraged this idea of adversarial perturbations and what this is is simply noise that's visually undetectable to the human eye that they insert into images and that can wreak havoc on the facial recognition algorithms that deep fake generation relies on so in the first row you see images that are
untreated and you see from the green box that the facial recognition is easily able to detect the faces here in the second row you see what happens after they insert that noise suddenly there are numerous faces being detected and none of them are actually accurate to the face that appears in those frames and then of course in that bottom row you can see the actual noise that was inserted into each of these images now this is promising technology but again it's not realistic to think that we can go out there and for every person in the world take their images that are on the internet and replace them with images that are treated this way this is long term
maybe it's something we can do at the camera level but it's not realistic to think we're going to get there anytime soon and indeed there's already research being done into how to improve those facial recognition algorithms to get around this so it's great that we're focusing our research on all these technological innovations that can detect deep fakes but the fact of the matter is deep fakes are not a technological problem at all deep fakes are misinformation and the reality my friends is that misinformation is a human problem misinformation is a human problem we've been dealing with for decades for centuries even and human problems require human solutions how do we go about addressing misinformation that's the key to combating deep fakes
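One footnote on that adversarial-perturbation research before leaving the technology behind. The core trick is adding noise small enough to be invisible to a viewer. A toy Python illustration of the "invisible noise" part is below; note that real perturbations are optimized against a specific detector's gradients, not drawn at random like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(image: np.ndarray, epsilon: int = 2) -> np.ndarray:
    """Add low-amplitude noise (at most +/- epsilon on a 0-255 scale).
    A viewer won't notice it; in real systems the noise is crafted to
    confuse the face detection step that deep fake pipelines depend on."""
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
treated = perturb(img)
# every pixel differs from the original by at most a couple of intensity
# levels, so the treated image is visually indistinguishable
print(bool(np.abs(treated.astype(int) - img.astype(int)).max() <= 2))  # True
```

The budget `epsilon` is the whole game: it has to stay small enough that humans see nothing while still pushing the detector's landmark predictions off target.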
the problem with misinformation if you've followed misinformation in terms of politics or in terms of you know cold war-era attempts at propaganda the biggest issue with misinformation is that it tends to be very sticky and what do I mean by sticky well misinformation is designed to play off of our existing thoughts beliefs and perceptions it uses our current prejudices our current perception of the world to tell a story that we're already happy to believe we almost want to believe it and for that reason when we perceive that misinformation our minds do this incredible thing they build this logic map and they insert that misinformation into that logic map because it just matches up to the things
we already know and so it just fills a point in this logic map of how we get from evidence A to conclusion B and so it's embedded it fills those gaps it gives us that evidence to support conclusions that we already wanted to make as a result it is exceptionally difficult to remove that information once it's there in somebody's mind and this is the struggle of propaganda this is the struggle of misinformation this is the struggle when we try to undo the effects of brainwashing which when we talk about misinformation yeah that's not an exaggeration it does go to that level so what do we do to debunk this misinformation for starters we need to put the
awareness out there that this misinformation exists at a macro level it's just awareness that deep fakes are a thing and we see the media doing that today I'm giving this talk today April Wright is another person who gives a talk about a lot of the threats around deep fakes it's getting attention from researchers we see researchers doing all this work to produce different detection capabilities we see social media outlets like Facebook working with Microsoft and Amazon to create million-dollar challenges for researchers to produce detection techniques so we're starting to build that awareness capability and that's what we need and more specifically when deep fake videos surface we need to get that information out there as well the second key is
repetition those messages need to be repeated over and over and over again it needs to be something that is culturally aware throughout our entire society we need that awareness to recognize that there are things that we used to trust that we can't trust anymore our standard of evidence has gone up we used to look at video evidence as kind of the gold standard if you will if it was on video we knew it to be real that's why we have so many surveillance cameras that's why we see video evidence used in courtrooms to such a strong powerful degree deep fakes threaten all of that and we need people to be aware of that we need
to make people aware and repeat it over and over again that they have to question the things that they're seeing and the things that they're hearing the third step is the part that often gets missed and that is we have to take that truthful narrative and use it in a way that we can replace that misinformation in that logic chain in the person's brain so what that means is we have to understand how that propaganda how that misinformation was constructed to play off of specific prejudices specific beliefs specific value sets and we have to craft our truthful narrative in a way that plays to that so when we think about what I talked about in terms of
the threats to businesses business has to start looking at misinformation as another incident it should be part of our incident response plan we should have those strategies in place for how we become aware of it how we respond to it how we work with the media to get that truthful narrative out there to replace the mythical narrative that's been injected into the minds of those who were already set to believe it because maybe they weren't real happy with our company or they're just looking for a reason to believe that corporate America did something terrible or that our high-ranking officials did something terrible so that is the lesson to be learned here from a business perspective start looking at how you can
build this into your incident response plan and treat misinformation as an incident the good news is there are some positive intentions and positive uses for this technology and in fact many people have questioned why researchers ever created this in the first place how could this have ever been a good thing well in addition to the theoretical improvements in deep learning overall some of that idealistic academic sense there are real-world applications where we've already seen the use of this technology any Star Wars fans out there I'm betting there are in Rogue One remember Grand Moff Tarkin making a return appearance even though the actor is
dead remember a young Princess Leia showing up yeah that was deep fake technology that made that possible now there are ethical concerns here of course the good news from a Hollywood perspective is Hollywood actually seems to be taking those ethical concerns into consideration they spoke to Carrie Fisher before her death and got her permission to create this in fact Carrie Fisher was really excited that they would be able to create this video of her in her younger days and make it more consistent in the movie they also went to the actor and I apologize I always forget his name who played Grand Moff Tarkin and they did get permission from his estate from his family to use
his likeness in the movie but this isn't the only place where we're seeing it used GANs the technology behind deep fakes are being leveraged in medical imaging recent studies have shown that GANs can more accurately detect tumors at a far earlier stage than doctors better still they can predict with amazing accuracy what the growth of that tumor will be again far exceeding the capabilities of even the best cancer specialists out there today so this is really promising because this is something that can help inform better treatment options less of the you know try-it-and-pray sort of approach that sometimes is still a reality when we think about cancer treatments today and we're starting to see other research
being done around this to even expand the usage beyond just tumor detection medical imaging will improve but it's not just medical imaging there are other cases in the health care space where we're seeing this used we're seeing augmented reality start to grow in terms of telehealth options and what you see here is an augmented reality based telehealth solution that uses deep fake generated video of a doctor the doctor that this person is looking at doesn't even exist that's not even a real person but it's a doctor's image that was constructed by a GAN to maximize the friendly nature the inviting nature of that doctor's face the goal here being that people will be more apt to open up
and to more openly share information about their current health situation via telehealth that's important if we think about the current situation we're in right now with COVID and the number of us who are having to use telehealth options today to speak with our doctors making people more comfortable in those situations and more able to accurately express to their doctors what's going on is crucial then finally we've seen political application now you could argue and there is discussion for sure about the ethics behind this and whether this is really a good thing I look at it and I think it's very innovative at minimum and I actually think it's a good idea this is
a politician from the country of India and in his district he represents a constituency of people that span twelve different languages if you know anything about India there are numerous dialects that are spoken across the country so he doesn't speak all 12 of those dialects so he created an address to reach out to his people he then had that address deep faked both audio and video to make it appear as though he were delivering that same address in each of those 12 different languages that make up his constituency he was wildly popular as a result people were very excited to hear one of their leaders expressing a message in their native tongues something many of them had never
witnessed before this was truly groundbreaking and I think we'll see more of that to me that's a positive use because there was nothing dishonest about it he didn't suggest for a moment that you know he was speaking in all these other languages it was made known that this was how this was created but it was just a way for him to more directly connect to his people so as I start to wrap up here we're just about out of time I want to leave you with this quote this quote is from Yuri Bezmenov he's a former KGB agent and this was him discussing the mass brainwashing techniques of the former
Soviet Union back in 1984 they called it their ideological subversion program and this goes to the heart of the problem with misinformation when we can no longer trust the information around us and we don't trust our own senses we start to make decisions that are no longer in our own best interests that is the breakdown of so many systems that our society is built on so it's important that we be aware of this it's important that we focus on the human solutions to how we combat deep fakes some other materials for you really quickly the hashtag #ProjectDeepFake if you follow that on Twitter or LinkedIn you can see some of the results of the
projects I've been working on I do have a system here with some dedicated GPUs that I have been doing some deep fake work with seeing just how realistic I could get those videos to be you can see some early results the Alyssa Milano video that I created if you go to that youtube link I also have a link to these slides they're available both in Sched for this conference but also if you'd rather you can use that link to go get them from github they're available to you I'm happy to share those if those are at all helpful to you additionally some references for you all the different studies that I have cited in this talk today if those are useful to
you by all means please go ahead and check those out as well a lot of great reading some of it gets pretty deep into some machine learning concepts so I will warn you it can get kind of dry just be aware of that as well and then finally an open invitation to continue the conversation the reason I get out and I speak at conferences like this the reason I work as an advocate is because I love sharing ideas and hearing other people's ideas I love to have my ideas challenged and to challenge yours if there's knowledge that I have that can help you with things that you're working on please feel free to reach out
my twitter handle is up there my DMs are always open my LinkedIn if you'd rather reach me there that's fine too and a link to my website as promised and then finally on behalf of my employer myself and BSides Knoxville just a great big thank you all I really appreciate your time and attention today with that if there are any questions I think we're happy to take those at this time yeah wow almost down to the second timing there that's perfect and I think we do have some questions I had one just off the top of my head I think one of my biggest concerns especially talking about you know using this
technology for manipulation the media game is all about exclusives and scooping and getting it out there as quickly as possible like how could we ever pump the brakes on that kind of thing or educate them to the point where they can not be fooled by this I mean already with the amount of media that exists out there that's intentionally fake in the first place posing as real media now we've got to worry about the real media getting duped what are your thoughts there so yeah it's twofold right first of all there's the media who wants to do a good job and you know their problems there they're the ones
that are being addressed right now in part by different research like I mentioned the challenge that Facebook Microsoft and Amazon are doing part of their goal there is to release that not only just for social media but to the greater media community as a whole so they can authenticate videos and things before running with a news story you know I think the media has also already kind of gotten the message that they need to do even more than that though and really authenticate the source of videos authenticate the story through other means as well before they just go and run with it now the media that doesn't care if it's real or not
and we can all think of a few examples I'm sure the best example of that is the Nancy Pelosi video right that wasn't even a deep fake that was just manipulated to slow it down but even after news came out that that was fake we saw multiple right-wing news sources releasing that video claiming it to be true you know countless millions of people in their audience re-sharing that content claiming it to be true and believing that it was true so that becomes the harder issue and that's where you know some of the greater aspects of trying to fight misinformation in general where social media is now taking and sometimes removing that media and so
forth that's gonna have to continue to accelerate the other challenge we have though honestly is that there's some gray area here that social media is struggling with where is that line between free speech and acceptable you know use of these different videos to create satire versus what violates copyrights and privacy laws there's definitely legislation needed here unfortunately lawmakers right now are only focused on election security there's nothing beyond that and so there's a lot of work to be done there as well yeah definitely and just to let you know the discord is having a great time with this but I'm gonna have Steve Buscemi nightmares tonight I know I keep seeing that if
that's like the third time now it's passed by in my peripheral vision I've seen that that one in particular is horrible I don't know why he's such a popular subject for deep fakes it's just everybody likes him and Nicolas Cage he just has a very unique like quirky look which is why he gets so much work in Hollywood right I guess yeah so yeah I think we had some other questions here let me see what I've got somebody asking how many COVID-19 related fakes have you found you know honestly I haven't seen a deep fake yet from that and I'm actually surprised like I've
been kind of watching for like Dr. Fauci to show up somewhere deep faked like you know saying some nasty stuff about the president or something like that but surprisingly there hasn't been as much lately I don't know what's kind of toned that down but yeah please let me know if someone sees some because I would definitely be interested in knowing about that so somebody's asking they're interested in how the noise is applied to live images in order to prevent faking so it's basically like a summation you're taking two images you layer them together and you apply it
so say you were gonna do it in Photoshop you would literally take two layers and just blend them by applying I'm trying to think how the mask works exactly I apologize but it's basically a masking technique applied to those pixels the static gray that you saw is added just to make them more visible but the way they're added is just individual adjustments to those pixels it's a slight change in coloration and I believe it depends on whether you're talking RGB or CMYK as to how exactly that gets adjusted but yeah it's just taking that and applying it over top so I think the follow-up
question to this and I think I know the answer the answer is probably going to be yes is whether or not like major social media you know services can apply that as you upload that's what they're talking about doing the research on it is still not complete but I think that's yeah as you read through some of the follow-ups to the study that was one of the suggestions and I know Facebook at least had released some commentary about the possibility of doing exactly that that's the hope the issue comes in that you would also have to do it with video and that's where it becomes a little more difficult because a lot of the training right now like
when I did mine in FaceSwap I used video and I had it extract still images from video for the training set and so you know in that case you would have to have every frame at 30 frames a second or 60 frames a second and apply that to it and that could get pretty time-consuming yeah and it sounds a lot like we're getting started in a game of leapfrog with techniques to prevent it you know very similar to what we've seen with malware and other stuff where it takes us six months to come up with a defense and a day for them to figure out a way around it yeah
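[editor's note: the per-pixel perturbation and per-frame cost discussed above can be sketched roughly like this. This is a minimal NumPy illustration, not the actual research code; the epsilon bound, frame sizes, and random noise pattern are all illustrative stand-ins, and real anti-deepfake work uses crafted adversarial patterns rather than random noise.]

```python
import numpy as np

def perturb_frame(frame, noise, epsilon=4.0):
    """Add a small per-pixel adjustment to one frame.

    Each pixel shifts by at most `epsilon` intensity levels, a slight
    change in coloration that stays invisible to a human viewer.
    """
    perturbed = frame.astype(np.float32) + epsilon * noise
    return np.clip(perturbed, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)

# Toy "video": 30 frames of 64x64 RGB. A real 10-minute clip at 30 fps
# would be 18,000 frames, which is where the per-frame cost comes in.
video = rng.integers(0, 256, size=(30, 64, 64, 3), dtype=np.uint8)

# One fixed noise pattern in [-1, 1] (placeholder for an adversarial
# pattern), layered over top of every extracted frame.
noise = rng.uniform(-1.0, 1.0, size=video.shape[1:]).astype(np.float32)

protected = np.stack([perturb_frame(f, noise) for f in video])
max_shift = int(np.abs(protected.astype(int) - video.astype(int)).max())
print(max_shift)  # no pixel moves by more than epsilon levels
```

[to visualize the pattern itself, the "static gray" view mentioned in the talk would just be the noise rescaled around mid-gray, e.g. `(128 + 100 * noise)`, amplified only to make it visible.]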
well and the worst thing here is when you think about how GANs work where you have kind of that detector if you will that discriminator network it's already making the attackers better because it's doing that instantly as part of that training so a lot of the techniques as we build these detection techniques well people take those they build them into the discriminators and now that just updates the generator's model to account for those additional elements and so you're right it's a constant cycle of attackers making the defenders better and the defenders making the attackers better and it just happens at a faster pace now because
we're talking machine learning yeah all right looking for other questions here I don't think we have any more I think that's it definitely a great conversation in track one on that and if you want to check that out later there's a lot there well maybe I don't know if there's a way to turn off the images as you browse it skip the doubled-up Steve Buscemi eyes and mouth one yeah yes yeah all right that was awesome I appreciate it thanks for bringing such an awesome talk and excellent delivery and we've got plenty of time before the next speaker here all right I appreciate it thanks again everybody so much thank you
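[editor's note: the generator/discriminator feedback loop described in the answer above can be made concrete with a toy one-dimensional GAN. This is nothing like a production deepfake model; the linear generator, logistic discriminator, distributions, and learning rates are all illustrative, but the alternating updates show how every improvement in the detector immediately flows back into the generator.]

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real "data": samples from N(3, 1). The generator is a linear map
# a*z + b of standard-normal noise z; the discriminator is a logistic
# classifier D(x) = sigmoid(w*x + c). Toy stand-ins for deep networks.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(4000):
    x_real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: the discriminator's own judgment is the
    # training signal, so every detection improvement built into the
    # discriminator immediately makes the generator better too.
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean((d_fake - 1) * w * z)
    b -= lr * np.mean((d_fake - 1) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 2))  # drifts from 0 toward the real mean of 3
```

[the generator's fakes start centered at 0 and are pulled toward the real distribution purely by the discriminator's feedback, the same dynamic that makes detection techniques, once folded into a discriminator, improve the attacker's generator.]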