
Shattering Trust: Live Deepfakes and the Fall of Legacy Verification

BSides Charleston · 2024 · 38:08 · 192 views · Published 2024-11 · Watch on YouTube ↗
About this talk
Paul Vann explores how live deepfakes and AI-powered impersonation are compromising modern identity verification systems and enabling fraud at scale. Drawing on real-world case studies—from the MGM ransomware attack to deepfake-assisted hiring fraud—he demonstrates how adversaries exploit video conferencing platforms like FaceTime using accessible open-source tools. The talk covers the deepfake landscape, technical attack methods, and practical detection strategies for organizations and individuals.
Original YouTube description
Paul Vann, CTO and co-founder of Validiti, explores the escalating threat of live deepfakes in compromising modern identity verification systems. With over a decade of experience in cybersecurity, Vann delves into the evolution of deepfake technology, from simple lip-sync videos to sophisticated real-time face swaps and audio cloning. He discusses the rapid advancements in generative AI tools, which have made it alarmingly easy and affordable to create highly realistic impersonations, posing serious threats to businesses and individuals. Vann highlights real-world cases where deepfakes have been used for financial fraud, corporate espionage, and even during hiring processes to infiltrate organizations. He demonstrates how adversaries exploit platforms like FaceTime by bypassing security measures through virtual camera setups, effectively eroding trust in traditional verification methods. The talk underscores the importance of staying vigilant and educating oneself and employees about identifying potential deepfakes.
Transcript [en]

Unknown: I'll get this really quick.

Paul Vann: Awesome. Hey everyone, how's it going? Good, good. Thank you guys for having me today. Today we're going to be talking about live deepfakes and how they're being used to break modern verification mechanisms, and we'll walk through the deepfake landscape today: where things are going, how these types of attacks have been used in the past, and how we anticipate they'll be used in the future.

Just a little bit of an introduction on myself: my name is Paul Vann. I'm the CTO and co-founder at Validiti. We are a deepfake cybersecurity startup focused on identity validation in communication channels. I have 10 years in the cybersecurity industry, spanning threat intelligence, threat hunting, EDR, and XDR. I spent some time over at Cybereason a year or two ago, and at ThreatConnect in the past, so I've kind of, what I like to say, followed the path of emerging technology in the cyberspace. I hail from the Northern Virginia area, so I made the drive down earlier this morning, which was pretty long, but I'm excited to be here and excited to share this with you guys. Please feel free to ask questions throughout the talk. [Audience: What part of Virginia?] Fredericksburg. Fredericksburg, yeah, right between Richmond and Washington, DC.

Awesome. Well, to kick things off, I wanted to start by introducing the rising threat of deepfakes: where people see this threat today, how it's being interpreted, and where we're seeing it throughout the world. Over the last few years, we've seen a massive increase in AI-powered fraud across the world. We're seeing a lot of it happen in the APAC region, and we're starting to see it move into the US as well. We're also starting to see detection companies come up, like ourselves, trying to tackle this problem. But frankly, it's a very new problem space. No one knows what the right solution is yet, and there are a lot of different ways to tackle it: you can put something on-device, or you can do in-conference or in-call verification.

The other thing is that business leaders are perceiving pretty much all aspects of AI impersonation, whether it be voice cloning, live video, or impersonation in static videos online, as a major threat. I'll get into some of the attacks we've seen thus far, but people are starting to interpret this as a threat, and one of the organizations we've talked to has even gone as far as to set up multi-factor for their video conferencing meetings, inside the meeting. When people join the meeting, they get put into a breakout room; in that breakout room they fill out a multi-factor code, and then the admin has to verify and let them in. It's very frictionful. So deepfakes are causing a lot of problems, people are concerned, and we'll get into how those manifest today.

My objectives for today: I want you guys to learn a little about what a deepfake is, how they manifest, and how we've seen them in today's modern environment. There are a lot of different types of deepfakes out there, a lot of different ways they're created today, and a lot of different ways they're used, so there's a lot to tackle there. We'll also get into live deepfakes and exploiting FaceTime to use them. One thing I'll open with: a few of the customers we've talked to, when we first started talking to them, told us that when they don't know if they're talking to their employee or someone they work with, they'll give them a FaceTime call. That led us to take a look at: how can we break FaceTime? How can we get a deepfake on there? We'll talk about how we did that today as well. And finally, I want everyone to take something away on how to protect yourself against this. One of the most important things, frankly, is that deepfakes are so new that a lot of people don't know what they look like or how they pop up, and being able to spot and identify them in at least some meaningful way is incredibly important.

So let's jump right into understanding the deepfake landscape today: what is a deepfake? In this section we'll walk through a couple of different types of deepfakes and some of the commonly used tools, which you can all actually use at home to create similar ones. The first type is lip-sync videos. I started with lip-sync videos because they've been around the longest, they tend to be the ones you see on social media, and they're the ones you're seeing in some of the more political disinformation and misinformation campaigns.

A lip-sync video takes an initial video of an individual, takes an audio sample, whether it's just someone else talking or a deepfake audio sample, and matches that person's face so it looks like their lips are saying those things. This is incredibly dangerous, because you can take something like this video of myself here and instantly throw any audio, any voice, anything you want me to say on top of it, especially with the wide availability of audio and voice cloning. One thing that's pretty important to note, which we've recently realized with lip-sync deepfakes: in the past, you needed a video sample of somebody to create the lip sync. Now you can take someone's LinkedIn profile picture, turn it into a video, a very highly realistic one, to be quite honest, and throw someone else's voice on it, or that person's own cloned voice. So they're incredibly dangerous. We're not seeing these as much in live scenarios; again, these are the ones you'll see on social media. A great example of a simulated attack we've run is sending an email to employees of a company with a lip-sync deepfake of their CEO, trying to get them to go to a new link that appears to be part of the organization and type in their username and password, one of the more traditional forms of attack, but with a little extra push.

Here's an example I actually used in my first FaceTime deepfake, where I gave my co-founder a call and basically told him I was changing my phone number, got a new one and listed it out. This is both deepfake audio and lip sync: "Hey, Justin, sorry for the bad service here. I got the new iPhone 15, which is super cool, and had to switch over my number. My new number is plus one, 123-456-7890. I'm on WhatsApp, let's chat there."

The point of all this being: it took me 15 seconds of my voice to create that audio clone. I just needed 15 seconds from a past talk. And I didn't even use a video for this; I used a static image of myself, put it through a model, generated a video, and then attached the lip sync on top. So it's really scary. Again, you're not seeing them used as much in live scenarios; however, we're starting to see some companies in the space create real, live versions of these clones, which some people are using to look like they're working and some people are using for attacks. So a lot of different use cases there.
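To make the pipeline concrete, here is a minimal sketch of the kind of lip-sync step described above, driven from Python. The talk doesn't name a specific tool; Wav2Lip is one common open-source choice, and this assumes you've cloned its repository (github.com/Rudrabha/Wav2Lip) and downloaded the pretrained checkpoint. All file paths are hypothetical placeholders.

```python
# Minimal sketch of a lip-sync step like the one described above.
# Assumes the open-source Wav2Lip repo is cloned locally and its
# pretrained checkpoint downloaded. Paths are placeholders.
import subprocess

FACE_INPUT = "target_person.mp4"   # a video, or even a single still image
CLONED_AUDIO = "cloned_voice.wav"  # e.g. output of a voice-cloning service
OUTPUT = "lipsync_result.mp4"

subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
        "--face", FACE_INPUT,      # Wav2Lip accepts a still image here too
        "--audio", CLONED_AUDIO,
        "--outfile", OUTPUT,
    ],
    cwd="Wav2Lip",  # run from inside the cloned repository
    check=True,
)
```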

The next type, and the one I think is most important to talk about today, especially with FaceTime deepfakes, is real-time face swapping. This one is a little newer in the space. We've seen things in the past like Snapchat filters that let you face swap with someone, but those are highly unrealistic; they take a version of someone's face and just kind of stretch it across yours on an image. Some of the new real-time deepfake models and tools that have been released open source and publicly are very high quality and don't require a particularly significant system. I'll show an example today running on a three-year-old gaming laptop, and buying any sort of GPU server, anything with a little bit of power, can get you a really high-quality deepfake.

I'll show this example here and also touch on how it was created. If you're familiar with the show The Boys, you'll recognize the actor on the right, but this is a real-time deepfake of an individual: the left is the original, the right is the deepfake. I think this sample, and I'll pause here, is super important to show, because these deepfake algorithms and models are now able to interpret different lighting and face movements, which is not something they've always been able to do, and that's starting to cause some of the confusion and allow some of these attacks to start occurring. Another important thing to hit on, which we'll get into as we look at some of these, is that they're only improving, and new features are being added as adversaries start to realize, "hey, I need this, I need that." We've actually watched this progression happen as we've seen comments on these tools online.

One other thing you'll probably notice right here is that this does not require a video of an individual. It doesn't require multiple photos. It requires one image of their face. A LinkedIn profile picture could be enough. An Instagram photo could be enough. Whatever it may be, any one photo of you on the internet could be used to create one of these highly realistic clones.
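As an illustration of how little the single-image requirement demands, here's a minimal single-frame face-swap sketch using the open-source insightface library and its inswapper model, the same family of technique the live tools build on. The image paths are assumptions, and the inswapper_128.onnx model file must be obtained separately.

```python
# Minimal one-image face-swap sketch using the open-source insightface
# library. Paths are placeholders; inswapper_128.onnx is downloaded
# separately.
import cv2
import insightface
from insightface.app import FaceAnalysis

# Detector/embedder for finding faces in both images
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

# The swapper model that pastes the source identity onto the target face
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

source = cv2.imread("linkedin_photo.jpg")   # one photo of the impersonated person
target = cv2.imread("my_webcam_frame.jpg")  # frame to put their face onto

src_face = app.get(source)[0]
for tgt_face in app.get(target):
    target = swapper.get(target, tgt_face, src_face, paste_back=True)

cv2.imwrite("swapped.jpg", target)
```

A live tool is essentially this same swap run on every webcam frame.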

The next one I'll hit on is a little less pertinent to communication channels, but still important to mention: generative AI image creation. When we get to the hiring aspect of all of this, it'll make a little more sense how this plays in, but generative AI image creation has gotten so significantly good that it's now able to replicate driver's licenses, or contribute components of driver's licenses that adversaries are forging. With some of these generative AI image tools, we're seeing a lot of fake passports and fake identification, and sometimes, not always with generative AI image creation alone but in combination with LLMs, fake bank reports and fake documents. So this is an important one to hit on when we get to the hiring side of things. I'll also add that all three of these people do not exist in real life; they were all generated by a tool.

And finally, one that I think is also very important to the conversation we're having today: audio cloning. Audio cloning has been around for a little while, and I think recently it's gotten substantially better at being able to actually mask or encapsulate someone's voice with very few samples. Today's audio deepfakes generally take around 30 seconds of audio for semi-good quality, and anything over five minutes is going to give you pretty much exactly what you're looking for. Another thing that's pretty cool with these: in the past it was text-to-speech, and the speech-to-speech was very poor. Now you can encapsulate emotions. I like to call myself a voice actor these days, because I can go in and showcase emotion, basically express exactly how I want that person to express it, and just throw their audio right on top of it.

Here's a very quick sample of me actually creating a lip-sync video, a little poor quality, but also attaching and creating that audio. You open up a new project. Unknown: We will upload our "Paul reel" MP4 file of me speaking, and then we will upload the... or we'll actually have to go back over to ElevenLabs here. Paul Vann: And this right here is really what I wanted to hit on, the text-to-speech aspect. Not only are you able to control and choose whatever they're saying, they also now have conversational AI, where you can load in a bunch of information on yourself, create a voice, and genuinely call yourself and have it hold a conversation with you. You can prompt it the same way an LLM works, and it sounds and appears to be you, based on what you provide it.

So the audio cloning aspect is definitely significant. But one other thing to hit on: while adversaries have gotten really good at live video deepfakes, live audio modulation, being able to sound like somebody else in real time, is probably the next big thing adversaries are pushing for right now. It's really difficult to do, it requires a lot of compute power, and frankly there aren't a lot of available open-source tools for live audio modulation today. But there are a lot of free or $1-per-month tools like ElevenLabs where you can just mess around, throw audio samples in, and generate some really high-quality deepfakes.
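For a sense of how simple the text-to-speech side is once a voice has been cloned, here's a minimal sketch against the ElevenLabs REST API. The API key, voice ID, and model name are placeholders, and endpoint details may differ across API versions.

```python
# Minimal sketch of generating speech in a cloned voice via the
# ElevenLabs REST API. Key, voice ID, and model name are placeholders;
# exact parameters may differ by API version.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # hypothetical
VOICE_ID = "your_cloned_voice_id"     # a voice created from a short sample

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={
        "text": "Hey Justin, sorry for the bad service here. "
                "I got a new number, let's chat on WhatsApp.",
        "model_id": "eleven_multilingual_v2",  # assumed model name
    },
)
resp.raise_for_status()

with open("cloned_message.mp3", "wb") as f:
    f.write(resp.content)  # the API returns raw audio bytes
```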

Awesome. One thing I also wanted to hit on, which I think is important, is the history of deepfakes and why it matters. A lot of people note that deepfakes have been around since 2016 or 2017, and in this image, people argue that with the first video-rewrite program, that's the creation of deepfakes. But what I think is important is that over the span of 2014 to 2020, 2021, we saw the creation of these generative adversarial networks, and we saw examples of deepfakes, but they were really technical and really hard to create. You had to have a very deep technical knowledge of AI at the time, and you had to put a lot of effort into it. I remember when I tried it a few years back, I was like, I don't even know what I'm doing. The shift that's happened is that all of these tools are not only easy to use, they're inexpensive. You can clone audio and run these live face swaps for $1 a month. That's significant, and it's why this problem has become a lot bigger in the cyberspace.

Before we jump into some of those examples, though, I'll quickly hit on how these are made in the first place and how the models actually work. It really just comes down to data at the end of the day. If you can find data on an individual, whether it's an image, a video, or an audio file of them, it's incredibly easy to create any of the deepfakes I showed there. You can fine-tune some of these generative AI image creators with images of an individual and generate completely new, non-existent images of them. You can take just one image of someone and create a live face mask. Usually what will happen when adversaries are doing this, and this is what I do when I go through creating these for an attack, is they'll create a base one and do a lot of iteration on it. A lot of times these deepfake models have little elements you can't control; AI models are, unfortunately, black boxes, so you have to kind of hope they spit out something awesome, and if not, keep refining. But they're very easy to make, very quick. What it really comes down to at the end of the day is: how much training data can I get on you? The more I have, the better I'll be able to create a deepfake of you.

Now let's chat about the business impact of deepfakes, where this plays into the cyberspace, and the threat that exists today. According to Deloitte, losses from deepfakes are expected to grow 36% year over year, which is incredibly significant. I'm sure some of you have seen some of this in the news as well.

We've seen a $25 million wire fraud loss. We've seen a cybersecurity company, which I'll mention later in this slide deck, hire an individual who was using a deepfake. And recently we talked to a G-SIB, a globally systemically important bank, and heard that one of their high-net-worth clients was deepfaked on a call; they were on that call for 15 minutes and were not able to figure out why or how that individual was on it. So the losses are expected to grow, the algorithms and the models are getting better, and frankly, the only way people are identifying these today is through very small slip-ups.

A great example I like to use is Wiz. They had a news article go out two days ago: the Wiz CEO spoke at TechCrunch and mentioned that an audio deepfake of himself was sent to a lot of the employees, trying to get their credentials. What ended up catching it is a super small thing. The CEO said that when he speaks in public he has public speaking anxiety, and when he talks to his employees he doesn't. And, this is in the article, this is where I found it, the employees were able to identify the fake because of the presence of that anxiety in his voice. Which I think is super cool, by the way, but on the other hand it's very indicative that these are only getting caught on small things. We've already figured out the emotion side of audio and how to remove certain aspects of it; as people start to utilize that, things are going to change.

Some of the areas where this can be used: one is financial loss and reputational damage. Again, we've seen fraudulent wires sent out of organizations, and one of the toughest parts is when somebody gets on a call with an individual. We saw this in Hong Kong, actually: an individual got on a call with five people he believed to be his coworkers, one was the CFO, and I believe one was his boss as well, and they were all deepfakes, and they convinced him to send that money out of the organization. We've seen it many times now, and we've seen a lot of organizations targeted as well. Some of them are catching it because these AI models aren't perfect yet, but they will be, and that's really the problem we're targeting today.

There's also loss of business opportunities. There was a large pharmaceutical company last year, and it wasn't even a deepfake that was used, actually: when Elon Musk released verified Twitter accounts that you could pay for, somebody created a fake Eli Lilly Twitter account and posted that insulin is now free, and it caused their stock to tank, at least for that day. Now imagine if a deepfaked press conference of the CEO went out there. There's a lot more you have to fall back on to convince people: that wasn't our CEO, insulin is not free. So financial losses can definitely be big. And another big thing is operational disruption. If you have someone within your organization who shouldn't be there, in your communication channels, your video conferencing platforms, on phone calls, that can cause not only a lot of stress but a lot of potential issues. Without any way of identifying or verifying them, you're constantly asking yourself: is that actually Paul? Is that actually the guy I work with every day? Am I on the right Zoom link? It really makes people start to second-guess, and it creates this erosion of trust in digital communications.

Just a couple of case studies here. Funny enough, this one did not use a deepfake, but I think that's exactly why it's important to show. One of the biggest attacks of, I believe it was last year, was the MGM ransomware attack. It was all over the news, and it caused a major disruption for MGM, and I believe Caesars got targeted as well. On the MGM side, there's a lot the attackers did once they got in, but long story short, the way they got in was voice phishing. They called an IT admin and convinced them to reset their Okta codes. As soon as the Okta codes were reset, they were able to get in and configure new accounts under, it was really weird, actually, a separate Okta umbrella they were able to get under. But the long story short of all of this is that they didn't even use a deepfake. Imagine if they had; that would have passed for sure. There's not even a chance they're stopping it at that point. And as much as we want to say that the individuals we work with on a regular basis are trained, or would be able to say "hey, I know you're a deepfake, I know that's not actually Paul," they don't, and frankly these are so new that you really can't tell. So training is big. Being able just to see samples of these, and we actually have a YouTube channel with a bunch of them, is incredibly important.

Another case study that's really interesting is hiring. We've had a lot of conversations about hiring with CISOs, and they've said, "you know, I've never really thought about our hiring process and the security of it, because there are background checks." But the reality of background checks is that they're not that good, for one; we've seen North Korean adversaries pass them time after time. And if you combine a background check with someone using another person's real ID and showing up as a deepfake on a video conferencing interview, now you've got an entirely different problem: that background check does absolutely nothing, and they wouldn't even have had to forge a document. We saw KnowBe4 targeted earlier this year. And more recently, the organization's name wasn't disclosed, but there was another company that hired an individual and got held up for ransom as soon as they hired that person. KnowBe4 got lucky: they hired the individual, sent a laptop out, and it was instantly loaded with malware; however, that malware did not leave the laptop it was on. So I think those are two really important case studies, or sides of the deepfake problem, where I'd say there's the most risk today.

Out of curiosity, has anyone you know, or any of you yourselves, seen a deepfake or heard about a deepfake example? There are a lot of examples targeting individuals and family members as well, not just organizations. But I'm curious, show of hands, if anybody's seen one themselves or received a call. Awesome, awesome. Well, if you haven't seen one yet, I hope you don't see one in your near future, but I do hope you see some for training purposes, and we'll see some here as well.

So, jumping into live deepfakes and using them with FaceTime. The first thing we'll start off with is live deepfakes, how they actually work, and how easy they are to use.

fakes and how those actually work and how easy they are to use. There are two main open source. Repositories today that I would say are incredibly good for live deep fakes. Anybody can run them. I'd say for, like, device, what device you'd need? M2 m3 are perfect for running this. There are going to be, there's going to be, like, a little bit of a lag. GPU, laptop, anything with at least somewhat of a GPU can usually handle this. I'm running this standalone here, and I'll actually show you guys here. I can open this up. Sorry about that, bring this back. There we go. So there are two main repositories, and I will make sure I share these as well after the talk as well. But there's

deep live cam, and then there's a fork of that that I actually think is pretty significant and worth exploring. Deep live cam is, again, it was, it was actually the number one trending repository on GitHub for a little while. I think it may still be, and it is incredibly easy to set up for one. Not only do they have this full repository where you can set it up yourself, and it has instructions for every single device. Again, here's one of that example the examples we showed earlier. Um, earlier, but it also they have a commercial version now where there's no setup. You can spend $15 and they will just send you a deployed version that you can

just run on whichever operating system you use. So if you don't want to go through the setup process, it's super easy. But what I'll do is I'll actually showcase this here. Let me see if I can pull this over. This right here is what deep live cam looks like when it's running on my computer. Now, some of you may be familiar, but this is Kevin Mandia. We've got a couple of just big tech guys that I have used as examples today, and there's a couple really cool things that you can do with deep live cam. For one, not only can you do live, but you can also select a target image or video, if you don't want to do live and

load that person's face onto it, and very, very high quality. Frankly, that's probably the easiest use case to use. But you can also now do many faces. So you can upload two different pictures, or one like one picture with two people in it, and then have two people inside of the call, and you can both look like that. That other person look like different people, and each be wearing a deep fake mask. You've got face enhancers. Now, for those who have a really powerful GPU, those actually do a lot more work and make it look a lot more realistic. And so what I'll show you here is let me see if I can pull this up. But I've actually got this. I've had this running

the entire time. It's been staring me in the face. But hold on, yeah, it'll be a little bit slower than normal, because we're on HDMI, but what you'll see is, this is a live deepfake example, again, running on a pretty low, powerful, low power GPU laptop. We've run examples of these against some employees and organizations and really convince people, especially when you're not talking and just looking at the camera, that it actually is the individual you think it is. And I think another thing that's interesting is it's super, super easy for you to go and change out who you're being. It usually takes around like 10 seconds for that switch to occur. If you guys are familiar,

this is Alexis Ohanian, the founder of Reddit, and we will see this spin up here shortly.
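For context, here's a minimal sketch of launching Deep-Live-Cam from a cloned checkout, based on the repository's documented run.py entry point; the execution-provider value depends on your hardware, and exact flag names may differ across versions, so treat this as illustrative.

```python
# Minimal sketch of launching Deep-Live-Cam's GUI from a cloned checkout,
# using the repo's run.py entry point. Flags may vary across versions.
import subprocess

subprocess.run(
    # "cuda" for NVIDIA GPUs; Apple Silicon uses a CoreML provider instead
    ["python", "run.py", "--execution-provider", "cuda"],
    cwd="Deep-Live-Cam",  # path to the cloned repository
    check=True,
)
```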

So, as you can see... one thing I think is actually really interesting is that facial hair used to be something these tools couldn't handle. I know because for anyone with facial hair that I tried to deepfake, it just used my face as-is, so I wasn't able to get anything there. What's interesting now is they've gotten a lot better at adapting to those very minute, minuscule features. I'll also add that usually you won't see these lines going through here; I think it's because I'm on HDMI, but I'm not positive.

Usually what we'll do with these deepfakes is load them into virtual camera software, and then you can jump right into Microsoft Teams, and that is your camera. You can also load up a virtual mic there; a lot of the virtual camera software offers one. You can legitimately stand here and just look like that person. And again, live audio modulation is a little harder, but for one, we're working on an open-source side of that, and aside from that, people are able to create audio samples and just lip sync to them in real time. I've actually done it a few times now.

Now jumping over, one thing that's important to hit on as well is how this tool gets updated, because that's one of those things where, when I downloaded it, I never looked at the updates people were making, the new pushes coming through, or any of the forks. And frankly, I think this is the coolest fork of Deep-Live-Cam right here. What you'll see is they've now updated it with a ton more features. One of the problems adversaries had actually noted, and we found some messages pertaining to this, is that when you have the deepfake mask on, it makes it really hard to move your mouth. You could kind of see in that video with the old Deep-Live-Cam that my mouth was moving, but it wasn't really matching my words; it was a little slow, a little delayed. So what this fork does is enable a mouth mask: you can set how big you want the mouth to be on the individual you're deepfaking and get perfect-quality mouth movement when you're talking, or when someone's actually trying to impersonate somebody. The reason I wanted to show you guys this is that it speaks to how quickly this is improving; we're seeing it constantly get better.

Now I'll jump back over to the slides here. I'll make sure I send you guys these links if anyone wants them; they're very easily findable. Deep-Live-Cam, again, is the most popular GitHub repository right now, and there's been a lot of good and bad buzz around it, past and present. Oh, we've got to go back through, sorry about that... awesome. Oh wait, two more, two more. Sorry about that.

Perfect. So now I wanted to hit quickly on the problem with deepfakes on FaceTime and why I wanted to address this specific topic in today's session. We've talked to a few customers, and I mentioned this earlier, but one of the things they said is: "If I don't trust someone on Microsoft Teams, if I don't trust someone on Google Meet, if I don't trust someone on Slack, I know I can trust FaceTime. I'll just give them a FaceTime call to their phone number, and we'll verify there. There's no way someone can impersonate somebody else there." And that got me thinking: really, the problem is not how you do a deepfake on FaceTime; the problem is how you get a virtual camera to be used with FaceTime, because Apple hates using anything but their own devices, which is good. Frankly, it's really good that they don't allow those things. However, we wanted to take a look at it. We started diving through some Reddit threads, looking at existing solutions, and there weren't really many, except one involving a 2010 Mac Mini, which I did not have access to, so I had to form one of my own.

The solution in this scenario came down to a couple of different things, and anyone can actually do this, although it's arguable whether I recommend it; I'll hit on that as we go through. The first piece, frankly, is disabling System Integrity Protection on the Mac. One of the things Apple has done a really good job on is protecting a lot of what goes into your applications, especially on Mac devices. I'll hit on how you can disable SIP and why you have to disable it for this specific use case, but that first piece of the solution was basically creating what I'd call a more insecure, a very insecure, MacBook for this use case. The second thing is that with System Integrity Protection enabled, there's no ability to set up a virtual camera with FaceTime, but even when you disable it, it's still incredibly difficult to find a virtual camera that works, because Apple has done such a good job that most virtual cameras don't actually work. Finding and installing a virtual camera that's not blocked by macOS involved looking through a lot of GitHub repositories, a lot of open-source, half-open-source, or previously open-source tools, to figure out which one would get through. And the third piece is deploying a live deepfake, or any deepfake, in the correct view.

FaceTime is a little different from Zoom or Microsoft Teams, because you have the phone view and the desktop view, and you have to accommodate based on who you're calling and what device they're going to pick up on. So that's the other part of FaceTime.

Jumping in: for disabling System Integrity Protection, again, this is arguable, and I don't recommend it, or at least, if you guys do it, it was not recommended by me. Not because it'll do anything bad on its own, but because it creates a much more insecure Apple device; if you do disable it, I'd recommend re-enabling it after you try this out. Basically, you reboot your MacBook into Recovery Mode, open the Terminal after rebooting, run that command, and then restart the computer in regular mode, and you should have System Integrity Protection disabled.
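The command on the slide isn't captured in the transcript; for reference, SIP is controlled by macOS's csrutil utility, where the disable step (csrutil disable) only works from the Recovery Mode Terminal, while the current state can be confirmed from a normal session, as in this small sketch.

```python
# Small sketch for checking System Integrity Protection state from a
# normal macOS session. The disable step itself (csrutil disable) only
# works from the Terminal in Recovery Mode, not from here.
import subprocess

out = subprocess.run(
    ["csrutil", "status"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # e.g. "System Integrity Protection status: enabled."
```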

Then you can go to the FaceTime app on your Mac, and up at the top, under, I believe, the Video menu, you can choose which camera or which microphone you'd like to use. I have a lot of virtual cameras, about six now, and I saw all of them there, I tried all of them, and none of them worked. OBS was the first one, probably one of the more common virtual camera tools, and it did not work; it just gave me this black screen of death. So again, Apple has done a great job of stopping this, however, not a good enough job.

Finding the virtual camera was the second piece. I spent a lot of time diving through Reddit threads, looking at what other people had done in this space. One solution I found actually worked: there was a GitHub repository with a very lightweight virtual camera, you had to be a little technical to deploy it, and it was not a production virtual camera. It worked; however, it was really difficult to use, and I can share that link as well. But the one I ended up landing on is a pretty popular one that was a little harder to get onto my Mac, a somewhat older piece of software: ManyCam (M-A-N-Y-C-A-M). As soon as I got that set up and loaded in any sample, whether a real video of myself or a deepfake video of myself, I was able to automatically configure the size of the video. Using this, I could choose how much black space to have on the sides, because sometimes when you call someone on the phone, you want that black space there; it adds to the realism. I spent a lot of time messing around with it, but ManyCam is basically how you do this. Then you can load anything you want into a virtual camera. ManyCam has a lot of options: you could choose a Chrome window if you wanted to, you could choose a Microsoft Teams call, you could choose just an image of somebody, and whatever you load becomes your

FaceTime video feed, in reality. And you can do the same thing with audio; they've got a microphone as well. We didn't mess around a ton with audio, but you're able to do so.

What I'll do here is actually show you this example. I was planning on going through a full live demo of this, and then I realized that disabling System Integrity Protection would mean stopping my screen share. So here's an example of this in practice instead, and I'll share my slides as well, along with how we approached this. This video, once we get through this ad here, is going to show me grabbing real audio and video samples, creating the deepfake you guys saw earlier, the lip-sync one of me, loading it into ManyCam, the virtual camera, and then FaceTiming my co-founder. Let me jump ahead here, since we've seen this part; there's no audio on it, this was just grabbing a video sample. This right here is me generating the text "Hey Justin, sorry for the bad service here," brand-new audio that I can then utilize. [Demo audio:] "Hey, Justin, sorry for the bad service here. I got the new iPhone 15, which is super cool, and had to switch over my number. My new number is plus one, 123-4567..." And then here's me loading that into ManyCam as just a static file; again, you've got a ton of different options you can choose from. Once you load that up, your virtual camera should be started, and you can go ahead and FaceTime whoever you'd like, and that will show up. [Demo audio:] "...sorry for the bad service here... super cool... and had to switch over my number. My new number is plus one, 123-456-7890."

The point of all of that being: this is how we were able to get this onto FaceTime. We had to mess around a lot with it. What I like to say is we didn't break anything; we just found this really niche way to get through and have Apple not stop us from doing so.

Before we jump into how you can protect yourself from these kinds of threats... let me make sure I present from this slide. Oh, sorry guys, I lost my slides here; they disappeared on me. You know what, all right, until this comes back up, we're going to run slideless for now. The next slides are on protecting yourself; I can run through those, and I'm happy to share the slides after the talk.

But before we jump into that, one thing that's really important about FaceTime: it's the same as with Teams. When someone joins a Teams call or a Zoom call, they can change their name and make it whatever they want. One of the things we've heard from customers and from people we've talked to about these FaceTime deepfakes is that with FaceTime, you can't really set that; it's almost like you're calling from an iCloud email address or a phone number. So how does one trust that? How does one verify it? How does one create some trust around it? One thing I think is really interesting, and I actually realized it in a scenario of my own: one time when I was switching phone numbers, or switching phone devices, all my FaceTime calls were coming through as my iCloud email. My iCloud email is probably 15 characters long, and when someone's FaceTiming you, you only see about the first eight; you don't get the whole thing, especially when it's long. So what we've actually been able to do is, if you know the target of the person you're trying to deepfake on FaceTime, you can just take the first eight characters of the email, make the rest of it complete gibberish or whatever you'd like, or even try to keep it as similar as possible; you don't need it to be perfect, and that can convince people to get on calls. I've literally FaceTimed people from my iCloud email where they only see the first eight characters, and they're like, "oh, I knew it was you from your email." So I think that's something important to note as well.

On the point of protecting yourself, there are a lot of things you can do, and a lot of things you can't do, unfortunately, as these are just getting to a

point where they're so realistic. But here are some takeaways for you guys today. The first thing, and I mentioned this a lot throughout the talk, is to educate yourself on what these look like, on how they actually appear. Again, this space moves so rapidly that a deepfake you saw a year ago has no meaning anymore; deepfakes look so different now, so much more realistic, and they're manifesting in different ways. If you ask someone who's only ever seen a lip-sync deepfake whether they've ever seen a live one, those are two entirely different things, and we're seeing both the process for creating them and the way they look change. So keep up to date with that, keep your employees up to date with it, and frankly, keep your family members up to date with it. Especially on the audio side, we've seen a lot of attacks on individuals, trying to convince someone that a family member is in danger using those audio deepfakes.

On that note, one thing that's also important: a lot of times, especially myself as a cyber guy, when a scammer calls me, I like to stay on the phone and try to get something out of them, figure out what they're doing. That's actually probably the worst thing you can do in today's landscape. It was pretty cool in the past; it's not anymore, because what scammers are trying to do now is sometimes not even to scam you on the call. They're just trying to keep you on the line for 30 seconds and get your audio sample; then they can create a deepfake of you and throw it right back at your wife, your mom, your son, your daughter, whoever it may be. So be diligent about where you share your audio and who you're talking to; don't waste time with scammers, and don't let them get your voice clone. Once they have it, they're going to use it.

A couple of other things as well. Look for discrepancies; these deepfakes aren't perfect yet. And the way you should go about it when you don't trust something, or when you think somebody may be off: always distrust it first, right out of the gate. Don't assume it's that person just because "why would it not be?" Start from distrust and try to find some way to validate them. One way we've seen people do this with no code and no technology is safe words: basically just having a safe word with somebody. Another example, one that actually thwarted a big deepfake attack earlier this year, was that the individual asked the deepfake what book he had recommended earlier in the week. Ask small details like that, personal things that wouldn't be anywhere on the internet, because the individuals deepfaking you, if they have enough will to try to beat you, are going to know everything about you that is on the internet. I'm sure most of you in the cyberspace have seen that in less sophisticated impersonation attempts, phishing emails, whatever it may be.

And another thing, it's really funny actually, but people always ask me: how do you spot a live video deepfake? What are the tells? Frankly, there aren't a ton. It used to be that turning to the side would work; they've actually done a really good job of fixing that. One thing I like to tell people is, if you're ever really unsure, doing a 360 turn will throw the mask off for, like, one to two seconds. Very funny to ask someone to do a 360 turn, but not a bad idea. And as much as we've been trying to break the multi-factor approach people are taking, because we think it's really frictionful, it's not a bad approach: if you have multi-factor in your organization, start getting something in there for verifying that these people should be there.

But yeah, I'm happy to answer any questions at all, and I'm happy to share the slide deck or any of the content with anyone. Thanks for listening.