
Hi everybody. My name is Aaliyah. I'm a senior in high school, and this is an extension of a research project that I did for one of my classes, and I'm really excited to share it with all of you today. Now, that slide is the feedback survey, so that'll come back later. But I want to start us back all the way in September of 2024, which is when I started looking for a topic that I wanted to research. Now, I knew I wanted to do something in the technology space, but the guidelines for my class were that it's a fully independent piece of research that you put out by yourself, and it has to fill a
gap in existing media. So, it can't be research on something that's already been done; it had to be fully new. That was a lot of work, trying to figure out what I wanted to do. But I am an avid reader of a magazine called The New Yorker, and I remember reading this article and talking to my parents about it over dinner. It's about an AI scam involving a deep fake voice. These are really dangerous because they basically use your voice, tidbits of you that they can find online, and then they turn you into a phone call, and it'll call somebody in your family going, you know,
your kid is now in prison, you have to pay us this ransom to get them out. So my parents and I were talking about this at dinner, and I wanted to research something that would be interesting and impactful, so I wanted to research this. Now, you've probably heard the term deep fake in the news as well. It's not just for monetary scams; it can also be used for other awful things like revenge pornography and all sorts of other scams. I will cover the effects of that in the latter half of my presentation. Now, deep fakes are defined by Oxford Languages as a video of a
person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously to spread false information. Now, when you search this term on Google Scholar, you get 37,200-plus studies. I personally identified about 50 of these in my review of the existing literature, so I could get a good idea of what I needed to research, or what technologies I needed to take a look at, so that my work was independent. Through that initial exploration, I discovered some of the major technologies that were being used and misused right now. While accessing some of these platforms, I realized that most of this was actually
pretty old. 2019 and 2015 may not look old on paper, but the technology has actually advanced by leaps and bounds since then, so I knew I was looking for something newer. And I stumbled upon this launch video from a company called OpenAI, which you probably all know. They launched their platform Sora, which is an artificial intelligence that makes videos fully from scratch. It doesn't take an existing video and alter somebody's face or body; it's a fully new type of thing. I really wanted to research this, and it was launching in December, so I was like, perfect, I'm going to have a whole new field to research. And I recalled seeing this
one video, and unfortunately the media playback isn't working, but it was this one video of people on a train in Japan, and it was on a bunch of news channels because people were just so shocked that this technology was so good. That made me want to research it. And this gave me a good amount of space to experiment with the type of study, because the platform was such that you would basically give it some tokens to make new artifacts from, and then it could generate anything you wanted based off a text prompt. Now, teenagers were a really accessible population for me, because I go to a high school, and the nature of my research
class was such that other people in the class could support you as long as they didn't know what you were trying to prove with your study, and obviously you could reach out to friends and other people to participate. So I wanted to talk to them about it, and my initial question was: do teenagers get better at detecting deep fake videos over time if they're given feedback on their performance? So my idea at first was to separate the group of teenagers I would be surveying into two, one experimental group and one control group, where I would give half of them feedback like, hey, you're doing really well at
looking at these, or you're doing really badly, and see if they would get better over time. Now, I didn't know it at the time, but I was kind of overcomplicating the purpose of my study, because there wasn't a lot of existing research, so I would whittle it down over time. But before I get into that, I want to talk about some of the work that existed before I did my research, which kind of determined what I needed to find. One of these studies is called Fooled Twice. This was published in 2021 by a group of researchers, and what they were testing is both the accuracy and the self-assessed confidence of adult participants in
identifying these deep fake videos. They were shown a short series of video clips; 50% of them were authentic, 50% of them were deep fakes, and then they were asked to determine if each was a deep fake and how confident they were in their assessment of that. They rated on a numeric scale, and that directly influenced my methodology for conducting my study. Now, there was another study that I looked at in depth, and this one was called Deepfakes and Disinformation. This was published in 2020, and it was similar to the previous study, but in this case they just had one deep fake video of President Obama
where they had him saying something that wasn't real, and then they had adult participants again test both their accuracy and their confidence. So, again, that contributed to my methodology. Now, come February, I started doing my research. At this point my method was to break my participants into two groups and then show them a series of fake and real videos. This is the interface of Sora circa, I think, about eight months ago. It has changed a little bit, but at that point there was a For You page, and it would show all these videos that other people had generated. And then there was also, if my pointer
will work... yeah. This one on this side is a stock video website that allowed me to download stock videos for free, and then I would use those in my surveys alongside the ones from Sora that I had generated. My method with this was to basically separate the participants into two groups, like I said before, and then track how good they were getting over five days. I wanted to see if they would get better, but unfortunately people were really busy and ended up dropping out of the survey. So I didn't have enough participants to make a statistically significant analysis, which was one of the guidelines for actually finishing and publishing my piece of work. So I
decided instead to pivot to a sort of simpler method, regroup, restrategize, and I ended up reorganizing my question entirely. I found this study, and I'll get back to that later, called Synthetic Media, which came out of the Harvard Kennedy School's Misinformation Review, and it defined what Sora material really was. It defined it as synthetic media, which is basically something that is fully generated by an artificial intelligence model and is not based on anything that previously exists. So this put my research in a new area of study, and then I decided that I could abandon the feedback over time and try to be simpler with that. So at this
point I wasn't trying to see how good they were getting over time, but just to see how good they were at identifying synthetic videos currently. That led me to my new question: how successful are teenagers at identifying synthetic videos? Now, I used the same survey format from that Fooled Twice study I was talking about earlier and from my previous attempt at research, and I made a new survey with eight videos that I pulled directly from the Pexels and Sora websites. I picked them in four categories: people, nature, animals, and objects. And then I would ask people if they believed it was an AI deep fake or an
original video, and how confident they were on a scale of 1 to 10 in their assessment of that video. And I never picked anything that was very clearly fake; there were no koalas running marathons. It was all very plausible stuff that anybody could believe. This one, for example, that's on your screen right now is actually an AI video, but a lot of participants misidentified that one specifically. Now, I was able to get 32 brand new participants, which allowed me to do a statistically significant analysis, so I was able to make a couple of conclusions from my research. Their average confidence rating was 5.91 out of 10 with a
standard deviation of something like 2.1. So mainly, people rated themselves as only moderately confident in their assessments; they didn't really know what was real and what was fake. But they were actually a lot more accurate when it came to detecting if it was AI or not. Somehow they were doing better than they had assessed themselves, which showed that they were kind of underestimating their confidence. 87.5%, I believe, underestimated their confidence. Now, with the data I had, I wanted to see if there was any correlation between self-assessed confidence and the actual accuracy. So I conducted a Pearson correlation coefficient analysis, which is like a
fancy math way of saying whether two things are linked together or not. Basically, were they actually underestimating, or did I just find some sort of random pattern? I found that my r equals 0.35 and my p equals something like 0.047, which basically showed that there was a moderate connection. So people who were doing better, on average, were not ranking themselves as confidently as they should have. And this was actually a direct contradiction of some of the adult studies that had been done, because in those studies people had actually overestimated their confidence at detecting these videos. That was pretty interesting to me, because that technology was also a
lot more primitive. There was one video I remember in particular where it was just a news channel broadcaster whose facial features they had switched slightly, but people were super confident, like, oh, this is a real thing, when it wasn't actually real. So that was really interesting for me. Their success rate was about 57.6%, and then the success rate of teenagers was about 66% in my study. And I haven't seen any work on my specific group of people since I conducted this research back in last March or April, so there's no reason to think that my study was just
like a fluke. So hopefully there is some sort of reasoning behind that, which led me to the question: why could teenagers be more accurate in their detection? I wanted to explore that relationship a little bit more. Now, while the technologies used in each study were different, traditional deep fakes versus synthetic media, this comparison still suggests that teenagers have a sort of heightened sensitivity to synthetic content. What I found was that, on average, teenagers were reported to be using social media platforms and portable devices like cell phones about 4.8 hours a day, versus adults, who would usually use them about 2.35 hours a day, which suggested that the
more people were exposed to this sort of material, the better they were getting at detecting it. Now, this environment of being online could train teenagers to understand what AI content is, which brings me to something that I find really interesting. About a year ago, Meta launched this AI platform type of thing. It would be an AI account with all sorts of AI images, and you could chat with this persona; it would just be a large language model type chat. But this was the type of media that was appearing on a lot of young people's feeds, from what I found. And so it seemed pretty reasonable that the more you're seeing
this sort of thing, the more sensitive you are to realizing, okay, now I know that that's fake, because I've seen it so many times before. This is just another study that found more exposure to something means better detection. Now, to get some more insight into how some of those participants achieved those higher scores, I sent a follow-up survey to the participants who got the highest score, which was a 13 out of 16. That was four people, and these were sort of the responses that they gave me as to why they thought things were deep fakes. A good example is
something like, "I come across a lot of AI deep fake videos via YouTube. I often watch these videos of people creating AI content, deep fake videos." So they've seen stuff like that. There were some people who were noticing glitchy hands, oddly fluid movement, colors that were too vibrant, and textures that were too smooth. All of these people were noticing patterns that they could only have identified through having been exposed to this before. So it's entirely possible that that repeated social media exposure was giving them a passive training that helped them recognize the videos in this study better than the other people. But more research would be needed to confirm that
relationship. So my question was: if casual, unintentional exposure to synthetic media might be helping some students detect this stuff, then how would we formally teach them? This is another part of that follow-up survey I sent out to the four people. Most said, "Oh, I do think I come across this a lot," but the final response was, "I don't think I'm regularly exposed to it, but I think that the more you observe, the more sensitive you're going to be to it." I thought that was really interesting. And then I wanted to see if the further body of research supported that. So, I found
a 2024 peer-reviewed study about the relationship of digital literacy, exposure to AI-generated deep fake videos, and the ability to identify deep fakes in Generation X, which was published in an Indonesian journal, but I was able to translate it over. I found that individuals with higher levels of self-taught digital literacy, so people who were more aware of what they were reading online, especially in an older generation like Generation X, were getting better at detecting things over time if they spent more time online and had more exposure to that sort of content. So it points to the larger implication that structured media education, and not just regulation,
may be necessary to help reduce the harmful effects of this technology in the future. Now, to sum it all up, throughout all of my ups and downs, I was able to get a baseline: how good are teenagers right now at spotting synthetic content? That's really important, especially when, personally, I've learned that people my age are more susceptible to some of those negative impacts like scams or revenge pornography, all those awful things that come as a result of this new synthetic media. So while I didn't prove any sort of improvement over time like I had originally intended to, I helped define a starting point. So now
researchers who want to study exposure, attention, or spread over time will have something to start off with. My opinion is that if we want to protect people in the age of AI, we have to look at how we teach people to protect themselves instead of just waiting for regulation to catch up with us, because historically it hasn't. So I hope I've helped answer a little bit of that question today. But before we close out, I want to explore what's been done since I submitted my research about seven months ago. In most academic spaces, seven months is barely enough to do anything or to research anything. It
took me, I think, almost nine months to do my paper, with all the loops and hurdles that I had to jump through for my class specifically. But the use of these generated videos has totally exploded with the expansion of the technology. Now creating the videos is free, versus when I first started, when there was sort of a payment barrier. So it's much more accessible and much more available as well. You'll see people posting AI-generated videos online. This was a movie that went through a lot of backlash for having an AI-generated trailer, even though it was a big-budget production. So it's
definitely a lot more frequent these days that we're seeing these sorts of things. So far, only one bill limiting the creation of synthetic content, or the spread of it online, has been passed, and this is called the Take It Down Act, which criminalizes the spread of synthetic content created by malicious actors, specifically in order to spread non-consensual images of people online. This is just the beginning, because the malicious uses of harmful content extend far beyond non-consensual images of people being posted online or in public forums. So I hope to see governments all around the world
take further action on this issue. Now, before I take questions today, I'd like to leave you guys with a question of my own. This is kind of a two-parter: how often do you see synthetic media online, and how do you respond to it? Are you able to tell immediately that, hey, this is not real, or do you maybe leave a comment or something like that? Just reflecting on that helps us understand our behavior as a society at large, which I think is really cool. So, thank you for your time today, and I'd love to hear any questions that you guys have. And then this is the feedback survey
that I was told to put up here. So,
have a question up here. >> Sorry, there's a mic coming your way. >> First of all, fantastic talk. >> Thank you. >> My 8-year-old daughter and I talk about this all the time, about whether she can tell what's real and what's not, so this is very inspiring as well. I think many parents with kids would definitely like their kids to do something like this. So great research and great work. >> Thank you. >> Questions? >> Great talk. Thank you so much. >> Great talk. I was curious, in those categories, did you find any differences based on if they're looking at people videos versus animal videos and stuff like that? >> Oh, yeah. This is a really,
really good question. I did some analysis based off how successful people were at identifying videos of people versus videos of animals. And I found that, because we interact with people more in real life, people were better at detecting, like, this is a fake video of a person or a fake still of a person. There was one specific one that was a lion, and it was focusing on a lion's mane sort of moving in the wind, and I think zero people detected that one, because it was so difficult to look from your own point of reference and go, that's not what a
lion really looks like; their mane is a little bit rougher than that. So it seemed like people were better at detecting videos that were of people, which is something they understood, versus things that they didn't understand, like outdoor life or wildlife or things like that. >> Do you think that an argument could be made for increasing screen time in order to improve people's detection? >> I think this is a question that a lot of the people I've researched would like to hear a yes to. But I think it's sort of a double-edged sword, where the more time you spend online, the less you're interacting
with people in real life and, just for the sake of this argument, understanding how people look in real life. But at the same time, it seems like exposure online is getting people to be better at this. I think now that it's spread to more corners of the internet, it doesn't take as much digging or time spent on online platforms to understand what this is like. And I hope that, now that we know the risks of this technology and what it looks like, people will have to go search for it less. Yeah. >> Hi. I just wanted to say that I totally relate to this entire bit
of research. On my feed now I get the AI training videos, like, top or bottom, which one's AI and which one's not? And I'm getting a little bit better at that, I'd say. I do know one thing: my grandparents are not. I caught them watching AI stuff on the TV, and they didn't know, which was kind of concerning to me. I guess a follow-up question to this is: how could I help structure some guidance for them to make sure that they're not getting misinformed or seeing ridiculous things that are synthetic? >> Yeah, it's really sad,
because in the preliminary research that I mentioned, it seemed like older people who were less technologically literate were definitely feeling the impacts of the scams and the malicious uses of this the most, because obviously they don't have as much experience. From what I found in that one Indonesian study, and just some ideas that I have: being fair with them, sending them the AI training videos you're getting, and also just telling them, hey, if you see something that feels a little too good to be true, then maybe it's not real, or, like people said, look for
patterns, like really smooth movement or glitchy hands or stuff like that. And the technology is getting more and more advanced every single day, so it's really hard to look out for it. Sometimes I'll see something and be like, wait, that is AI; even I'll be confused by it. But yeah, I feel like it's the little things we can do. So I hope that helped answer it just a little bit.
>> Hey, great talk. In your non-AI sources from Pexels, how confident were you that those were not AI generated? And two, as a maybe interesting correlation point, did any of your survey participants flag those sources as AI more so than they flagged the AI-generated ones as not AI? >> There were definitely a couple, because how I did this was I downloaded all the videos, and I would name them, but sometimes I would forget to name them something like Sora-1 or Pexels-2. So what would happen is I would have to go back to my search history and see, like, is this beautiful sunset AI or is it a real video? So
that definitely happened to me a couple of times, and yeah, people totally misidentified it. At the end of my research period, a couple of people I knew came up to me, because they never got their results back, and they were like, can you tell me if that one video was AI or not, because I genuinely can't tell and nobody I know can tell. And I was like, I can't tell you; I have to go back to the file names and look for it. So yeah, there was definitely a lot of that confusion for sure.
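The file-naming confusion described above can be avoided by recording each clip's ground-truth source in a manifest at download time, instead of reconstructing it later from search history. A minimal sketch; the file names, the `manifest.csv` path, and the helper functions `record_stimulus` and `lookup_source` are all hypothetical, not anything from the study.

```python
# Sketch: keep a CSV manifest mapping each stimulus file to its ground-truth
# source, written at download time so it is never lost.
import csv

def record_stimulus(manifest_path, filename, source):
    """Append one video's ground-truth source ('sora' or 'pexels') to the manifest."""
    with open(manifest_path, "a", newline="") as f:
        csv.writer(f).writerow([filename, source])

def lookup_source(manifest_path, filename):
    """Answer 'was this clip AI or real?' by reading the manifest back."""
    with open(manifest_path, newline="") as f:
        for name, source in csv.reader(f):
            if name == filename:
                return source
    return None  # file was never recorded

record_stimulus("manifest.csv", "sunset_01.mp4", "pexels")  # real stock footage
record_stimulus("manifest.csv", "lion_mane.mp4", "sora")    # AI-generated clip
print(lookup_source("manifest.csv", "lion_mane.mp4"))
```

The same manifest can later serve as the answer key when scoring participants' responses.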
>> I'm curious regarding the cues and the accuracy: did you do a comparison between the cues that were stated and that participant's accuracy? Just because they said they were looking for something doesn't mean that it's actually contributing to their accuracy. So did you do any additional look at that? >> Yeah. The survey questions were there for sure. When people talked about that, I did follow up with them via email and ask, were there any videos where you explicitly noticed that sort of thing? Because I tried to filter for that, and I understand that that created a little bit of a
bias, where I was only presenting the best kind of video. The ones where there was very obviously, like, a glitchy hand in the video, I wouldn't include those in the survey to begin with. And then when I followed up with them, I would just ask, is this just a general trend, or is this something that you're seeing in these videos specifically, so you're only looking for those patterns in these videos, if that makes sense. >> Great presentation, by the way. My question has to do with the age group that you worked with. You mentioned working with teenagers. In your
research, did you see any fundamental differences between, say, younger teenagers, 13 years old, versus older teenagers, 18 or 19 year olds? Because that's a very big difference in life, right? >> Yeah. >> So I'm just curious about that. >> Yeah, that's a really good question. It's something I addressed in my paper, which is a longer piece of work, and it looked at more of the things that could have been possible explanations. In the presentation, I chose to include what I thought was the most plausible reason for the way things turned out the way they
did. So I worked with a group of people where I believe the youngest was 14 and the oldest was, I think, 18 and a half or something like that. And it was a smaller group, so we'd definitely need a bigger group to tell if there was going to be a trend between younger people and older people. From what I found when I sent out a follow-up survey, it seemed that the people with more exposure, so more screen time or more time spent on social media platforms, were the ones who were better at detecting. And it wasn't necessarily about age itself; it was just about exposure.
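The comparison described in that answer, exposure rather than age tracking detection ability, can be sketched as a simple split of participants by daily screen time. All the per-participant numbers below are invented for illustration; the talk only reports aggregate figures (teens averaging about 4.8 hours a day online), so this is not the study's data.

```python
# Sketch: group participants by daily screen time and compare mean detection
# scores. Tuples are (age, screen_hours_per_day, detection_score_out_of_16);
# every value here is hypothetical.
participants = [
    (14, 5.5, 12), (15, 2.0,  9), (16, 6.0, 13), (17, 3.0, 10),
    (18, 5.0, 12), (14, 1.5,  8), (16, 4.5, 11), (17, 6.5, 13),
]

def mean_score(group):
    """Average detection score (out of 16) for a list of participant tuples."""
    return sum(score for _, _, score in group) / len(group)

# Split at 4.8 h/day, the average teen screen time reported in the talk.
high_exposure = [p for p in participants if p[1] >= 4.8]
low_exposure  = [p for p in participants if p[1] < 4.8]

print(f"high-exposure mean: {mean_score(high_exposure):.1f}/16")
print(f"low-exposure mean:  {mean_score(low_exposure):.1f}/16")
```

With real data, the same split could be repeated on age instead of screen time to check which variable actually separates the scores.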
>> Hello, can you hear me now? I read the Dunning-Kruger paper from '99. One of the things they found when they surveyed their participants was that, regardless of where the participants' scores fell, there was pretty much a flat line of self-assessment of how each person thought they did: the people that underperformed thought they did better than average, and the people that did really well thought they did worse than they actually scored. Did you see a flat self-assessment line when you look at the group you said did well, at about 66%?
What approximately was the self-assessment of the folks who did well? Did they undervalue their own performance consistently, or did you see a different kind of trend occur? >> So the four people that did the best, which was 13 out of 16, the top score across the surveys, those people actually tended to underestimate. I think I might have accidentally skipped over this slide, but this slide kind of shows that in graph form. There were four people that overestimated out of the entire participant group of 32, and the other 28 underestimated. The average
amount that people underestimated their accuracy by, on the confidence scale, I think was about two points. So it was two points out of 10. People were really prone to underestimate their scores. The way I explored what the reasoning for that could be in my paper was mainly the fact that teenagers are a little bit self-conscious, and, because it was really hard to visually tell, they thought they did worse than they did. From what I learned from the follow-up surveys, there were a few people who were sort of guessing on some. And so, if they felt like they guessed on one or two, then
they thought that the whole rest of their responses, the ones they were confident in, were kind of bad responses. So they had a tendency to underestimate a lot. Okay, I think that's it and we're out of time, so that's really perfect. Thank you guys so much.
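Stepping back to the statistics in the talk: the Pearson correlation between self-assessed confidence and detection accuracy (reported as r = 0.35, p = 0.047) can be computed as below. This is a minimal sketch of the analysis only; the per-participant confidence and accuracy numbers are invented for illustration, since the talk reports just the aggregate results.

```python
# Sketch of the Pearson correlation analysis described in the talk:
# does self-assessed confidence move together with actual accuracy?
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length number lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: confidence on a 1-10 scale, accuracy as fraction correct.
confidence = [6, 4, 7, 5, 8, 3, 6, 5, 7, 4]
accuracy   = [0.69, 0.56, 0.75, 0.62, 0.81, 0.50, 0.62, 0.69, 0.75, 0.56]

print(f"r = {pearson_r(confidence, accuracy):.2f}")  # positive r: they move together
```

The p-value reported in the study would additionally require a significance test on r (e.g. a t-test with n - 2 degrees of freedom, or a library routine such as SciPy's `pearsonr`), which is omitted here for brevity.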