
Hi everybody, thanks so much for joining me today. Very happy to be here; it's my first time in Seattle, so I'm very excited. As was mentioned, I'm not a typical CTI person. I come from a national security background, and I've been dealing with IOs, which some call influence operations and some information operations. There are so many terms at this point, and apparently that was not enough, because the European Union recently came up with another one: FIMI. Have any of you heard of FIMI? I see people nodding. It stands for foreign information manipulation and interference. So yes, we needed yet another term for that. That's the background I'm coming from: I've been dealing with the typical hostile-state playbook of spreading conspiracy theories, running hundreds of bots, and trying to meddle in elections, and as a Ukrainian I've seen a lot of these things happening firsthand.

What has changed in this period is that AI arrived. For IOs (I'll use that term for this session), what changed is that access has expanded to very different types of groups. Historically it was Russia, China, the big hostile states: expensive, long-term, and mostly run by intelligence services. In the social media space you would historically see the typical conspiracy content, synthetic identities, and bots, but mostly from governmental actors. With generative AI, the economics changed. Buying profiles has never been the problem; a handful of Facebook profiles has always been cheap. What was expensive was running, maintaining, and personalizing them. Imagine you wanted to run an influence campaign targeting a few countries: you would have to be proficient in the languages of those countries, or you would have to create a lot of content. That's why typical cyber threat actors would not bother; it would be too much work for them. That is exactly where AI came in handy.

To show how that works, I'll share a case we internally called Sora Ganganger. The name plays on Doppelganger. Have any of you heard about the Doppelganger Russian IO? Yes, I see the group of IO people there. I really recommend checking it out: it was one of the biggest exposed IOs attributed to Russia, cloning the websites of top media worldwide, CNN, BBC and the like, a huge operation they ran for months. So we called this one Sora Ganganger. Why Sora? Because of Sora from OpenAI. There is going to be a lot of AI in this talk; I apologize, the case just happens to have a lot of AI in it.

What happened is that our product picked up a bunch of Meta ads. They looked very harmless: all they were doing was promoting Sora as a model. At that moment Sora had not yet been released, so these were promotional posts through Meta ads saying, in effect, "Hey, check out Sora, put in your prompt and get your video." Completely harmless. There were 53 unique AI-generated ads launched on Meta, including a video generated using AI to impersonate AI (again, apologies, a lot of AI). The campaign targeted multiple countries, primarily the US, Ukraine, Vietnam, Germany, Taiwan, and China, and the ad was redone for every country: they would change the video specifically to trigger people in that country to click on it. Historically, that is a lot of effort: somebody has to make those videos, somebody has to think about the script, somebody has to think about the images. Now it's one prompt. Some of them were quite creative. The ones targeting Ukraine, for example, showed the Kremlin burning, just to provoke reactions and motivate people to click.

Here is how you would see it as a user; this is an example of one of the posts. This post explicitly mentions Sora and explicitly mentions OpenAI. A lot of them didn't: they would have the same video and the same call to action, but they would not mention the company. We see this kind of trick more and more. When threat actors want to go under the radar on social media, they either flood you with the wrong keywords, because they know most people monitor social media platforms with keywords, or they simply avoid the keywords you are using.

So this is what you see as a user, and then you click on the ad and land on one of a number of domains; here is an example of three of the domains in this campaign. You get a perfect replica of the ChatGPT interface. Everything looks great: you put in your prompt, a video is automatically downloaded as the result, and that is the malware installed directly on your device. So this is a case of threat actors going to social media to directly spread and install malware across countries and across languages. In this case we caught it very early, so it stayed on Meta platforms, but usually it would spread across platforms as well.

An important point when we talk about social media as an attack surface: at the moment the campaign was up and running, none of these domains were flagged as malicious by VirusTotal or any other provider. That is a very important part. If you look at this case and ask why it is hard to catch, what makes it special: first, as I mentioned, there is often no keyword match. That is something we see more and more; GenAI comes in very handy here because it can rewrite posts and contextualize them differently, so you can have multiple versions of the same post. We also used to catch a lot of IOs with a very old but very good technique, plagiarism detection, because the posts were mostly copy-paste. Not anymore: they simply rewrite them, same structure, same message, just rewritten.

The second part is that when people think about IOs, they often think about whether something is true or fake, a bit in the context of "fake news", I think especially in the US. That is a very wrong approach, because this is not a content problem. It is a security problem and a coordination problem. If we took a fact-checking approach to this post, there is nothing to fact-check: the post is technically correct, and if you tried to verify the text you would get nothing out of it. It is not fact-checkable, and the domains are clean.

However, let me go back to the pages themselves. As I showed you, there were 31 Facebook pages, and this is where it gets interesting, because this is about infrastructure that is created, nurtured, and often reused. Part of the infrastructure was newly created: those pages were brand new, with unidentifiable names, just random words, nothing that would catch your attention. But another part of the profiles was reused, and that is a very important part to look at. This could be one of the strongest signals: at
a lot of platforms, you can actually see the history of a page. On Meta platforms, Meta was forced to add this to every page as part of its transparency measures, so you can see when a page was changed and what it was changed to. That is one of the strongest signals if you are looking for reuse of infrastructure.

I also mentioned multiple geographies. That is something we see more and more often because of the unlocking of languages. We used to catch a lot of things through Google Translate mistakes: historically threat actors would just use Google Translate, and with low-resource languages there would be a number of mistakes. Literally, one of our checks was simply spotting those mistakes. We barely see that anymore. For low-resource languages, large language models are of course still not as good, but they are significantly better than Google Translate. And when it comes to country localization, another thing we see happening more and more on social media is geo-cloaking. That is something we had not seen much before; practically the only place we had seen it was fraud, where it is a persistent tactic, but not really elsewhere.

So who is behind this type of thing? This is, I think, the biggest change, as I said at the beginning: it is no longer only big hostile states or very big groups. In this case, the infrastructure led to a corporation in Vietnam, a company that had already been engaged in similar activities. Part of the domains and pages were registered to a number of companies that led to this one. Here we see a clear case of them reusing some of their infrastructure, because, as I mentioned, part of the Facebook pages had been used before. This is where the line between hostile states, cyber criminals, and a lot of fraud converges, and where we see more and more infrastructure being reused almost randomly: the same pages trying to interfere in elections today, pushing a crypto scam tomorrow, and doing credential harvesting later. The reason is a certain industrialization of this space: there are more and more disinfo-for-hire companies that will do these kinds of jobs. In this case, for example, the Mudbau Corporation has all the typical indicators of being such an entity. They are basically a contractor: whoever gives them a job, they repurpose their pages for it.

In this case, malware spread through the impersonation of OpenAI, but it is not always malware at the end. What is important is that the TTPs at the beginning all look the same until the last click, and that brings a lot of operational complexity, because here are two more cases that look exactly the same at the beginning.

The first happened a few weeks ago: a Fortune 500 bank was attacked with over 500 deepfakes of its chairman introducing a new banking feature. The deepfakes were of very good quality; you could not visually tell they were fake. The funny thing is that the attackers put a $20 budget on the Meta ads for these 500 deepfakes. They just wanted to test how quickly they could be caught, both when the text mentioned the company's name and when there was only the visual. In this case, the ads directed to phishing infrastructure. Another case, with a crypto scam, looked exactly the same: celebrities promoting a great new tool they supposedly use, and you end up in the crypto scam. If you take the content out and just focus on the steps, all three cases look identical. And that raises the question of who is responsible for handling it: when do we understand what it is, when do we identify it, and who is responsible for identifying it?

A few important things to look at across these three cases, going back to the point that it is not about the content, it is about the patterns of behavior. When it comes to IOs on social media, a lot of people think it is just about what people write, and that it is very complicated to distinguish organic from inorganic. That complexity exists, but when you look at behavioral indicators, these are usually very clear cases, because the way people behave on social media has its own standards and rules, and as soon as a threat actor tries to weaponize social media infrastructure, there are very clear tells in how they do it. So the primary things are behavior and coordination. And when I say coordination, importantly, it is not only inauthentic coordination. When we hear the word coordination we mostly think about bots, but bots are usually the easy case, because bots behave in the most inauthentic way. We recently caught a huge bot infrastructure in the Balkans reposting, 89 times per second, an article from RT, the Kremlin media outlet sanctioned by the US and the European Union. That is actually a very easy thing to catch, because there is no world in which a real person reposts something 89 times per second. So bots are the much easier case. The more complicated case is amplification that is not inauthentic: real profiles and real accounts of people who are paid to promote things. We have seen a lot of that in Eastern Europe, where people are recruited on Telegram simply to go and write things.
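As an aside, the rate-based bot tell described above (no human reposts 89 times per second) can be sketched as a minimal sliding-window check. This is an illustrative sketch, not a production detector; the timestamps, the threshold, and the function name are hypothetical:

```python
from collections import deque

# Hypothetical threshold: more than this many actions inside any
# one-second window is not plausible for a human-operated account.
MAX_ACTIONS_PER_SECOND = 5

def looks_automated(timestamps, max_per_second=MAX_ACTIONS_PER_SECOND):
    """Flag an account whose action timestamps (in seconds) ever
    exceed max_per_second within a sliding one-second window."""
    window = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Drop actions more than one second older than the current one.
        while window and t - window[0] > 1.0:
            window.popleft()
        if len(window) > max_per_second:
            return True
    return False

# A human-paced account: one post every three minutes.
human = [i * 180.0 for i in range(50)]
# A bot reposting roughly 89 times per second, as in the Balkans case.
bot = [i / 89.0 for i in range(500)]

print(looks_automated(human))  # False
print(looks_automated(bot))    # True
```

Real detection pipelines combine many behavioral features, but even a crude rate check like this separates the extreme cases cleanly.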
The pay is usually very insignificant; people are sometimes offered around $5 per day, and very often it is stay-at-home parents who do it. It is easy work and easy money for them, so you see a lot of this kind of recruitment happening there.

The three cases I showed you are primarily focused on Meta platforms and Meta ads. Interestingly enough, just this morning I read that Meta sued one of the Vietnam-based companies doing this, so I will be very curious to talk with them and understand whether this campaign, for example, was also part of the big operation that company is being sued over. It will be very interesting to see where that leads. But things are happening on other platforms too, and what we also see more and more are attempts to send audiences from one platform to another. A lot of this happens on TikTok, and I think TikTok is the trickiest platform of all. All the other platforms are account-based: you first need to establish an account, post, and connect with others. That is partly why threat actors use Meta ads so heavily: yes, you still create an account, but with ads you can target an audience you are not yet connected with, so you do not have to build up the same credibility. They still have to build the infrastructure, though, so you can still catch them at the beginning, when they are just testing things out. With TikTok, the trickiest part is that you can create an account, post one video right away, and it can go viral. From the perspective of identifying and mitigating, you simply have a much smaller window to detect it and act on it. The other tricky part of TikTok is, of course, that it is video content. A lot of companies do not monitor video content on social media; they usually monitor the caption, at best the subtitles on the platform, but very rarely what happens in the video itself. We have not yet seen much of this in the corporate space, but we have in the government and defense space. For example, we discovered an interesting case of AI-generated soldiers reporting from the front line, calling on their fellow soldiers to abandon their positions and complaining about how bad things are there. Those videos had no title, no hashtags, nothing, so discovering them through a typical keyword search on the platform is almost impossible. The reason we discovered them is pattern recognition: we were looking for patterns, and this was one, and we also run a model that does descriptive analysis, describing what is happening in the video. That has been very helpful, but it of course comes at a different cost.

I have touched on some of the reasons this is hard to operationalize, but to summarize a few. First, volume. On social media the volume is already very high, even before GenAI. One of the companies we work with had a typical keyword search set up for social media and was getting 200,000 alerts per day, because the name of their company is also a common word in Spanish. Filtering that out was very hard, and there is no way you can process 200,000 alerts per day.
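As an illustration of that filtering problem, here is a minimal sketch of a context-aware filter that keeps only alerts where an ambiguous brand keyword co-occurs with brand-related terms. The brand name ("vale", also an everyday Spanish word), the context terms, and the sample alerts are all hypothetical:

```python
# Hypothetical ambiguous brand name: "vale" is also a common Spanish word.
BRAND = "vale"
# Terms suggesting a mention is actually about the company, not small talk.
CONTEXT_TERMS = {"bank", "banking", "login", "account", "app", "phishing"}

def is_relevant(alert_text: str) -> bool:
    """Keep an alert only if the brand keyword appears alongside at
    least one brand-related context term (naive whitespace tokenization;
    a real pipeline would normalize punctuation and use language ID)."""
    words = set(alert_text.lower().split())
    return BRAND in words and bool(words & CONTEXT_TERMS)

alerts = [
    "vale, nos vemos manana",               # Spanish small talk: drop
    "new vale banking app login page here",  # brand context: keep
    "todo vale en el amor",                  # Spanish idiom: drop
]
kept = [a for a in alerts if is_relevant(a)]
print(kept)  # only the alert with brand context survives
```

Even a crude co-occurrence filter like this can collapse six-figure alert volumes into something a team can triage; the hard part in practice is maintaining the context-term list.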
And this volume is only increasing, especially with AI slop. On top of that, sophisticated threat actors who want to run a sophisticated IO will prep the space: they will flood it if needed to make sure you do not see the things they want to push.

Second, data availability. I like to say that your detection is only as good as your data lake, or your vendor's data lake, because nobody can crawl the whole internet. So when you set this up, or when you choose a partner to work with, it is very important to understand their methodology for discovering and mapping new sources, especially for platforms like Telegram, which has a very complex structure. Telegram is non-algorithmic social media, so discovering new Telegram channels requires a very different approach from discovering new Facebook groups; and as I mentioned, TikTok is different from yet another perspective.

Third, coverage gaps. There are a lot of them. When you ask CTI teams what they are monitoring, the answer is usually "we are monitoring the dark web, and now we are monitoring Telegram." But as you see in these cases, and I have many more that show it, TikTok, YouTube, and Facebook are becoming the new places where the preparation starts.

I think the key question in all of this is to ask yourself: if this type of case happened in your organization, and you can imagine it unfolding, who is responsible for detecting it? Within whose mandate does it fall? How quickly would you learn about it? And even once you detect it, what is the response plan? These are hybrid threats. Sometimes they require Legal to do the takedowns; a lot of organizations I have worked with delegate takedowns to Legal, working in cooperation with the SOC, but with Legal doing the takedowns. And sometimes taking down is not an option on social media; sadly, it is not always an option. Realistically, when it comes to takedowns you can count on only two things: clear cases of impersonation, which are feasible to take down, and only within an operational time range, meaning roughly 24 to 48 hours. When it comes to bot networks, the majority of platforms have completely different standards, and it can take a very long time to prove something is a bot network before they take it down. For the cases where a takedown is not possible, sometimes you even need comms support. We had a case of a hostile state trying to organize a run on a bank, posting in Telegram channels claiming the bank was doing massive layoffs and not doing well, because they wanted to paint a picture of an unstable security posture and make the company a target. In that case the company involved its comms team, which published the quarterly report showing the bank was actually doing very well and reaching its profit targets. So ask yourself: even if we detect it, do we know who detects it? If the response needs to be hybrid, can I connect to those people in my organization? Do I have a direct line to them? How quickly can we organize ourselves?

Those are the cases we already see happening now. As for the cases that could be happening, things we are not seeing much yet but that I would expect in the next six to twelve months: the first is agentic IO campaigns.
There is one recorded case now, though it is very hard to verify: one person claims they were the target of an agentic IO, as part of the OpenClaw story of agents going rogue. Realistically, this is not something we have seen yet, but it is something we should expect. Next, extreme microtargeting. The campaigns I showed you target at the level of language and country; we expect microtargeting at the level of preferences and personal data, going much more granular, especially because the majority of platforms let you target super granularly. Then AI-powered engagement, especially in the comments. We have seen the first instances, but we are not yet talking about mass adoption: you read your feed, go into the comments, and it is no longer dumb bots writing there but LLM-powered accounts that can reason, join the discussion, and create the feeling that you are talking to another human being. Then deepfake clusters, like the case with the 500 deepfakes that, as I mentioned, happened just a few weeks ago and is super new. And then influence-as-a-service: I expect more companies popping up that offer exactly this type of service.

To conclude: social media has always been an attack surface; the question is for whom, when, and how quickly. What we have seen in the last year is social media becoming an attack surface for cybersecurity posture, for the weaponization of narratives, for synthetic identities, and for preparing the ground before the breach happens. With that I will conclude, and I will be happy to answer questions. I have added the QR code for the session review; I was told that is where you have to review the sessions. Thank you very much, and I would be happy to take questions.
About two years ago, I think, Romania had a case of AI manipulation. Do you anticipate or forecast more of that?

>> I was told to quickly repeat the question before answering so everybody hears it. The question is about the Romanian elections that were cancelled due to IO and AI-powered IO interference, and whether I expect more cases like that. Absolutely. Romania is actually a very interesting case, because they had a huge visibility gap. The reality is that the candidate supported by the Kremlin managed to build a whole campaign and leverage huge support there, and the reason nobody saw it coming is that they were not monitoring TikTok. It is as simple as that: nobody was looking at TikTok, so they had no understanding of what was happening there. I think that is something we will definitely see more of. When it comes to AI-powered operations, hostile states are tricky. They are experimenting, but my hypothesis is that right now they lack the incentive to switch heavily to AI. They still run a lot of things by hand and quite cheaply, and they often lack both the incentive and the talent among the people running these operations; they do not have a vision for how to innovate their IOs. So thank you very much. I think I will conclude here, because I am going slightly over time, but I am staying around and happy to answer any questions. Thank you very much.