
Securing AI Infrastructure: Lessons from National Cybersecurity Strategies and Attacks Against Other Critical Sectors

BSides Las Vegas 2025 · 49:19 · 17 views · Published 2025-12
About this talk
Drawing on analysis of 200+ cyber incidents across critical sectors and national cybersecurity strategies from the US, Australia, Singapore, and the UK, this talk examines AI labs as high-value espionage targets and identifies policy and technical recommendations for protecting AI infrastructure. The speakers argue that securing AI requires cross-disciplinary approaches—combining cyber lawyers, policy makers, and technical expertise—and that incentive alignment and clear accountability frameworks are essential to making AI companies prioritize security alongside rapid development.
Original YouTube description
Identifier: R83DQJ

Description:
- "Securing AI Infrastructure: Lessons from National Cybersecurity Strategies and Attacks Against Other Critical Sectors"
- Examines AI labs as high-value targets for espionage.
- Reviews 200+ past cyber incidents in critical sectors.
- Analyzes national cybersecurity strategies from the US, Australia, Singapore, UK.
- Provides recommendations for protecting AI infrastructure.

Location & Metadata:
- Location: Ground Truth, Siena
- Date/Time: Monday, 18:00–18:45
- Speakers: Fred Heiding, Andrew Kao
Transcript [en]

Please welcome Fred. >> Thank you so much for that introduction. I'm just going to start my timer here, because I forgot to do that; my head was a little mushy. There we go. Thank you to those of you who stuck around from the last presentation, that's amazing, and some of you are new, which is also amazing. It's 6 p.m. and this is the last talk, so thank you for being here; we still have a lot of people, and that's always great. My name is Fred Heiding. Most of you already know that, but the quick blurb is that I'm a postdoc at Harvard working at the intersection of technology and policy for cybersecurity, and lately AI and cybersecurity. With me I have Andrew Kao, and I'll say more about him on the next slide. I'm going to jump in and tell you what this talk is all about, because the team is actually the most important thing here, as we'll see.

So, I'm Fred, and I do a lot of cybersecurity work. Andrew is an economics PhD who has been doing a really interesting cybersecurity study together with Carfik at Berkeley, who can't join us today but is also part of this project. The background of this story is that we took a lot of my work and a lot of Andrew's work and merged them together. What that means is better described on this slide; it's basically three parts. What Andrew and Carfik have been doing for the past years is a massive study analyzing more than 200 cyber attacks, espionage attacks, and insider threats across industries that are not AI, such as semiconductor companies, biotech companies, and so forth. Then we ask: what can we learn from this? AI is definitely becoming a critical infrastructure sector. In the US it's included under the software critical infrastructure sector, but it's not a dedicated CI sector. What does that mean? Should we increase protections? We try to draw a lot of lessons from this.

I myself have spent the past years doing a lot of policy work with the people who created the US National Cybersecurity Strategy and a few other cybersecurity strategies around the world. I should also mention that I'm now writing a textbook together with Jason Healey at Columbia University, Harry Krejsa, one of the creators of the US National Cybersecurity Strategy, and Alex O'Neal, previously of the State Department, about what these national cybersecurity strategies entail: do they really encapsulate the rapid AI development, are we prepared to secure our AI model weights, and so on.

Based on all this, we're pretty confident we've been able to narrow down a few policy recommendations that we really think should be done differently, and hopefully we can convince you they're a good idea as well. There will be plenty of time for questions afterwards, so please don't hesitate to write down and later ask any questions you may have.

Just a few quick words to set the pretext. AI development is picking up a lot of speed; this is not news to most people, and I mentioned it in my last talk as well. Most of the AI experts at the big companies already talk about AI agents writing 90% or 100% of all code within the coming couple of years. That's a big deal. And as I'll discuss on the next slide, even if the hype is overrated and AGI is not coming, it doesn't matter: if we stopped development right now, we would still have a massive paradigm shift. It's definitely an industrial revolution. Some people argue it's already a bigger deal than the internet, and that might actually be the case. We don't need to settle that argument right now; it's already a big deal, and all the evidence points to it getting much, much better in the future.

So it's important to take this seriously, and in a lot of cases we do, as we'll see. There are more AI policy makers than I can count in DC these days, an extreme number of them, which is good. There are still quite few cyber policy makers, which I'll argue is bad. But the number of people with "AI" in front of their title is definitely increasing, so people realize this is moving fast, and that's good. AI is increasingly integrated into critical infrastructure, often in good ways, sometimes in not-so-good ways. In terms of technical defense, we can use AI for a lot of good things, like finding vulnerabilities in code, and that's great. But it's a new technology, and there are a lot of inherent vulnerabilities, as we'll see. The only point of this slide is that this is moving fast; it will change more than most people believe. The people at the top of the game, who know the most about this, are very optimistic and hawkish that it will go fast, and I tend to agree, although I'm a little more skeptical. And it's not just me saying this; a lot of smart people are saying the same thing in a lot of different articles. We also have an administration that will probably reduce regulation, which will further increase the pace of development.

I said it doesn't matter if they're wrong, and I think that's super important, because a lot of people, myself included, are skeptical: sure, these AI leaders hype their own companies, they say it's going fast so they can sell more. Maybe that's true, but it actually doesn't matter, because we've already seen enough development to know this is a complete game changer that will forever change the way the world works. That means we should also make sure it's secure. And is it secure? Not really.

Not really at all. There are a lot of good reports about this, and we did our own investigation; it's a discussion in itself, but there are a lot of vulnerabilities. Why is that? I had a really interesting discussion a few weeks ago in DC with some senior politicians about the difference between the Department of Defense culture and the San Francisco culture, and it's massive. Most of this AI development is happening within a few blocks in San Francisco, which is pretty crazy. They have this move-fast-break-things culture, very different from the traditional high-security government culture of being six floors below ground, needing a card to access your computer, with a computer and a phone that never leave the facility. Those heavy security requirements don't really exist in SF. People mingle at parties outside work; you have Anthropic employees hanging out with OpenAI employees. They talk, maybe they're drunk, maybe they share things, maybe not. It's an intermingling of competence that's useful; that's the beauty of San Francisco. It's an amazing place of decentralized innovation, and that's really valuable. But it's not built for security. It's built to build things and break things. SF actually started with a lot of defense companies, but that's not the case anymore. Right now it has a move-fast culture, and all the AI development is happening inside it.

We also have a very high proportion of foreign staff, and there are no real security clearance requirements. There are a very few exceptions, but broadly the AI companies can decide for themselves that "we deem these people secure," and that's fair enough. Should we require security clearances? That's something we'll talk a little about today. It's super tedious to get a highly critical job in the government; if you're a good developer, you can get one at an AI company. And again, that's good. We want AI development to move fast. Most people in the US, I'd guess all people in the US, want us to win the AI race. But this speed-versus-security trade-off is a very big culture shift compared with other critical sectors.

I think the biggest point here is that security absolutely cannot be retrofitted. You've probably heard politicians talking about trillion-dollar investments. That's insane; the numbers are staggering. And even if it's not trillions, it's hundreds and hundreds of billions of dollars. You can't just launch off in that direction and change things later. You can try, but we really should get security right from the start. That's why it's good to talk and think about these things. There's a lot that's inherently insecure about the internet, and those flaws are still around, quite a long time after we started. We really want to be a little smarter as we proceed with AI. I'll stop adding these articles, but it's good to show I'm not the only one saying these things; a lot of people are, and these are not small claims. "Every AI data center is vulnerable." Maybe that's an exaggeration, maybe not, but a lot of these things definitely happen, and not just to US companies; the Chinese companies too.

A quick slide about AI infrastructure, since I'm going to talk a lot about it. What is it? It's pretty complex, as a lot of infrastructure is, and it's global, with many different technologies involved. Training a model is not something you do in one location; you often use training data from all over the world. You've probably read the gruesome articles about the Kremlin feeding propaganda articles into ChatGPT's training data, and that happens because this is hard: these are not isolated environments. You have the data and the model, you have deployment infrastructure, and for the underlying hardware you have the GPUs and the various training data centers, all over the world. Some of it is produced in the US, a lot of it is not, and these pipelines are complex; I think that's the main point. And especially today, the geopolitical relationships are complex. Last year I was in Taiwan and gave some presentations for their Ministry of Foreign Affairs, and they're in this tight spot between two major superpowers. So there's a lot of complexity, and we cannot claim that we own it. The US released a new AI report a few days ago, I think July 23rd, the AI Action Plan, which is pretty good and which I'll talk a bit about, where they say we cannot ensure security if any part of this is outside the US. That's a pretty good goal, but can we reach it? I don't know. It's very tough right now.

So, some lessons from previous attacks, and then we'll dive into some potential strategy improvements. This is a remarkable study, and I very much encourage everyone who wants to know more about this topic to not just listen to my talk but to read the paper by Andrew and his colleagues; it's fantastic. What they did was take a lot of data from the mid-'90s until last year, so more than 20 years if my math is correct on this tired evening, covering attacks such as insider threats and cyber intrusions across different sectors: what happened in these attacks, and what happened to the companies afterwards. It's very interesting, because I really enjoy the field of cybersecurity economics, but few people work in it; it's very under-researched, and you don't often find studies like this. And the amount of lessons you can generalize and translate from there to AI is, I think, spot on; it's often a one-to-one translation, as we'll see.

Here are some of the high-level lessons; I'll show more graphs in the coming slides. For example, most victims are industry leaders in tech-heavy dual-use sectors. That's exactly what AI companies are; those are the prime targets. In the general news you tend to forget about these attacks because it's been a long time, but there are a lot of them, and they've been very severe. And it's not just the targeted company itself that suffers. If I'm a big biotech company, I'm hit by an insider threat, and I lose some very precious IP, I'm not the only one hurt: US exports in the compromised sector drop by 50%. That's an incredible number. These are big-picture statistics, so take them with a grain of salt, but it makes sense. Say OpenAI's model weights are stolen. Massive pain for OpenAI, but I think it's pretty likely Anthropic would suffer too, and Google would suffer too. Say Russia steals them and just uploads everything for free; the whole sector would probably suffer. This also speaks to attribution and accountability: who is responsible for what? The current liability framework we have for AI companies doesn't really make sense to me, because a lot of these harms are bigger than one company.

We see some gruesome numbers here as well. Again, this is sectors other than AI, but we'll generalize to AI later. Revenue drops by 20 to 40%, and the crazy thing is that it doesn't bounce back: even a decade later, the revenue drop is still there. That's a very important point, and it's one way we may be able to incentivize AI companies to increase their security, because even people working at them right now know their security is not the best, yet they're becoming really attractive targets as this moves on. A lot of people believe there will be one defining AGI player, and you want to get there first; you don't want to be hit by big cyber attacks, because these attacks last a long time, and global innovation suffers.

One especially interesting bullet: when Asian people are blamed for an attack, the Asian research workforce is reduced significantly more than the workforce of other ethnicities. This is incredibly interesting, because there's a massive over-representation of Asian workers in the AI labs right now, and we don't have a framework or incident response plan for this, unlike in other scenarios. We'll see graphs later with very clear statistics that Asian workers are fired over-proportionally, because there's a scare, similar to the Red Scare: "maybe there's a Chinese insider threat, fire all the Chinese scientists." We can't do that at OpenAI; there are too many, it wouldn't work. So what do we do? Will there be a panic, an outbreak? I think people should red-team these things and think about them; it's good to plant these questions in our heads.

Lastly, we see a shift from trade secrets to patents, which makes sense too: if companies feel these things will be stolen anyway, they might as well patent them. Patents are tricky in the AI world, though, because AI is an inherently opaque technology; it's not transparent, and there are a lot of parts we still don't understand. And people are already worried about patent infringement: a US company patents something, and the word on the street is that Chinese companies steal it anyway, even though it's patented, because now it's public. So there's a lot of stuff we don't want to patent, because we don't want the information to become public and we don't trust that the patent will protect us anyway. In these other industries, companies shifted from trade secrets to patents; I'm not sure that makes sense in the context of AI.

Some graphs to show this in a more telling way. Country-level export effects of espionage: exports go down for the entire sector, and I think that's important to remember: it's not just you being hurt. OpenAI is obviously a big player now; an attack on them would hurt all the other companies, and also the US as a country. Some graphs for the things we talked about: revenue goes down a lot, maybe bounces back a little, but not really, even after a decade, as we saw. This is something the AI companies right now don't think they can afford to think about, because they need to move so fast in their development, and they're under the SF move-fast-break-things paradigm. Move fast and break things is good when you're a startup; when you're perhaps the most important national security asset, it's not.

The revenue drop alone is bad, but there are other bad consequences as well. This is perhaps the most interesting graph and slide of them all: the black line is non-Asian employees and the other is Asian employees, and there's a significantly stronger decrease for Asian scientists, who are simply fired post-breach. I think this is so telling, and someone should think about what we do if this happens in the US, because we don't have a good response for it; and again, we don't have the same security clearance demands in the AI labs as in other sectors, which is a problem. On trade secrets versus patents, we see clearly that patents go up (the rising bars are the patents). Does this make sense for AI? I don't really know; I'm not sure how well it will map onto AI companies versus other companies. And if these behaviors don't make sense, the result may be a panic that we really don't want in the case of an incident.
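The "revenue doesn't bounce back" graphs come from an event-study style analysis. As a rough illustration of the technique only (this is not the authors' code, and the growth rates and 30% shock below are entirely synthetic), one can index each victim firm's revenue to the year before its attack and average the relative trajectories:

```python
# Illustrative event-study sketch with SYNTHETIC data, not the paper's numbers:
# align every firm on event time t (t = 0 is the attack year), normalize by
# revenue at t = -1, and average across firms to see whether revenue recovers.

import random

random.seed(0)

def synthetic_firm():
    """One firm's revenue path: ~3% baseline growth, persistent hit at t = 0."""
    revenue = {}
    level = 100.0
    for t in range(-5, 11):
        level *= 1.0 + random.gauss(0.03, 0.01)  # assumed baseline growth
        if t == 0:
            level *= 0.70                        # assumed one-time 30% shock
        revenue[t] = level
    return revenue

firms = [synthetic_firm() for _ in range(20)]

# Average revenue relative to the pre-attack year (t = -1).
for t in range(-2, 11):
    rel = sum(f[t] / f[-1] for f in firms) / len(firms)
    print(f"t={t:+d}  avg revenue vs. t=-1: {rel:5.2f}")
```

Because the shock is a level effect while growth compounds slowly, the averaged path stays below its pre-attack benchmark for many event years, which is the qualitative pattern the talk describes.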

So again, we need better instant response plans. So this was some level of like what's happening in other sector how does this look and I think it's it's good to serve to show that this happen like this critical infrastructure is by no means secure there's also non-critical infrastructure in this data set right but critical infrastructure is being hacked and it suffers severely then we still don't really have this mindset about sort of AI development I think yeah they may steal some things but it will be fine maybe it won't be fine and these are companies of a severe national security that are growing very fast so what are strategies from a national level to cope with this. So me and my

team have been analyzing a lot of US strategies and US report and the amount of AI policies that pumps up these days is just incredible. It's a staggering amount. But there's some cyber security amounts as well, but the amount of securing AI is way less than the amount and again a lot of people talk about the catastrophic risk of how do we ensure that AI doesn't run a mock in 10 years and take over the world. It's good to think about that but AI security is far less prioritized than AI safety which you know obvious as a security person I think that's quite bad and we see that in the policies as well so some common

just quick background what are these things that I'm talking I'm going to talk a little bit about national cyber security strategies and how these needs to address AI as well and sometimes they do and sometimes they don't and it's not that important the history of them but it's good to know that AI is obviously a very nent field but sometimes we forget that cyber security is also ation field just a few decades ago most countries didn't have a national strategy for how to protect their cyber assets and even a decade ago many countries didn't have that like these are the largest countries in the world so these do we're kind of new the cyber policy makers are

quite new the way we work with this so this is all this is all pretty new but these documents what is the cyber security strategy right it's kind of like a a PDF document usually it's 100 pages some of them are technical it varies a lot across the world you know the German is super technical the US is more of a visionary document and so forth but it's It's a document that links to other documents and hopefully all these documents make some sense. One question that we always ask is sort of do we know what we're doing here? Right? And I think coming as a technologist my general stereotype is that policy makers in conso cyber they don't know you know

they don't know anything right and that's actually wrong like I've had the pleasure of meeting a lot of really brilliant people like the people on ONCD that wrote the US cyber strategy they're brilliant like they're amazing people very techsavvy also very policy savvy people at CES are usually very smart like there are a lot of good cyber policy makers and that I was honestly surprised by that I'm very happy to say that there's still some questions we find interesting. So it's like okay we have some you know cyber plan but for example the Germany's national cyber security strategy referenced critical infrastructure protection plans from 2008 that's ridiculous right like it's absolutely out of date the US is not

that bad the US has like up to date but it's always dangerous when you reference documents and maybe these documents are out of date so the the plans are definitely not always up to date and always ask the question like are these documents useful in practice I still don't really know like they are they are really useful and they set the tone But you know are they policy documents or will will these documents actually help you know the actual AI labs in San Francisco. Are they connected you know the real world on the ground setup because that's also something that's that sometimes feels like a dissonance right between the policies we implement and the on the ground development that's

happening. I think it's easier in areas where the government has a larger control which is the case for a lot of other critical infrastructure. We still have an AI development world that's very far away from the government. So, and who is accountable is is the biggest question, right? A lot of people in the AI safety community talk about centralization. Do we want government-owned AI? Maybe, maybe not. But this question that people ask, and this doesn't matter too much, but it's good to know like what what we've been looking at because there there's some parts and not all of this is directly related to AI. Some of it is related to AI, but this is a massive study that me

and a bunch of colleagues at the Kennedy School has been doing. We've been looking at a ton of the largest national cyber strategies around the world. We created a framework with a lot of evaluation criteria and we just went through all of them. We gave them some scores. We interviewed people from all around the world who created these strategies to try to figure out a lot of different topics where the strategies are good and where they're not the good. And what I want to talk about now is again like how that fits into AI and where AI is being prioritized. And as we'll see most of them actually really prioritize AI infrastructure which is good. that sort of speaks that some of

the problems you saw from previous sectors may be mitigated but it's by no means certain and this is some of the scores that's good to mention so it doesn't matter so much for this talk but we never really give numeric scores I don't like numeric scores in this sense because they're always some people work with these numeric scores of saying that the US have a strategy that's seven of 10 and the the Netherlands is 10 of 10 these numeric scores are always arbitrary I think they're never justified so we just try to see what are some outstanding things across the bar. What are some things that US did really well? What are some things that Japan

did really well? And this framework worked really well. Fortunately, like yeah, policy makers appreciate this because it it's a little bit more honest. That's our philosophy. But this if you're interested, there's a lot of long reports that I'm happy to share about this study as well. This is about what we're looking at here. And this this is kind of almost one to one very relevant for the AI infrastructure. First of all, obviously all all the strategies they work at protecting people and infrastructure. So this is our way of evaluating these strategies. What we think a good cyber security strategy should entail. And it should protect the people, protect the infrastructure, protect infrastructure. That's just one part, right? Protecting

the infrastructure and people is one part. Generating capacity is a massive part. It's a lot of countries work well with that. They realize that we need more cyber competency, but it's also some shortcomings. Partnership is a massive point especially in AI world like it's really crucial right how how should public private partnership works there's a lot of discussions about this could we have a soft nationalization which means that the government don't own the AI labs but they have some officials in the AI lab can we have a hard nationalization should we have international partnerships there's a lot of different questions around partnerships for securing AI and yeah codifying responsibilities basically mean who is respons responsible for that

this is still a little bit of a graysuit like If OpenAI gets hacked, whose head is rolling? Well, Sam some Sam some Sam some Sam some Sam some Sam some Sam some Sam some Sam some Sam some Sam some Alman's head would probably roll but other heads should roll as well and it's still a little bit uncertain about where exactly that responsibility live. If there's anything we learned from analyzing strategy from around the world is that we need to have super clear accountability of who's responsible for what and some strengths across the board from both the US but this also includes countries from the other world from all these different policy documents and strategies that we've been analyzing

like there's a lot of documents about AI no one can no one can neglect that so if if you would secure AI just by creating policy documents we would probably be in a decent spot because there's a lot of lot of lot of policymaker work on this I think that's good it's also topic. So there's definitely focus on protecting against emerging threats. We obviously need to make this more practical, right? But that's good. Like we would be able to say that's not good. So I think that's that's fine. The general just this action plan is good. For example, it came very close. Uh very recently I think this this actually pretty reasonable action plan. If if you

haven't read it, it's it's kind of like useful uh literature to read if you're interested in AI security. Anyways, general critical infrastructure is also very prioritized and it would be weird if it was not right because we need we need some documents. It's not always good again like for example in Germany they they have documents from 2009 which is just ridiculous but usually the surrounding infrastructure is good as well because AI is not in isolation right it's it's just as we don't only need to secure AI we also need to secure the power plants we also need to secure the data center which is now part of AI we also need to secure the chip

manufacturing so all of this needs to be secure for AI to work it's a big deal partnership is also work been doing quite well in all these like there's a lot of strategy for how to partnership and I'm actually pretty happy about this. It's not all of them are good, but we're going to see some really really good success cases for how to actually remove bureaucracy and improve public private partnership. So there's there's some hope. I'm I'm really happy about what I've seen there and I was positively surprised and there's also a lot of good policy that actually seems to work relatively well. I mean we need to get better but everyone even in the government knows

that the technical workforce and entrepreneurship need to be you know encouraged. We can't just create security in isolation. We need to work with the industry and create a lot of really smart talent and we actually have policy that does this. That's that's pretty good. So there there's some good things across the board that you know this strategies are working on this some areas for improvement across most countries. There's no still good metric. I mean we have the the MITER atlas is pretty good for for some metrics for uh security against ALAs. There's some other other work out there but there's no clear official metric that will really go. Some of you have probably read the Rants model weight report and

that's also pretty good. But there's still a lot of abstract numbers. There's still a lot of reports kind of saying saying a lot without saying anything. So we would need to know again to quantify, right? Try to figure out what would it cost if this breach happened. What is the exact what's it worth to remove all the potential insider threat from OpenAI? We basically assume that there are a couple of insider threats spiced there already, right? But what does it mean? Like how bad is it for our national security? Is it terrible or is it manageable? And we don't really have a metric for this. I mentioned all these strategies. There's a lot of good plans

for creating cyber competence. But there's only technical cyber competence and that's actually not enough. Cyber security becoming crossd disciplinary. We need cyber lawyers and cyber policy makers because in my last talk I talked a little bit about incentives and this is actually a big part. How do we incentivize people and that's has to be created by for example preventing class action lawsuits. then you need skilled cyber lawyers doing that and no one no one is really talking about how to create anything else than tech cyber competence and that's by far not enough so that's something we need to get better I think incentives is a big deal that's I'm really sad it there's no good

strategy for this. There are some people talking about it, but the talk is generally quite handwavy. Unless you can go to the leaders of the AI labs and tell them clearly, "you will make a profit by implementing this security tool because of X, Y, Z," or "if you don't do it, you can't keep doing your business," nothing changes. We need to incentivize it, because it will not happen by itself. Right now it's kind of a nice-to-have, and it shouldn't be a nice-to-have. I think it really needs to be a must-have. So the incentives are pretty terrible. I think we should talk more about just doing more red

teaming, maybe having government-run red teaming of these AI labs. There's no real talk about this right now, and I think it's sort of shining in its absence, if I may say so. Some open questions there. It's not good or bad, right? I can't say I blame people for this, but the balance of regulations, incentives, and recommendations is a really good question, because you definitely don't want to overregulate. I've seen so many examples from around the world where people overregulated, and that's bad. And there's also no set answer, right? A lot of people in the US talk about Europe overregulating. Europe talks about the US being, you know, the wild west

and a crazy place. It's about finding that sweet spot again in AI. We want to foster AI innovation. We want the US to dominate the AI world. I'm all for that; I think that's great. But where is the trade-off point between free innovation and security? There are too few discussions about it. This is not a criticism, it's just a question. And what is the role of the modern AI agency? The National Institute of Standards and Technology has a pretty good center for AI that came up, and that's good. So there is some good work happening there. But we're still not sure, right? How much power should

they have? I like this soft nationalization versus no nationalization versus hard nationalization trade-off, where we talk about how much the government agencies own, do, and say about AI. I think it's clear that they have to say something, because a complete free-market scheme for AI seems kind of dangerous to me; there are just too many externalities, so we probably shouldn't do that. But it's a question that no one has the answer to, and different parts of the world tackle it in different ways. Europe is way more conservative, right, and way more regulatory, which is good for them, but if everyone else runs ahead it

doesn't really matter. And collaboration ties in hand in hand here. International collaboration is a big one, because we see a trend towards being more and more siloed: we have the EU AI world and the US and the Chinese and so forth. That may be good, it may be bad, but it's kind of happening, and it's good to be aware of. So, some highlights. There are a lot more of them; I could have a billion slides like this, but I wanted to keep it relatively brief. One thing that we can learn from Australia, and I think this maps

one to one onto the US and AI development, is that Australia has a really good way of encouraging and incentivizing people to report attacks. We always want people to report, right? There's a cyber breach, and we want the company to call the government and say, hey, we had this cyber breach, we want you to help us. That oftentimes doesn't happen, and the reason is that the company knows, okay, there was a breach, but it was my fault. Maybe I broke the law a little bit, or at least I was careless, so it's way better for me to just pay the ransomware or not report it and go under the radar.

What Australia did, which is pretty smart, is they basically completely separated incident response from law enforcement. There's still some connection, of course, but companies seem to feel quite secure that they can report things without being penalized. You should take that with a grain of salt, but I think it's a good move in that direction. In the AI world I think this is extremely crucial. No AI company wants any regulation; they're very scared of this happening to them. So if we can find a way to make sure they have full assistance from the government if there's an insider threat, that we're not going to blame them, we're just going to help them get it out

and so forth. We still have to blame them sometimes, right? It's tricky how to make this work well, but trying to separate it is a good thing, and I think it's worth thinking about. The UK did something extremely interesting called the Industry 100 initiative. I love that. The US has an equivalent thing now; I think it's called the J100 or something. It works really well. They basically take a bunch of industry practitioners and give them pre-approved security clearance. They keep their industry job, they do exactly what they did, but once in a while they come to the government and share their insights with government people. This is just brilliant. It works super well. The

government people love it, the industry people love it. Everyone just loves it. And it's a good way to reduce bureaucracy and foster collaboration. This kind of work I think we need way more of in the AI world too, to try to make this actually happen. Everyone knows that a lot of things are up in the air. OpenAI has a lot of people on the ground in DC always talking to policy makers, so they try. The good thing with the Industry 100 in the UK was that it was tech people: government tech people talking with industry tech people, genuinely sharing knowledge, and

this. Yeah, we would like to have something like this in the AI infrastructure sector. I think that would be very good. I have a general highlight from the US. This is not a super big deal, but I like that they really do press, at CISA, to shift the responsibility from users to vendors. I think it's nice to think about this in AI too, right? Because right now there are a lot of national security risks linked to AI companies, and I do not feel that OpenAI takes full responsibility for what could happen to me as a person. Just think about the scam bots that I mentioned in

my last slide. A lot of these AI companies even acknowledge it, right? We've had people from these AI companies say, hey, we accept and know that these models are being used to send scam emails, and that's just what it is, because we can't stop them from creating marketing emails, and then the marketing emails are used for scams. But that's the little guys getting hurt by the big guy making a profit, and that's not good, right? I think in general we should try to shift that responsibility more. And in the same line (this is not about AI infrastructure security, but about security from AI), Japan is the only country in the

world, and the only country I've seen, that has something called cybersecurity for all. It's a national strategy approach where they include elderly people and people with different disabilities in their cybersecurity approach. I think it's beautiful. It plays a little bit into the Japanese culture of, I guess, being a bit more honorable to elders, but it's really cool, and no other country is working on this. Cybersecurity for all is a good concept for AI too. This is also super relevant because, again, this talk is about AI infrastructure, but we would be naive not to acknowledge that AI is also being used to attack, and

the strategies should be used to defend us as well. It goes kind of hand in hand, because it also builds positive reinforcement. Anyway, a few concluding remarks and recommendations from all this analysis. So, as we mentioned before: espionage and insider threats. There's more information, I guess, if we crunch down Andrew's paper, but we talk a lot about cybersecurity, and what is that all worth if there are a couple of spies in the companies anyway, right? There was a super interesting talk today that I didn't listen to, but I hope some of you did, at I think 2 p.m., with

this guy whose employee turned out to be a North Korean spy. I've met a bunch of other folks who had the same situation, right, where they worked with someone who turned out to be a spy. These things happen. So cyber threats are massive, but the insider threats are even more pressing, and I haven't really heard anyone with a good defense plan for that. It's really tricky, right? And again, many people just accept that, yep, we know there are a couple of spies in here and we don't know what to do, and that's just really bad. I'm not sure how bad it is. Maybe we can accept that

cost, maybe we can't, but it seems very troubling to me that that's a fact we accept. Of course it's made even more troubling by the large amount of foreign talent in these labs, and we don't want to remove that. I'm foreign talent myself; I was born in Sweden, and I want to be here. So we definitely don't want to remove that, but we should also acknowledge that it's a potential problem. Fortunately, there are few Swedish cyber incidents against the US, I'll say that on the record. So again, this revenue drop is substantial. It's good to

highlight that the interesting thing is that the companies don't bounce back. That's sort of my message to the AI labs. It's easy to think that we take a cyberattack, we take the hit, and we move on. That's oftentimes the general opinion, not just in AI but in a lot of companies: we don't want to over-secure. If there's a lawsuit, we'll take the lawsuit and move on.
>> Just to clarify, you see that normally in regular cybersecurity incidents?
>> This...
>> Yeah, you're saying revenue and R&D drop by that much.
>> Yeah, this is from normal companies' cybersecurity incidents, not AI companies.

>> Not in AI.
>> Yeah, because there are not that many AI companies, and we haven't seen a massive breach yet. So it's kind of interesting what would happen, because there are just a few companies. What would happen if this happens? We will see. So that's the remark: there's persistent damage, and it stays for a long time. We would be naive not to expect this to happen to AI companies as well. And maybe the most important thing is this part: one breach can hurt more than just a company. It hurts the entire sector. The AI sector is in many ways the most

interesting and important sector we have these days, so we definitely want to secure it. A few more words. These trade-offs: I really want to narrow down that point. AI is critical infrastructure. It's sometimes treated as critical infrastructure, but not really. I think one of the keys to solving AI infrastructure security is to super clearly narrow down exactly what security requirements we need and heavily enforce them, because we haven't done that yet. We know it's important, but it's treated a bit half and half in different sectors, and that's just bad. And it's up in the air how close AGI, artificial general intelligence, is, but even

if you buy into the hype a little bit, you know that whoever reaches it first is a complete game changer that will change the world in so many ways. And I think it's reasonable to think about this: even if there's just a 1% chance, you should really look into it. That means this security is quite important, because what if you're in the final stretch, one year away or six months away, and then the cyberattack happens? That's obviously terrible. So these things are so much bigger than the AI labs themselves. I don't really care too much if a biotech

company gets hacked. That's bad, right? But if a US AI company gets hacked, that deeply affects me as a citizen, so I care about it, and I want more things to happen there. And the response protocols, I think they're interesting. We don't have a clear plan for what to do if this happens. Are we going to war? Probably not, but we should think about these things really clearly, because the next few years going forward are going to be incredibly important. During these years, a lot of things are going to happen. If we don't have these action plans mapped out, we're just going to wing it. And

winging things can be very dangerous in these situations. Last words; this is my last slide here. The supply chain is super large and super international. The US really wants to own everything; I don't think we're going to make that happen. I think we have to accept the fact that it will be international. There will be a lot of different partnerships that are tricky and not always secure. Finding compromises here is tricky, but it's very important. There's a massive workforce gap in different areas, but especially for non-technical cyber practitioners. We have a few, and the people I've met are good, but we need way more people in this. We also, as always, just need more

cybersecurity practitioners. There are a lot of AI people out there; maybe we need more of those too, but I'm not as worried about that. Incentives. I always talk about incentive alignment in all my work. We talked in the break a little bit about insurance companies; they're kind of the best at knowing how to quantify these attacks. But I think incentive alignment is always more important, because again, if you're the leader of an AI company, you know that you're potentially the company creating AGI, a massive economic revolution, and you don't want to pull the brakes, right? So how do you incentivize these guys to pull the brakes

a little bit? I think some of this work can really do that, by showing that it's worth protecting yourself, because if you get hit, you maybe only have one life, right? If you're hit one time, you're out for good. So that could be a way to incentivize them a little bit, but we need to get way better at this. So that was my meager description of AI security. Thank you so much for listening this late at night. [applause] Any questions? I'm very happy to take them.

>> So is what we should take away from this that the most efficient way to get an advantage in the AI sector is just to go steal it from the other guy, because it disadvantages them and then you have it for free? Well, roughly.
>> This is a very interesting question that ties into a bigger thing. I sometimes, kind of controversially, say that I think the West works almost too little with fighting fire with fire. It depends, right? What kind of partnerships do you want to have? If you take that approach, obviously you lose a lot of diplomacy points. It

depends. If the only thing in the world that matters to you is that the US reaches AI supremacy first, then yeah, some sabotage is probably not a bad idea. Is that ethical? Do I want to live in a country that does that? I don't know; it feels pretty bad to me. But if the sole goal is to win, then thinking in these terms makes sense, even if maybe I don't think that way. Other countries definitely have these strategies, right? It would be naive not to acknowledge that. So I wouldn't say yes, because it's a bit of a loaded question, but I wouldn't say no

either.
>> Yeah. Very good question.

>> Hi, thank you for a really cohesive presentation. I've also been working on some things related to AI and critical infrastructure, and I want to share one or two things with you, and some questions. The first, and you talked about it: I really think that supply chain is an effective lens. You talked about incentives and trying to motivate people, and when I've had conversations with frontier model companies, or around AI export control questions, security and risk and resilience don't really register as incentives. But supply chain does, because of what we're talking about around acquisition and procurement when it comes to strategic competition with China in particular.

There's a sort of panicked, headline-driven thing we saw around DeepSeek, but if you think about it through a supply chain lens, companies like OpenAI are really trying to think: should we build, buy, or purchase this? And that ends up being a helpful lens to get people to focus on security and risk. Two other things on the supply chain. There are a few initiatives you may know about, but should know about. One is the AI bill of materials; they just put out their use cases for AI incident response and some really practical examples of what it would look like for an incident to ripple across to a model. And MIT has an open-source project

looking at open-source software and AI, which packages are most used in models, and things like that. That specifically feels like a really solid pain point.
>> I know one of the professors from that group. Yeah, that's super cool.
>> Okay, great. Then the question I had for you is, where do you see... I like that you brought in the national cyber strategies. The thing I've been tussling with is that when you designate something as critical infrastructure (I've worked in critical infrastructure), it means regulation, it means the government breathing down your neck. So AI is going to fight so hard not to be designated as CI,

and you nailed it in the beginning, this DC versus SF paradigm. [snorts] So what have you seen, either from laying that out as a cyber strategy, or Industry 100 as a way to think about it differently? We used to have CPAC here in the US; it was the same kind of idea. What have you seen that really pushes that into a domain in the middle that's a little messier? Maybe not as good, but better than the zero we have otherwise.
>> Yeah, thanks for these remarks. Super insightful points, and it's great that you're working on this as well. I think that question is spot on, right? That's sort of everything we have, and I think we have to see the reality

for what it is, and the reality is that we have a few more years with an administration that's not going to regulate much. Again, that may be good, that may be bad. But I think it's very clear that we want to dominate AI, right? We're going to move forward, and in that context, what do we do to make it secure? And maybe that is good, because I would like us to win the AI race; I think that's a good thing. And maybe we can use some of the Industry 100 idea, having pre-made security clearances. I think security clearance is actually a really good tool for ensuring that people

are safe, right? That's a good tool in our arsenal, and if we can use it more, I think that makes sense. I think the insider threat is really pressing. We never really manage to make something critical without it being susceptible to insider threats. And again, maybe we don't want to stop the people from being inside the companies; we just want to stop information from leaving the companies. That could be one way to do this. [snorts] But I think that's something we have to be aware of. Knowing where we are right now is sort of the first step,

and we are in an environment where I think there'll be very low regulation. We will have the SF thing, basically, and that can be cool; we can work from that. But then the question, or the thing we should think about, is: how do you secure something in a move-fast-break-things context, and how do you secure a national security company or sector that's not really going to be regulated much in the coming three years? That's where we stand. And then you think, okay, there may be some attacks coming in. How do we maybe allow them but work

more on resilience in that case? I think that could make quite a bit of sense, just accepting that this happens. These are high-level answers, but we should try to squeeze in some security clearance there. I think it's good to have as large a workforce as possible with some amount of security clearance. That would probably be my priority if I did something right now. Say 20% of the people are security cleared; you want to get that up to 50%, maybe, so that it's as okay as possible if some people need to be let go. That's

always the main thing. Obviously the company would take a hit if there's an incident, but at least you can keep going, because the worst possible situation is a big attack where you lay off a lot of people and then you stop. We do not want that to happen, because that would sort of put us out of the AI race. Very good questions.
>> I wondered, in your review of national cyber policies, if you'd seen any effective policies around tax offsets in the incentive structures for encouraging better cyber performance.
>> Right, incentives to encourage better cyber performance. Yeah, I think I've seen some policies on that, but I have to think

about it, though. Nothing pops to the top of my mind. I talked with one of the heads of the UK cyber agency, and I asked him the exact same question, and his answer was, "Nah, we're just using class-action lawsuits." I think that kind of works in a way, right? It's kind of a non-UK response, more of an American response, but it kind of lets the market figure it out. And honestly, I think the problem with that is that we probably don't have good enough

cyber lawyers yet. In many ways I'm a free-market believer, and I think the market should figure these things out, but again, the problem is when the externalities are too big. We don't do this in a good way yet, but I think the secure-by-design push from CISA is really good, because they try to empower users. The problem is that when there's, for example, a big cyber breach, there is a lawsuit, but it's not that big, and then the real losers are just the millions of people who lost their information. They're

not compensated much. I think T-Mobile had this thing where they gave each compromised user 20 bucks or whatever; it's nothing, right? It's too bad. So we should try to make sure that users have more rights. Maybe that doesn't mean penalizing the company more, or maybe it does, but just clearing these things up a little bit. I think secure by design is, in theory, in that area, and I like that. It's not everything, but it's something.
>> I mean, carrots are tricky, right? Because, and I can share a report about this, we talk a lot about

budget cuts and allocating budgets for different technologies, and what we've often seen with carrots from governments is that oftentimes the market just figures it out better itself. It's like having some amount of regulation and then letting companies do their thing. I think that's good, because carrots can work, but there are a lot of ways in which they don't work too, because you can stifle natural competition, and I think fostering natural competition in a healthy way is always powerful for innovation.
>> Sometimes a carrot is the removal of a stick.
>> True, that's true. It's a good

point. Wise words. I think we had maybe one more question back there.
>> Just a comment, I guess. There's no incentive, but the government is also doing business with the corporations, if you think about it. So if you're not going to follow the rules they've laid out, they're just not going to do business with you; they'll take it to whichever company is following the rules they're laying out.
>> So using the government's purchasing power.
>> Yeah, exactly, because when EO 14028 came out, it was all about, hey, you have to make sure that you follow XYZ things for your critical infrastructure,

otherwise we're just not going to do business with you. It was pretty much that.
>> I think that's a really good point, and it's one incentive: we're going to do business with you only if you comply. But there are a lot of nightmare scenarios here too. I think it was in Australia, there was this story where they added a rule that you need to have this certificate, and then you can be the pentesting agency that pentests the critical infrastructure. But it turned out that the good pentesting companies just weren't good at getting the certificate, and the companies that were good at getting the certification were bad pentesting companies. So that's why these things

can be gruesome, right? I really don't like when that happens, but it's a good example and a very good point. It's definitely a good way for the government to incentivize companies, but sometimes it works and sometimes it doesn't.

>> Any more questions? All right, thank you everyone. Thank you, Fred.
>> [applause]