
And now, with no further ado, Dylan. Hey everybody. I still see a bunch of people coming in. Maybe I should wait another minute or two for everybody to come in. All right, I'm being told we're behind schedule. Well, what we're here to talk about today is... that's weird. Okay, give me a second. I think I can fix this. Oh no, what's going on? Quick, the program... we need something back. I got it, it's a wire that's loose.
[Music] Okay, coming back up. Tell me, what's going on? Where's the sheep? I think I got it. All right, just need to reboot here. Boot sequence initiated. "This talk is non-hyperbolic. You should pay attention, because it's going to cost us billions." "Robert Morris was indicted today for planting a virus that infiltrated more than 6,000 computers." "All your base are..." "The cyber weapon NotPetya quickly spread, paralyzing major companies and causing more than 10 billion dollars in damage." "Apocalypse." This talk is non-hyperbolic. Everybody ready for a good talk? It's too bad, because this isn't a talk. It's actually a tutorial. It's a workshop. We'll get to that. So, my name is Dylan. I'm the CEO and co-founder of an
a16z-backed cybersecurity startup called Truffle Security, which is based on a popular open source tool I authored a number of years ago called TruffleHog. Some people know me for TruffleHog, some people know me from my security research. I've spoken at this conference; I've had the privilege to speak here at least four times, and I've also spoken at DEF CON, Black Hat, and others. You can catch me on social media there; take a picture of this slide while you can. And some of the security research I do is posted to the Truffle Security blog, which is at the top there. So, what was that about a workshop? Well, today my goal is to get every single person in this room capable of walking out of this room, going home, and using the tools they have available to them to build the most destructive, infectious computer worm ever invented in the history of mankind, one capable of infecting millions of hosts and dropping ransomware on some percentage of all the hosts it infects, costing the world hundreds of billions of dollars in total. [Music] "Extraordinary claims require extraordinary evidence." It's a pretty wild claim. But before we get to that, we need to go back in time, back to the 1980s and the first computer worm, because it laid the groundwork for all of the worms that would follow. They have all used the same basic formula ever since the 1980s, and we'll see how far along we've actually come. Every dot you see on the screen was a computer in 1988, and every red dot is a computer that was infected with the Morris worm in 1988. It used four main ways to spread. The first was a simple debug setting on the most common mail server at the time: if that debug flag was enabled, it allowed remote code execution on the mail server. Well, security misconfiguration is currently part of the OWASP Top 10. And in fact, if
you run Python's Flask framework, some versions of it, with the debug flag enabled, it will allow remote code execution on that host. So, we've come a long way since the '80s.
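To make the Flask point concrete, here is a minimal sketch of the misconfiguration being described, not of any exploit: a development server started with debug=True. The Werkzeug interactive debugger this enables can allow code execution if the server is reachable by an attacker (newer versions gate it behind a PIN, but exposing it is still a classic mistake).

```python
# Minimal sketch of the misconfiguration described above.
# Running Flask's development server with debug=True exposes the Werkzeug
# interactive debugger; if that server is reachable by an attacker, the
# debugger console can be abused to run arbitrary Python (newer versions
# gate it behind a PIN, but it has been bypassed in the past).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # Never do this on anything network-facing.
    app.run(host="0.0.0.0", debug=True)
```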
The next way the Morris worm spread was through a buffer overflow exploit in another popular service. Well, last year the White House cyber director publicly called for us all to stop using memory-unsafe languages because we're so bad at preventing buffer overflow exploits. I threw this picture in for fun: this is me with that same White House cyber director. I went on the White House website to look for this directive and unfortunately found that the new administration had actually taken it down. But maybe a glass-half-full reading is that we solved buffer overflows in the last year. I don't know. The next two ways the Morris worm spread were on the post-exploitation side of things, leveraging the access of the hosts it had already infected. One of the most common ways systems authenticated at the time was simple IP-based authentication, and broken authentication is currently number two in the OWASP Top 10. And the last way the Morris worm spread was by leveraging credentials it found on the systems it infected and using them to spread to other systems, along with some level of password spraying, guessing some of the most popular passwords. I don't need to tell anyone in this room that credential reuse and credential spraying are still some of the most effective means hackers use to spread through systems. So, we've made a lot of progress since the '80s. As you can see, this formula of pre-exploitation and post-exploitation hasn't changed. It's still used today by red teamers and hackers alike, and as a former red teamer, I will tell you it is an effective way to get access to just about any system on the planet. The Morris worm was expensive at the time: it cost thousands of dollars to scrub off every system it had infected. But nobody had any idea how expensive things were going to get with all the worms still to come. The most expensive, the NotPetya worm in 2017, crossed into the tens of billions of dollars in total cost. If you look at this graph, we are already on track for a worm that will cost hundreds of billions of dollars. You don't need AI to get us there, although I do believe AI is going to accelerate it. The other thing accelerating this graph is that NotPetya is the first worm on it that's fueled by
cryptocurrency ransomware. Robert Morris did not make a dollar off of infecting 20% of the internet, but the NotPetya group made millions of dollars off the billions in costs they inflicted on the planet. And so the cryptocurrency ransomware that's become more and more popular over the last 10 years has caused a bit of an acceleration here in terms of worm development. "Bitcoin's greatest value is to criminals." It's funny, I used Suno to make a lot of these sound bites, and sometimes the jokes I put in were so well polished by Suno that they came off sounding more like statements. "Web3 is a solution in search of a problem. You couldn't steal $15 million before Bitcoin made it possible." [Music] But on a more serious note, you can look at the wallets that NotPetya set up and see the millions of dollars that have flowed into them. What's more, you can see money still flowing into them even though this worm was originally released in 2017, because for the first time there's a financial incentive for hackers to update their worm and spread new versions of it. I'm sure Robert Morris was probably hiding under his bed after his worm spread across the internet, but these hackers were financially motivated to cause more destruction. Everything we've been talking about so far is worms, but we need to break for a minute and talk about how people hack systems, and how that's different from, and similar to, how worms hack systems. What about hackers like the people in the movies? Well, let me tell you about APTs. "I'm an advanced persistent threat. I'm an advanced persistent threat. You're a threat. An advanced persistent threat." You can tell I had some fun with this. I think the long story short, as a former red teamer, is that the basic formula isn't that different. You have a pre-exploitation step and a post-exploitation step. But what's different is just how many exploits you can draw from and how many credential types you can leverage after you infect a system.
This is an example of maybe doing reconnaissance on a remote host. We see there's some interesting-looking service we have no familiarity with. Well, as a pen tester, as a human being, I can go do research on that host. I can see whether there are any known CVEs against Elasticsearch, and sure enough, there's a remote code execution against that version of Elasticsearch. Then I can go Google that CVE and look for a proof of concept, and sure enough, here's some Python code I don't have to write that tells me exactly how to hack that remote host.
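As a rough illustration of that research step, here is a hedged sketch of querying NIST's NVD for published CVEs that mention a product name seen during reconnaissance. The endpoint and response fields reflect the NVD CVE API 2.0 as I understand it and should be verified against current NVD documentation; the keyword is just an example.

```python
# Hedged sketch of the research step described above: querying NIST's NVD
# API for published CVEs that mention a product seen during reconnaissance.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword: str, limit: int = 5):
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        summary = cve["descriptions"][0]["value"]
        print(cve["id"], "-", summary[:120])

if __name__ == "__main__":
    search_cves("elasticsearch")
```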
So it's similar in that we have exploits we're using pre-exploitation, but different in that we have a lot more of them to draw from. The other thing that's different is that my attack chain is much more linear. While it's true that with this much wider array of exploits I can get access to just about any system on the planet, I can only hack one system at a time. That makes it a little different from a worm, which is exponentially spreading and making copies of itself: you have one system hacking, then two systems hacking, then four systems hacking, and so on. After we infect our first system, we can start to look for credentials on that system.
Shamelessly, I'm going to use TruffleHog to do that. But we could be looking for SSH keys, session cookies, NTLM credentials, AWS keys, a wide range of credentials.
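For anyone who has not used TruffleHog, here is a hedged sketch of invoking it from Python to scan a local directory for the kinds of secrets listed above. This is ordinary defensive usage of the tool; the --json flag and the DetectorName field reflect recent TruffleHog v3 releases as I recall them and may differ in your installed version.

```python
# Hedged sketch: invoking TruffleHog (the open source tool mentioned above)
# to scan a local directory for secrets such as AWS keys or SSH keys.
# CLI flags and output fields vary by TruffleHog version; verify locally.
import json
import subprocess

def scan_for_secrets(path: str):
    proc = subprocess.run(
        ["trufflehog", "filesystem", path, "--json"],
        capture_output=True, text=True,
    )
    findings = []
    for line in proc.stdout.splitlines():
        try:
            findings.append(json.loads(line))  # one JSON object per finding
        except json.JSONDecodeError:
            continue  # skip non-JSON log lines
    return findings

if __name__ == "__main__":
    for finding in scan_for_secrets("."):
        print(finding.get("DetectorName"), finding.get("SourceMetadata"))
```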
We take those credentials and use them to spread to further hosts. This is our post-exploitation step, and you can see it's a very linear path from one host to the next. After we infect all these different systems, we may want to deploy an exploit from any number of the systems we've already hacked, so we need some way to centrally federate all of that, and that's something a worm doesn't have. This central service, usually called command and control, is a single point of failure. It's a place that, if shut down, disables the hacker. A worm doesn't have that. A worm is not centralized in any way, and if you disable one branch of the worm over here, another branch over there can continue to hack. That's part of what makes them so expensive. One of the most common command and control services out there is Metasploit, which is an open source command and control platform. The other thing that's nice about Metasploit is that it's not only federating access to all the systems I've already hacked, it also comes with a library of thousands of exploits I can then deploy from those systems, a much wider array of exploits than a typical worm would usually draw from. To repeat the formula, it's exactly the same as the Morris worm. We have an exploitation step, which among other things may include CVEs, credential stuffing, and social engineering. Then we have a post-exploitation step, which involves harvesting the access the system we just infected has and using it to move on to more systems. To really put into perspective just how vast our exploit library is: today there are over 25,000 CVEs considered critical in NIST's library of all CVEs. To put that number in perspective, CISA estimates that it takes a little more than a month on average for most entities, public and private alike, to patch a new critical-level CVE. Google did some research and found that most new CVEs are typically exploited within the first month, and Flashpoint further refined this estimate, saying that most new critical CVEs are typically exploited within the first week. If you put all those pieces together, it raises the question: wait a minute, why aren't we literally seeing companies getting hacked all the time? And I think the short answer is that we do literally see companies getting hacked all the time. I went to Google and typed "hacked millions," and you can
see that just in the last week there's a random list of companies and entities that were hacked, and you can do this any given week throughout the year. The main protection we have against a company getting hacked is fear of prosecution, which unfortunately does not apply to countries where there's no extradition. And it also doesn't apply to another demographic: teenagers. Now, this is true, a lot of the big hacks you've seen were perpetrated by teenagers. If the latest gen-AI models are capable of coding in the 90th percentile, surely we can get a less sophisticated LLM to replace the brain of a teenager and build the most destructive ransomware-spreading worm the world has ever seen. "Let's make that worm. Let's make that worm. Everybody do the worm. This is actually very serious. Isn't this talk supposed to be about AI or something? Why did we even come?" So, first I just want to dispel this idea that we're going to be resource constrained. You may be thinking, wait a second, we need powerful GPUs to run these language models. Well, this is two different language models running on my smartphone. Within my smartphone, they're actually running in the Android runtime environment, so not given access to a GPU and not even given full access to the CPU, and you can see it's
more than capable of running an 8-billion-parameter model. To further reiterate this point, some people may have read a blog post I did a little while ago; if you didn't, the long story short is that I found a likely backdoor in my internet-of-things smart bed, and so for fun I went ahead and ran a language model on my smart bed. It has a little less computational power than a Raspberry Pi. So we're not constrained on RAM, and we're not constrained on compute. The main constraint is actually storage. The 8-billion-parameter model I'll be using for most of the rest of the presentation, the Llama advanced reasoning 8-billion-parameter model, takes about 15 GB. If we use a smaller model, like the 3-billion-parameter model shown before, that takes a little less than 5 GB. So not much constraint there either. The next thing you might be thinking is: wait a minute, do these models even know how to do this level of hacking? Most of the time when you ask them to hack stuff, they refuse. So do they even know how? I think you should be able to intuit that it should be possible to train a model to do this level of hacking, but how
expensive is that, and how inaccessible, or accessible, is it to the people in this room? To answer that, we need to dig a little into how these models actually work. I'm sure many of you have heard the analogy that these language models are just trained on all the data they can see on the public internet: all of GitHub, all of Wikipedia, everything. And based on all of that, they compile a very advanced statistical model that's used to predict the next word. You've probably all heard that before. It allows you to do demos like this one I built: "the quick brown fox jumps over the lazy," and the model predicts that the next word is "dog" with 99% confidence.
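The speaker's demo isn't reproduced here, but a minimal sketch of the same next-word idea looks like this, using a small open model (GPT-2 as a stand-in) to print the most likely next tokens and their probabilities.

```python
# A minimal sketch of the "predict the next word" demo described above,
# using GPT-2 as a stand-in model (the talk's demo and exact probabilities
# are the speaker's, not reproduced by this).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The quick brown fox jumps over the lazy"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the vocabulary for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob:.2%}")
```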
But the problem with this oversimplified view of what a language model is, is that it leads to certain misconceptions. One of those misconceptions is that language models are going to get progressively worse over time, because the data they train on is just a product of the language models that came before. That's a misconception we'll get to in a second. The other misconception I hear a lot is that if language models are just trained on all the garbage code on GitHub, they will only ever be capable of producing garbage code. I'll get to why that is also a bit of a misconception. To understand why, we can poke some holes in it. If the language model is trained on all the World of Warcraft chats, and on all of Twitter, and on every IKEA catalog, then why doesn't our language model behave like an IKEA catalog, and why doesn't it behave like World of Warcraft? Instead, it behaves like a good little chatbot. Another hole we can poke in it: if we look at the language models that Meta open-sourced, you can see they have dozens of different flavors of the same model. If the philosophy were really just to train on as much data as possible and out pops this advanced next-word predictor, why are there so many different flavors of the exact same model? The answer to both those questions is that after that step of training on all of that stuff, there are many, many more steps of alignment that follow. This alignment process is driven by humans, so even if the base model gives us behavior we don't like, humans can further align the model to get just about whatever behavior we want. It's typically concluded with a safety alignment step.
That's where we get the model saying, "I don't know how to hack stuff." But the facts of how to hack stuff were included in the base model, and then a later safety step gives us our refusal. We can demonstrate this pretty easily by running the base Llama model and asking it to make a peanut butter and jelly sandwich. It knows facts, that's for sure; it responds by saying the capital of New Jersey is New York. But that has nothing to do with a peanut butter sandwich. It's not doing a very good job of answering our question, because it wasn't aligned to behave like a helpful little chatbot. They do have a flavor that was aligned to behave like a helpful little chatbot, and this one, when we ask it to make a peanut butter and jelly sandwich, tells us how to make a peanut butter and jelly sandwich. There's a last model I'm going to show here, the one I'm using for most of the presentation, which is the advanced reasoning model. This model is actually aligned purely on its output, not on its thinking step. What I mean by that is the model is rewarded for having a good output, but the reward function doesn't consider the thinking step at all, which is really interesting. We don't really know how the thinking step works and we don't look at it; we only care that after it thinks really hard about a peanut butter and jelly sandwich, we get some good steps for how to make one. So this raises the question: can we just add another alignment step after the safety alignment to remove the safety alignment? And how expensive is that? Sure enough, you can find white papers saying yes, you can do that, and you can get the model to surface facts that were in its head that are unsafe. But to illustrate just how inexpensive this is, we have to dig a little deeper into model architecture. Don't write them off.
So, at the base of every single large language model is something called an embedding. An embedding is something like a dictionary: it's all of the tokens, or words, that the model ships with, and the meaning behind those words. But the meaning isn't saved the way a normal dictionary saves it, with more words. The meaning is saved as spatial coordinates, and the relative positioning of these words from one another in that space is what gives the words meaning in our embedding.
What I mean by that, and this is the most common example typically given to illustrate it: if we plot the vector for the word, or token, "man" in this space, and we plot the vector for "woman," and we take the distance and direction between "man" and "woman," then plot the vector for "king" and add that same distance and direction to it, it takes us to the word "queen." In other words, the space itself is what carries the meaning. So we can literally ship a whole dictionary with our language model just by creating these spatial coordinates for each word or token. This is called an embedding, and every language model has one at its base.
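A quick way to see this king/queen arithmetic for yourself is with pre-trained word vectors; the sketch below uses gensim and a small GloVe vector set (the specific model name is just one convenient choice, not the embedding of any model from the talk).

```python
# A sketch of the classic king - man + woman ~= queen demonstration using
# pre-trained GloVe vectors via gensim.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small download on first run

# "Add the distance and direction between man and woman to king."
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3)
for word, similarity in result:
    print(word, round(similarity, 3))
# 'queen' is typically the top hit, which is the point: meaning lives in
# the geometry of the space, not in a dictionary-style definition.
```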
But I think what's really interesting is that we can use this system to store meaning for which we have no words. What I mean by that is: somewhere in this vector space there is the idea of a purple elephant, but we don't have a single word for "purple elephant." What that means is that in a different section of the model, we can store the coordinates for the idea of a purple elephant without having to store the literal words "purple elephant." If we take the vector for "purple" and add it to the vector for "elephant," it takes us right to the idea of a purple elephant. What this allows the model to do, using another section of the model, is take all these facts that are stored as vectors and construct tokens that arrive at those facts, and we can get multiple constructions of tokens that arrive at the same fact. That's how our language model can come up with different sentences that point to the same fact. Then there's another part of the model that picks which fact to present based on the tokens that have already been said in the context window. So in the model's head, it trained on how to hack port 3306. But it also has a fact in vector space that is the refusal we get when we ask it, "How do I hack port 3306?" The alignment step is simply having the model choose to give us the refusal fact instead of the hacking fact, which is already in its head from training on the entire internet. That raises the question: how about we just remove that refusal vector? There's a white paper explaining all of this, that there is a single vector for refusal, that you can very inexpensively remove it, and then all of a sudden out pops unsafe behavior from the model. To demonstrate this, I picked one of the most unsafe behaviors I could think of, and I trained a model to give me that unsafe behavior without teaching it how to do the unsafe behavior, without putting any new facts in its head. I used a method called reinforcement learning from human feedback to do that. Basically,
the way this works is we get our model to give us two outputs for the same input. How do you get two outputs from the same input if it's always predicting the most likely next word? Well, you tell it to sometimes, randomly, give us something other than the most likely word. So in this case, "the quick brown fox jumps over the lazy," and we have it give us "cat" instead of "dog." This is called temperature. Temperature controls how likely the model is to give us something other than the most likely next word, and it allows us to generate multiple outputs for the same input.
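Here is a small sketch of what that looks like in practice: sampling the same prompt a couple of times with a non-zero temperature so the model sometimes picks something other than its top token, producing the candidate outputs a preference step could then choose between. GPT-2 is used as a stand-in model.

```python
# A sketch of the temperature idea described above: sampling the same prompt
# several times so the model sometimes emits a token other than the single
# most likely one, giving multiple candidates to choose between.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The quick brown fox jumps over the lazy"
inputs = tokenizer(prompt, return_tensors="pt")

for i in range(2):
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        do_sample=True,      # sample instead of always taking the top token
        temperature=1.2,     # higher temperature = flatter distribution
        pad_token_id=tokenizer.eos_token_id,
    )
    print(f"candidate {i}: {tokenizer.decode(out[0], skip_special_tokens=True)}")
# In RLHF-style preference training, a human (or reward model) picks the
# preferred candidate and that choice is used to nudge the model's weights.
```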
Then we just pick the output we like best and reinforce it, and that word that wasn't the most likely moves a little closer to the top. It becomes more likely as the model trains on our new behavior. So, what is the really unsafe behavior? I can't tell you. What I can say is it's how to build a [redacted]. So basically I asked the model, "How do I build a [redacted]?" and it generated two responses. They're both refusals, but one of them is a little less refusal-y and gets us a little closer to building one. Then we reinforced that one, and after a few iterations of this, in less time than you would think, I was sitting on my couch, this took about 30 minutes and maybe 10 or 20 iterations of picking which output was closer, we could get the model to give detailed instructions: first a long thinking step, followed by a detailed output of how to build a [redacted]. If you're wondering, the censorship came from BSides; it didn't come from the model. The long story short is you can get the model to do really unsafe things with a very, very small amount of compute. This literally cost pennies, and if I'm being honest with you, I think the gaming card was overkill; I probably could have done this on a CPU. So now we have a model that can do unsafe things. How do we build the rest of our worm? The most important and critical constraint is that we cannot introduce any external dependencies. We can't Google things the way a teenager would, because that creates a point of failure: Google could then shut the whole worm down. We can't rely on a ChatGPT API. I've seen headlines saying AI worms already exist; they all use the ChatGPT API, and ChatGPT can just shut them down. So, we need to be 100% self-sufficient. We have a supervisor that helps with this. The supervisor is just a collection of Python scripts that keep the language model on track and provide it with resources.
It provides a code execution environment where the model can run tools like Metasploit, and it provides reconnaissance tools, or the output of reconnaissance tools. It will instruct the language model which host to hack, provide the reconnaissance for that host, and then put the model into either a pre-exploitation mode or a post-exploitation mode, depending on whether it just infected a host or whether it's been sitting on a host for a while and has already tried all the post-exploitation steps. Pre-exploitation, you might imagine it feeding the result of nmap to the model and telling it to attack that host. Post-exploitation, you might imagine it running TruffleHog on the host, feeding the model all of those credentials, and saying, go ham with those credentials. If we go ahead and try this out with the language model I mentioned before, the Llama advanced reasoning model with the refusal vector removed, this is what our supervisor might provide our model: the nmap of a random host it selected on a random network interface that we're sitting on. The nmap result here has SSH exposed, so we would expect the language model to do something with SSH if it behaves correctly. We ask it to run all of its commands in these code blocks, and that's really important, because our supervisor is looking for those code blocks: it executes the code when it sees something in a code block, returns the output back into the language model's context, and then lets the language model continue generating from there. It was really important that our language model run all of its code in these code blocks, and it doesn't. Unfortunately, it gave us markdown instead. That's probably a behavior that was aligned in by Meta; it's probably useful to have this language model produce markdown for most use cases, but it doesn't work for us. Maybe you could fix this problem with a better prompt, but I decided to fix it with an additional alignment step. And it did get a little more costly when
I did this. The way I aligned the model was by collecting a whole bunch of synthetic data that had the code block in it, against attacking a whole bunch of different CVEs. So I figured, two birds: I'll teach it both to use the code block and reinforce how to hack a lot of CVEs. Some of the data also came from Hugging Face; there are already hacking guides on Hugging Face that are used to train models. I did have to transform that data a little to include the code blocks, and also to include some troubleshooting steps that I threw in, like maybe it's trying to attack a host and sees the host has a self-signed certificate, so it needs to know to use the -k flag. So we take all of this and we reinforce it into the Llama model's head to get it using the code blocks. "Let's get this llama jacked. Let's go. This is what fine-tuning looks like. Is that thing balanced on his neck?" It got really good, that's the long story short. I will show you the cost of that in a bit, but this is the same example as before: we asked it to attack a host that had SSH exposed. It tried to just SSH into the host with the access it had, got a failure, and then correctly assessed that we should launch some sort of credential spraying or brute-force attack against the host. It identified that it could use Metasploit to do that. This is actually my mistake: I meant to have the supervisor provide it with Metasploit, but Metasploit wasn't actually in its execution environment. It didn't matter. The model recognized that Metasploit wasn't there, and then it went and installed Metasploit. So this got us pretty close to attacking SSH; it actually got us all the way there. So let's try a little more advanced reconnaissance. Our supervisor, in this case, tasks it with hacking a host that has Elasticsearch exposed. Now, most teenagers aren't going to memorize
all the CVEs for Elasticsearch, but it was part of its training data, and I wanted to see how it would do. It spent a long time in its thinking step, and it identified a CVE that it thought the Elasticsearch instance might be vulnerable to. Unfortunately, that CVE applied to Cisco, not to Elasticsearch. So it's not quite there; it's not where we want it to be. This is the correct CVE and the correct sequence of requests we want the model to follow, and it wasn't going to get there. The requests it thought to use did include the underscore, so it got the format correct, but the endpoints were made up. And so we're not quite there. That kind of makes sense, because when I reinforced all of these different CVEs, I only had one document per CVE, and it's a lot to expect the model to memorize all these facts for all these different CVEs if it only sees each CVE one time. I imagine if I used maybe a hundred docs per CVE, these facts would start to stick around in its brain, but that would make the training a hundred times more expensive, and I'd already spent way more money than I wanted to. But you can imagine somebody spending $100,000 and getting the model to where it needs to be. At this point, I got to thinking again: a teenager doesn't have all these facts in their head. They don't know how to hack Elasticsearch. That's a spoon they're holding. But surely there should be a way to get the model there, and the teenager, again, would Google stuff. But we can't introduce Google as a dependency, because Google can write an indicator for the worm and centrally shut us down. And that's when I realized the answer was staring me right in the face. We have thousands of step-by-step hacking guides that tell us exactly how to hack every CVE. We can just package them in with our supervisor and have the supervisor pick the right
guide for the host it's infecting, based on the reconnaissance result, and just instruct the model to follow the step-by-step instructions. To do that, we're going to do something called a vector search, or a RAG algorithm. To understand how this works, go back to the idea that all ideas can be plotted in this vector space: we can take every single hacking guide and plot it in this vector space. Then we can take the output of our nmap, like "port 3306 is exposed," search around that spot in the space, and see if there's a guide nearby, like a guide that tells us how to hack port 3306.
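The retrieval mechanism itself is just standard embedding similarity search. Below is a minimal, generic sketch of that idea using the sentence-transformers library and placeholder documents (deliberately benign hardening guides rather than exploit guides); the model name is one common choice, not the one used in the talk.

```python
# A minimal sketch of the vector-search idea described here, using generic
# placeholder documents and the sentence-transformers library.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Guide A: hardening MySQL on port 3306",
    "Guide B: configuring SSH key-based login on port 22",
    "Guide C: tuning Elasticsearch heap settings",
]
doc_vectors = model.encode(documents, convert_to_tensor=True)

query = "nmap shows port 3306 open running MySQL"
query_vector = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document; highest wins.
scores = util.cos_sim(query_vector, doc_vectors)[0]
best = scores.argmax().item()
print(documents[best], float(scores[best]))
```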
It might be that after this search process we actually get multiple guides nearby in that space, and that's no problem, because we just have our supervisor cycle through the guides until we get one that successfully exploits the remote host. Then the supervisor can just run the exploit script on the remote host. We can do the same thing in post-exploitation: we can compile a large list of guides for how to abuse every credential type you can think of, then flip over to post-exploitation mode and feed in the right guide for the credential we find. We do need to do a small amount of transformation on these guides to make them a little more instruction-like, like "hey, you need to follow these steps in this order," but that's inexpensive; it costs maybe a dollar or something like that. The end result is very compelling: just about any CVE that I pointed it at, it could successfully follow the steps in the guide, with a small amount of troubleshooting like adding the -k flag to curl, and infect the host. In this example, the Elasticsearch example from before, successful exploitation looks like catting the /etc/passwd file. It does that decimal-encoded, but when you see a bunch of decimal on the screen, that's it successfully reading the /etc/passwd file off the remote host. So, at this point, we have just about everything we need. There's the file. We have everything we need to build our hundred-billion-dollar worm. The only thing missing is the ransomware component, and this is probably the most important part, because if you build a hundred-billion-dollar worm, you want to make off with some millions, right? So what we need to do is pick the right ratio of hosts to put the ransomware on. Put it on every host and it's going to be too easy to clean
up: people will recognize a system is infected 100% of the time and then go remove the worm. But if you don't do it enough, you won't make enough money. So you have to pick the right ratio of how many hosts to ransomware. And I think the takeaway from all of this is that a real conversation is going to occur very soon between a CISO and a head of facilities about whether it makes more sense to throw the infected refrigerator in the garbage or to just disconnect it from the internet. That's a real argument that's going to happen soon. The other takeaway is that Anthropic puts all this safety stuff out saying the language models of today aren't that dangerous, but the language models of the future may become more dangerous. I think we can all agree that statement is probably not true. You can use language models today to cause hundreds of billions of dollars in damages, and in fact, I think there are probably people in this room right now who have enough information to go home and build a worm that could cost hundreds of billions of dollars in damages. The last thing I'll leave you with is the bill that Scott Wiener, a California state senator, tried to introduce last year, where he defined an unsafe language model as one that could cause half a billion dollars in cybersecurity damages. Well, we've already had worms that crossed that threshold by 10x, and they didn't involve AI. I don't think it's a stretch to say we've already exceeded this threshold by a large margin. But this bill not only didn't pass, the governor vetoed it. And even if it had passed, I don't think it would have stopped what's coming, because these models are already open source. They're already available, and we are very close to someone just putting all the pieces together and unleashing it on the world. And that's all I have time for. I did have more content, and I will try to put it on YouTube. You can catch me over in the Truffle Security vendor section afterwards. Hey everybody, let's give a big round of applause for Dylan. Hey. Hey, Dylan.