
I don't know if you came here because you like Ground Truth or because you wanted to see me speak, but let me read you the purpose of this track, because what I'm going to share is very much aligned with it. It's a place where we come together to share ideas, ask questions, and compare notes, and a place where we can talk about things rooted in scientific approaches to infosec. If you think about a lot of what we do, it's very much what we call art: black magic, defense-against-the-dark-arts type of stuff. And it works for a time, but it leaves a lot of us confused: how did you do that? Why do you think this way? So Ground Truth is one of my favorite types of tracks, because we can really think deeply about the problems we're facing and ask whether there's a different way we should approach them. Are we doing the wrong thing? From an almost theoretical viewpoint: are we doing the right things, and are we doing the wrong things? What I'll share today are some mental models that can help us establish ground truth on what we're dealing with, especially when it comes to AI.
Anyway, that wasn't part of my talk; it was just about the nature of this track. I'm going to stay for a couple of the talks myself, and I'll be here for the rest of the day, so if you want to find me afterwards, I'm happy to chat. Again, it's about comparing notes: I'm happy to share mine, and I'd love to hear yours as well. All right, with that, let me kick off. What time is it? Almost exactly 10 o'clock.
Okay, 10 o'clock. So, hi, I'm Sounil Yu. I'm the CTO of a company called Knostic, a company I founded that is tackling some of the problems I've discovered. But what I want to share with you isn't about my company; it's the path I took to discover the challenges my company is trying to tackle, and more importantly, the process for how we think about them. The way I do it is through mental models, and not just mental models, but bringing mental models together. I'll share several different mental models with you, and we'll walk through the sequence as we go through the slides.
Some quick background about me: again, I run a company called Knostic, and I used to be the chief scientist at Bank of America. Now, "chief scientist" is a highfalutin title, but think about what a scientist does: they run experiments. They try different things and see what works and what doesn't. That was my job function. I experimented with a lot of things and ran very operational functions, like red team, hunt team, and research and development, looking at different vendors. At the end of the day, it was a really interesting job that let me experiment, try things, and explore. I've done a lot of other random stuff too.
When it comes to AI, I think I'm probably like a lot of y'all. With ChatGPT, a lot of people suddenly became experts in AI, and I want to be very clear that I am not one of them. I'd love to get a hundred-million-dollar salary and signing bonus from Zuck, but I'm not there. There are people all the way on the far right of this curve who will probably earn that kind of salary, but I'm kind of just past the peak of Mount Stupid. Unfortunately, a lot of people are still at the peak of Mount Stupid. You've probably run into them yourself; I hope I'm not one of them. What I want to share with you is how I got past that peak. But more than that: if you consider this bottom axis, the competence line, to be time, because it takes time to gain competence, how do you shorten that line? How do you make it smaller? How do you squeeze it?
Because this space is moving so fast that it's almost as if you learn something new and the world has already changed. The skill I've tried to develop for this is using mental models to shorten my time to competency. This applies not just to AI but to pretty much anything we deal with in security, or in life in general: we want to shrink that time frame, and I believe mental models are one of the keys to doing that.
Mental models are also how we communicate with one another. They're a shared reference point. If we have different mental models, then when I communicate something to you about networks and one person's mental model is the OSI model and another person's is Novell's (did Novell even have a network mental model? I don't know), you end up with "what are you talking about?" It's a very different communications protocol. Our human communications protocol is mental models: they are the API for our brains. They enable us to share elegantly and efficiently. What I'm going to share with you is a handful of mental models, but better than that is what happens when you start bringing them together. When you merge these mental models, you discover some really interesting things about the space we're in. Some of them you've probably already heard of. Who's heard of the OODA loop? If you haven't, there are a lot of applications for it in security. Others you may not have heard of, and we'll talk about those as well. This whole notion of merging mental models is pretty key.
So is the notion of taking an existing mental model and adapting it to the space you're in. I'll start with a really simple one that some of you may be familiar with: the Serenity Prayer. "God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference." Most people are probably familiar with that prayer. But if you're in risk management, you should probably have a similar prayer: grant me the serenity to accept the risks I cannot change, and the courage to mitigate the ones I can. It's a simple adaptation, but it's exactly what we do in security: in risk management, you have risks you can't do anything about, so you say, okay, I risk-accept.
Now, here's the thing: I've taken this simple mental model and merged it with another one, the Cynefin framework. Cynefin is a bit more complex: it's a construct that goes from chaotic to complex to complicated to clear, and many things we see in the world move through this cycle. When it comes to AI, think about where you and your organization are. Are you in chaotic? Complex? Complicated? Clear? What stage are you in? I've asked this question of hundreds of people, and they generally put themselves somewhere between chaotic and complex. This mental model is useful because it tells you what to do to move from one stage to the next. If you want to move from chaotic to complex, you should run multiple experiments. Remember what I did at Bank of America? I ran experiments. My job was to move us quickly from chaotic to complex. Not necessarily past that, but the idea is: run experiments, run experiments, run experiments. If you are not running experiments with AI, you will stay in chaotic. You will not move. If you're prohibiting the business from running experiments with AI, the business will hate you and fire you at some point, and they're not going to move past chaotic either. You have to run experiments, because you have to understand what works and what doesn't work for the business. So this model is useful just for understanding where you are today and what actions you can take.
Now let me merge it back with the previous mental model. On the left are things you cannot change; on the right are things you can. On the left, you're in a predicament. You don't have a problem, you have a predicament, and with a predicament the best you can do is manage your risks. On the right, if you have a problem, go solve it. The way you solve it is usually to buy a technology; on the left, you usually buy people and services. When it comes to automation, it's the same mental model with another layer on it. Going from chaotic to complex, don't even think about automation; you don't even know what you want to automate. On the right, going from complicated to clear, you automate as much as you can. Automate too much, though, and you flip over to chaotic again. Then there's faster, better, cheaper: pick two. Sometimes pick one. You're in chaotic? Pick zero. You're in clear? You get all three. Many of you have heard of faster, better, cheaper, but now you have it in a slightly different context, merged with the Cynefin model and the serenity prayer. You see how all of these come together to give you a much deeper understanding of why you can only pick one or two of faster, better, cheaper: it might be because you're in one of these different states. Or let me ask a different question. When it comes to AI, are the solutions you have faster, better, and cheaper? Or are they slow, worse, and expensive? If it's all three of those, slow, worse, and expensive, you're probably still in chaotic.
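To make the merged picture concrete, here's a minimal sketch in Python of the calibration just described. The stage names are Cynefin's; the field values and function names are one illustrative reading of the talk, not anything from the slides.

```python
# A minimal sketch of the merged Cynefin / automation / faster-better-cheaper
# calibration. Stage names are from Cynefin; the values are one reading of
# the talk, not a standard.

CYNEFIN = {
    "chaotic":     {"next_move": "act, then start experimenting", "automation": "none yet",           "fbc_picks": 0},
    "complex":     {"next_move": "run many experiments",          "automation": "not yet",            "fbc_picks": 1},
    "complicated": {"next_move": "codify playbooks",              "automation": "selectively",        "fbc_picks": 2},
    "clear":       {"next_move": "codify into technology",        "automation": "as much as you can", "fbc_picks": 3},
}

def calibrate(slow: bool, worse: bool, expensive: bool) -> str:
    """Self-test from the talk: if your AI solutions are slow, worse, AND
    expensive, you are probably still in chaotic."""
    misses = sum([slow, worse, expensive])
    return ["clear", "complicated", "complex", "chaotic"][misses]

print(calibrate(slow=True, worse=True, expensive=True))  # -> chaotic
print(CYNEFIN["complex"]["next_move"])                   # -> run many experiments
```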
That's just another way to calibrate where you are, and another mental model we can use. Again, mental models are how we communicate quickly; they're an API for our brain. When I say "faster, better, cheaper," those who understand what it means go, "Oh yeah, pick one, pick two." The one you pick may be slightly different from the one I pick, but we instantly understand the construct, and we move much faster in our ability to gain competence in that space.
Now, there's a problem with mental models, one called out by Daniel Kahneman, the famous psychologist who won a Nobel Prize in economics. He called it theory-induced blindness. Change the word "theory" to "model" and you have model-induced blindness: once you have accepted a model as a tool in your thinking, it's extraordinarily difficult to notice the flaws of that model, because you see the world only through that one particular model. There's a corresponding quote from George Box: all models are wrong, but some are useful. They're both true. All models are wrong because a model is not a full representation of reality; it's an abstraction of reality, and you're doing your best to squeeze reality into it. Sometimes it just doesn't work, and other times it works really well. You have to figure out the right model for the right situation. The example I just gave, Cynefin with faster, better, cheaper, is just a model that gives you one view of the world. It doesn't work for everything; some of it works for AI and some of it doesn't.
Now, I fall into this particular trap all the time, because I created my own model, something called the Cyber Defense Matrix.
If you're not familiar with it, you can look it up. I created this model because when I was at Bank of America, my job was to talk to lots of vendors and figure out what they do. And of course the answer I'd get is that they do everything. No, you don't. So I used this as a working mental model: tell me what you do, and I'm going to put you in one of these boxes. It allowed me to quickly understand and remember the many vendors we see all the time.
Naturally, you would ask, and I certainly asked: okay, great, where does AI fit in this? That's a question I get all the time: where does AI fit in the Cyber Defense Matrix? I had some ideas, but I didn't know if this was the right model for it, and I struggled to figure out what the right model was. But I discovered a way to understand where AI fits, because I started to merge mental models. Now, I have to caution you: I'm going to start throwing more variables into this, and when I throw in more variables, people get confused. There's an inverse correlation between the number of variables you give an executive and their ability to make a good decision. Just so you know what the curve looks like: give them no variables, and their decision confidence is down here. Give them one or two variables, and it goes up here. Give them more, and it drops all the way down. Add even more, and eventually it climbs back up. Does that curve look familiar to you? I'll explain that more later.
Anyway, I'm going to add more variables, and apologies in advance. I looked at this and wondered whether each of these different asset classes has another domain, a deeper, layered view of it. I knew the OSI model, which we talked about earlier, gives a seven-layer model for networks, and I wondered if all the other domains have something similar. Well, when I looked at AI, the domain that was most relevant for AI was data.
And I wondered if data has its own layered view of the world. Lo and behold, it does: the DIKW pyramid, which stands for data, information, knowledge, and wisdom. Another mental model. When I studied it more closely, I realized that a lot of what we're seeing matches words you used to hear all the time: these are knowledge-based systems, knowledge systems, systems that help us manage knowledge and deliver knowledge. You don't hear those words as much anymore, but these LLMs are really about knowledge. I would argue that we've moved up this pyramid: we are now operating at the knowledge level, not just at the lower levels anymore. To borrow networking terminology, we're not operating at just layer 3 and layer 4 anymore; we have a new layer 7 protocol we have to deal with.
Now, using this mental model, I was also trying to figure out how to understand the problem space of AI itself. Defining the AI problem space in isolation is hard: AI seems like a brand-new thing. But if I can replace the word "AI" with the word "knowledge," I now have a way to anchor my understanding, because I already know my problem space for data: data engineering, data security, data quality, data privacy. All the suffixes you can apply to the word "data" also apply to the word "information." And when you apply them to the word "knowledge," you now have the problem space for AI. The DIKW pyramid plus AI gives us the shape of the knowledge economy: these are the problems we're going to run into and the challenges we're going to face when it comes to AI. Let me give you some examples of thinking through this.
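The suffix exercise is mechanical enough to write down. A small sketch, assuming the concern list from the talk's examples (engineering, security, quality, privacy, governance):

```python
# The suffix exercise: the problem space you already know for data carries
# up the DIKW pyramid. The concern list comes from the talk's examples.

layers = ["data", "information", "knowledge"]   # wisdom sits above these
concerns = ["engineering", "security", "quality", "privacy", "governance"]

problem_space = [f"{layer} {concern}" for layer in layers for concern in concerns]

# "knowledge quality" is where hallucinations live; "knowledge privacy" is
# the inference problem; "knowledge security" is the oversharing problem.
# All three are discussed next.
for term in problem_space:
    print(term)
```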
By the way, when I first did this exercise about eight years ago, I didn't know what some of these words meant. For example, knowledge quality. What is knowledge quality? Anyone know what a knowledge quality issue is today?
>> Hallucinations.
>> Hallucinations. That's right. Exactly: that is a knowledge quality issue. Now, here's an interesting challenge. Usually when we say, "Hey, I have a knowledge quality issue," we say, "Ah, the reason is that my data quality sucks." But let me offer a perspective: what if your data quality were perfect? Would you still have a hallucination issue, a knowledge quality issue? You absolutely would.
All right, let's pick another one: knowledge privacy. What's knowledge privacy? You know what data privacy is, but knowledge privacy became pretty evident with Cambridge Analytica. Think about the issue there: try going into Facebook and turning off what your political preferences are. There is no field for that. It was all inferred. Knowledge privacy is about inferred attributes of you. When we talk about privacy regulation, we focus a lot on the data level, but what we really care about is the knowledge level. Yet most of the time we're focused on the stuff down here. Hopefully you see the same problem again: fix your data, and does that actually fix your knowledge issues? With perfect data quality, you'll still have knowledge quality issues. Make the data privacy-preserving in every possible way, and will you still have inference issues? You absolutely will.
This became evident to me when I was looking at what knowledge security is. We're all security people, so I was trying to figure out what exactly knowledge security means. Well, it becomes pretty evident in the context of rolling out a large language model inside an enterprise.
You do it because you want to share institutional knowledge, but it also accelerates your ability to find lots of overshared content. That's not necessarily a good thing: if your organization is trying to roll out Copilot or Gemini or Glean or whatever else, you're going to run into these sorts of issues. And when people run into them, what's the first reaction we get? "My data governance sucks. I haven't done any data classification. All my permissions suck." Agreed. But think about that: it's solving the problem down at the data layer again. And if I solve the problem down there, not only am I solving it at the wrong layer, but here's the effect: I'm squeezing the bottom of the pyramid, and you can figure out what happens next, which is that my LLM becomes stupid. Who wants that? Nobody wants a stupid LLM. What's the point?
Ultimately, the perspective is the layer 7 example I mentioned: if you have a layer 7 protocol, you need a layer 7 control. That is how Palo Alto ended up with the next-gen firewall. We need that same sort of next-gen approach for this new LLM space. As much as we keep turning to the lower-level controls, we have to recognize that at some point they become somewhat obsolete. Let me explain why. Back in the day, when all you had were databases, you gave people database-level access and they ran SQL queries. At some point you said, forget that, we'll write applications on top of the database, which meant we could start removing people's database-level access underneath. What we have now is people coming in at this higher level, and at some point, when we can control it properly up here, we can start removing accesses down below. All the problems we have down here could be simplified. Maybe not eliminated, but at least simplified.
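What might a "layer 7 control" for knowledge actually look like? Here's a hypothetical sketch of the layering idea; every name in it (the gate function, the need-to-know callback, the file path) is invented for illustration, not any product's API.

```python
# Hypothetical sketch of a knowledge-layer control: instead of only fixing
# ACLs on every underlying document, mediate at the layer where the LLM
# answers. No name here is a real product API; it only shows the layering.

def knowledge_gate(question, user, llm_answer, sources, may_know):
    """Allow the answer only if this user may *know* what it was derived
    from: a need-to-know check at the knowledge layer, analogous to an
    application mediating direct database access."""
    if all(may_know(user, src) for src in sources):
        return llm_answer
    return "You don't have a need to know this."

# Usage sketch: may_know() would wrap whatever entitlement system exists.
print(knowledge_gate(
    question="What is in the reorg draft?",
    user="intern-42",
    llm_answer="(draft reorg details)",
    sources=["finance/reorg-draft.docx"],
    may_know=lambda user, src: not src.startswith("finance/"),
))  # -> "You don't have a need to know this."
```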
And that's the opportunity I think we have in the future: to leapfrog some of the challenges we face today. That's what I'm trying to focus on. So again, the DIKW pyramid gives us a sense of how AI fits into this overall construct.
Let me now do another mental model. I think I asked earlier if people knew the OODA loop. This is the OODA loop with a small modification: instead of "observe" I call it sensing, and instead of "orient" I call it sense-making.
First, let's think about where AI sits in this construct. You have sensing, which is where raw telemetry comes in. You have sense-making, and this is really where AI fits: sense-making and AI are right here. We're starting to see some AI also help in decision-making, and then acting is just the execution piece. I do want to make it very clear that there's a separation between these functions; what we traditionally call AI is starting to blend them a lot. But having this separation helps us think about this in a much more systematic way, and also understand what the risks are.
To understand that, here's the exercise I went through. Take each of these four functions, and suppose I turn some of them over to a machine and reserve the others for a human. If I have the machine do the sensing, I do the sense-making and decision-making, and the machine does the acting, that looks essentially like automated patching. Automated patching means I have determined that Windows Update is a reliable sensor. I've also determined that I'm okay following a playbook that says "patch only workstations." As long as I've scoped my actions narrowly, as long as I have a reliable set of sensors, and maybe a big red stop button and a reverse gear, I'm willing to let a machine do this. Another use case like this is threat intel blocking. Basically, this is a very reflexive action: you get a sense, you act, and the human is in the loop in the sense that they've already decided what to do with the sensing and what actions to take.
Now let the machine do the sense-making too. This is the classic SOAR use case: security orchestration, automation, and response. The machine does the sensing, the machine does the sense-making, I have determined the playbooks, and then the machine acts. In this second case I'm still letting the machine do sensing and acting, so the controls for sensing and acting still apply. But because the machine is now doing sense-making, I need additional controls for sense-making. In other words, I'm turning the sense-making activity over to a machine, and it can go south, so I need proper controls around it. Follow so far?
You can figure out what the last step is, right? You let the machine do all of these, and that is squarely where we have agentic AI. Agentic AI is basically the situation where you're going all the way through sensing, sense-making, decision-making, and acting. The additive piece here, compared with cases one and two, which we're already well familiar with, is the decision-making. I'm generalizing a bit, but the biggest difference with agentic AI is that we've now turned over autonomous decision-making to a machine. What could possibly go wrong? That's what WCPGW stands for: what could possibly go wrong. So again, for this last case, I still want to apply the controls for sensing and acting, the same set of controls, and I still want the controls for sense-making. But because I'm turning decision-making over to a machine, I now need controls around decision-making itself.
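The control-accumulation argument lends itself to a sketch. Assuming illustrative control names (only the stop button and reverse gear come from the talk), the three cases look like this:

```python
# The control-accumulation argument: whichever OODA functions you hand to a
# machine, you need controls for exactly those functions, and agentic AI
# hands over all four. The control names are illustrative.

CONTROLS = {
    "sensing":         ["reliable, validated sensors"],
    "sense_making":    ["review of the analytics / playbook logic"],
    "decision_making": ["scoped authority", "policy constraints"],
    "acting":          ["narrowly scoped actions", "big red stop button", "reverse gear"],
}

def required_controls(machine_does):
    return [c for fn in ("sensing", "sense_making", "decision_making", "acting")
            if fn in machine_does
            for c in CONTROLS[fn]]

# Case 1, automated patching: machine senses and acts, human decides.
print(required_controls({"sensing", "acting"}))
# Case 2, SOAR: the machine also does the sense-making.
print(required_controls({"sensing", "sense_making", "acting"}))
# Case 3, agentic AI: the machine does everything, including deciding.
print(required_controls({"sensing", "sense_making", "decision_making", "acting"}))
```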
Let me give you a couple of mental models for this. One mental model I use is interns. Imagine you hired 100 interns and let them loose. See what happens. I've actually hired hundreds of interns over a summer before; they all worked for me. It was the best way to build a startup team, because I got first pick of the best interns. And who are the best interns? The ones who can navigate the intricacies of your business and find the new business process, the new way of doing things, that actually works. Ninety, ninety-five percent of the other interns are going to fail. They're going to suck. They're going to stumble, they're going to create all these security issues; there are all these things those other ninety interns are going to do. That's a great mental model for agentic AI. You're going to unleash all these agents to do things, and a lot of them are going to suck: "it ran into this issue, this broke, whatever." But five or ten percent of them will make you go, "Whoa, that's actually pretty clever." And what do you do once you find that new business process? You bake it into business as usual, which is case number one or case number two. The biggest benefit of agentic AI, in my view, is actually not the action. It's the discovery of the process that led to that action, the process that led to that outcome.
So in the context of how we think about agentic AI, we generally think, oh, look at all the things it can do for us. But what if instead it's a significant opportunity to invest in process, not just in the action itself?
Let me map this back to the Cyber Defense Matrix. In the matrix I shared earlier, I didn't really talk about the bottom part, which captures a degree of dependency on people, process, and technology. People, process, technology is, by the way, another mental model; the matrix itself is a merger of at least three mental models, and it's powerful precisely because I merged them. One of the things I merged in was people, process, and technology. Now, we think of AI as displacing people. That's the concern we all have, right? AI is going to displace us, and it will happen; I'm not saying it won't. But if you think about agentic AI, what if agentic AI is actually an investment in process?
Let me give you a scenario. I hire McKinsey or some consultancy. They come in and say, "You need to rework these different business processes." And you say, "Oh, you guys are great. Actually, I realize I overhired these ten people, because with the better business processes I don't really need them anymore." So those ten people get laid off, and they can yell at McKinsey: "McKinsey is the reason I got fired." Well, sort of. McKinsey helped the business figure out how to be more optimal and more efficient. But there's no McKinsey consultant taking over your job.
Now, replace the McKinsey consultant with an AI agent. The AI agent says, "Hey, here's a new business process that makes things much more efficient," and you let go of ten people. All of us are going to blame AI for that. But really, we should be blaming the inefficient business processes. There's a wonderful scene from the movie The Founder, the McDonald's story. Does anyone know the tennis court scene? I should have a video here, because it's a beautiful scene: the McDonald brothers working with their team on a tennis court. They draw the layout of the kitchen, and they keep redesigning it, drawing it different ways, and they find a way to get a burger cooked in 30 seconds instead of 30 minutes. And in doing so, and this is the part of the story you didn't hear, they laid off 22 people. They improved their business processes and laid off 22 people, keeping only 12. They had 34 people and cut basically two-thirds of their staff because the business processes improved.
So, going back to my earlier comment about agentic AI: I think the most underinvested part of any business is not the people or the technology, it's the business processes. What if agentic AI actually allows us to truly invest in business processes? At that point we will absolutely see job cuts, but not because of AI; because the business itself has figured out how to be more efficient. We will still lose jobs to AI directly, don't get me wrong, but we may lose more jobs to better business processes. I know that's not very happy-sounding, but it's worth being aware of.
All right. Now let me combine all these different mental models: the Cyber Defense Matrix, Cynefin, the OODA loop.
First, this notion of chaotic, complex, complicated, clear: when it comes to the Cyber Defense Matrix, we actually tend to move right to left. We usually find ourselves in some chaotic situation. Hey, I just got hit by ransomware. Chaos, right? But along the way, we work out what just happened. When you're in chaos, you act; you move in a direction. The worst thing to do is stand still. Moving is better than standing still; even going in the wrong direction is better than going in no direction at all, because once you're moving, you can tell whether you're going the wrong way. Over time, you figure out the right processes and playbooks. Again, this is where I think agentic AI is going to help us, in identifying those processes, and the codification of them will be accelerated through AI as well. Eventually you codify them into technology itself, and when they become codified into technology, that's when things become clear. With vibe coding and whatnot, we might eventually get there too. In terms of how I look at the problem space here: on the left we're usually fighting against technology; on the right we're fighting against people; but throughout the whole time we're also fighting against business processes, and business processes, again, are the most underinvested aspect of our business today.
Each of these mental models has its own complexity. If you'd seen this slide coming in, you'd say, "What the heck is this?" But hopefully you see how these different mental models merge together to give you a much more complete picture of the challenges we're dealing with.
Now let me give you a whole different mental model, one that will also give us a sense of how to think about where we're going with AI in general. There's a great book by Max Bennett, A Brief History of Intelligence, in which he describes five major breakthroughs in brain evolution. The first is steering: you end up with bilateral systems that make a decision to go left or right, instead of an amoeba that just floats. The next breakthrough is reinforcement learning: hey, I turned right and got a positive reward, so I'll keep turning right. The next is simulating. This is where the most advanced AI systems are today: the situation where you ask, hmm, what if I go left? What if I go right? What if I go straight? The stage after that is called mentalizing, and this is a hot area of AI research today; it's also squarely where the study of AI safety comes in. How do we ensure that AI systems are aligned with our values? Mentalizing is also called theory of mind: trying to understand the intent of others. What are you trying to do, and can I infer that intent and then adhere to it? That's exactly what we want machines to do. I tell a machine, hey, get me to the airport as quickly as possible. I want the machine to infer that I don't want to break any laws, that I'd like to still be alive when I get there, and that I'd like to not be nauseated. There may be various other conditions, but they're all inferred. I need the machine to understand my intent and not run people over and kill me along the way. The last stage, interestingly enough, is language. It seems odd that language comes at the end, but it's interesting: more than likely, whatever language the AI systems come up with at the very end is probably not a language we're going to understand. The point, though, is that the language breakthrough comes at the end.
Now I want to focus on two of these, because there's another mental model that's relevant here. First of all, while the most advanced systems are at simulating, most of our systems, including large language models, are reinforcement-learning systems.
And reinforcement learning has these shortfalls: it's prone to bias, it's overconfident, and it's unexplainable. These are all things we see with reinforcement-learning systems. So if you're trying to get an LLM to explain itself, you're fighting an inherent flaw. It is not a fixable thing, at least not with reinforcement-learning systems. But there are controls we can put in, and the controls are things like checklists, or, really, simulation: the control for reinforcement learning is the next stage, simulating.
What's really fascinating on top of all this is that it maps to another mental model: System 1 versus System 2. I mentioned Daniel Kahneman earlier; one of his claims to fame is figuring out that our brains operate in two different modes, System 1 and System 2. LLMs operate in System 1. And by the way, we as people think that we're rational, that we make rational decisions; that's System 2. Except we don't. Most of the time we operate in System 1, making all these irrational decisions, while believing that we are System 2 thinkers and that all you jokers are System 1 thinkers. "That's why you're all irrational; I'm the only rational person here." We all think that about everyone else. But that's the problem we have with LLMs: we have a whole bunch of System 1 machinery, and we're saying, "Hey, System 1 LLM, explain yourself." "The only thing I did was generate tokens based on a probability distribution; there is no explanation outside of that." What you need is a System 2 type of construct, and the System 2 construct is essentially things like mental models. What is a mental model but a way to simulate the world and ask: does this fit? Not all mental models are right; some are wrong. But the more mental models you have, the more you operate like a System 2 thinker. If there's one thing you take away from this whole talk: mental models are amazing for being System 2 thinkers, and for having these systems operate in a way that helps you really understand what's right and what's wrong.
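A toy sketch of using a mental model as a System 2 check on System 1 output, using the five-paragraph-essay test that comes up again in the Q&A below; the function name and threshold are illustrative:

```python
# Toy System 2 check layered on System 1 output: ask for a structure (a
# mental model), then validate the response against it. A failed check
# doesn't explain the model's "reasoning"; it only tells us the System 1
# output doesn't fit the structure we imposed.

def fits_essay_model(response: str, expected_paragraphs: int = 5) -> bool:
    paragraphs = [p for p in response.split("\n\n") if p.strip()]
    return len(paragraphs) == expected_paragraphs

draft = "Intro.\n\nPoint one.\n\nPoint two.\n\nConclusion."
if not fits_essay_model(draft):
    print("Reject: 4 paragraphs, not 5. Send it back for another pass.")
```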
Now, here's what's interesting. I said System 2 is a control for System 1. System 1 is actually a control for System 0. What's System 0? Go back to the brain-evolution stages: steering is what I'm calling System 0 (that term doesn't exist; I'm just using it), reinforcement learning is System 1, simulating is System 2, and mentalizing is System 3.
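As a tiny sketch, the ladder reads like this (the mapping of Bennett's stages onto system numbers is the talk's; the code is illustrative):

```python
# Bennett's breakthroughs mapped onto the system-N labels: each stage is
# the control for the one below it.

STAGES = ["steering", "reinforcement_learning", "simulating", "mentalizing"]
#           system 0       system 1                system 2      system 3

def control_for(stage: str) -> str:
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else "the open question"

print(control_for("reinforcement_learning"))  # -> simulating
print(control_for("simulating"))              # -> mentalizing
print(control_for("mentalizing"))             # -> the open question
```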
That's the challenge we have when it comes to controlling AI systems. At this point, our most advanced systems are at simulating, and the challenge is how to align these systems with our interests. The answer is that we have to figure out what System 3 is, and in a sense we already know: mentalizing. I mentioned earlier that mentalizing and theory of mind are where the forefront of AI research is, and it's where we're trying to define what's safe. But whose definition are you operating on? China's definition? Trump's definition? Zuckerberg's? Musk's? Whose definition of "safe" are we talking about? I think this is a challenge we see not just with AI systems; we see it in life in general, within the United States and other countries, and the way we struggle through it is politics. Who's right, the right side or the left side, red or blue? Unfortunately, we don't have a clear definition of "right" when it comes to those political discussions.
So the founders of the country said: we know that people are fallible. People are going to disagree about what the right thing to do is, and we need a control for that. The control they created is called the Constitution. The control for politics is the Constitution, and the Constitution establishes three branches of government that in theory are supposed to be equal, separate, and well balanced. In theory. What's also interesting is a quote attributed to Patrick Henry: the Constitution is not an instrument for the government to restrain the people; it is an instrument for the people to restrain the government. Now swap in the machine: the constitution is not an instrument for the machine to restrain the people; it is an instrument for the people to restrain the machine. Anthropic has something they call constitutional AI, and you can read about it, but the idea of a constitution is what I think will help us understand what to do with this next stage of AI. So again, it's a mental model: the constitution, or rather the idea of one, and moreover the structure of three equal but separate bodies is what we're looking for. Now, most organizations don't have legislative, executive, and judicial branches.
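Anthropic's published constitutional AI recipe, roughly, has a model critique and revise its own drafts against written principles. A schematic sketch, where `llm` stands in for any completion call and the principles are invented for illustration; nothing here is Anthropic's actual API or prompt set:

```python
# Schematic sketch of the constitutional-AI idea: critique a draft against
# written principles, then revise. `llm` is a stand-in for any completion
# call; the principles are invented for illustration.

CONSTITUTION = [
    "Do not reveal information the requester has no need to know.",
    "Do not take irreversible actions without human approval.",
]

def constitutional_pass(llm, prompt):
    draft = llm(prompt)
    for principle in CONSTITUTION:
        verdict = llm(f"Does this response violate the principle "
                      f"'{principle}'? Answer yes or no.\n\n{draft}")
        if verdict.strip().lower().startswith("yes"):
            draft = llm(f"Rewrite the response so it does not violate "
                        f"'{principle}'.\n\n{draft}")
    return draft
```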
So I was trying to find a similar sort of model within organizations, and I found one: Westrum's typology of organizational culture. Ron Westrum wrote about this eleven or twelve years ago. He identified three different organizational typologies and said every organization falls into one of them; I actually think every organization is a combination of all three. What are the three? The first is called pathological: information is hidden, messengers are shot, ideas are crushed. What a wonderful place to work, right?
>> Bureau of Labor Statistics.
>> The next is called bureaucratic. Most people understand what bureaucratic means: it generally slows things down, and nobody really likes bureaucracy, but we tolerate it. What was really fascinating in his construct is that the third typology he called generative. I thought, wow, interesting; this was twelve years ago, and he's saying we have generative organizations. So I looked at this and thought: we have generative AI. What exactly is a bureaucratic AI, and what exactly is a pathological AI? As I mentioned, every organization is actually a combination of all three, and if you have an organization that's out of balance, say Enron, all generative and only generative, you end up with the kind of chaos we saw there. What do we have in AI today? It seems like all we have is generative.
But what exactly is a bureaucratic AI? In my view, a bureaucratic AI slows the AI system down and gives us a chance to say, "Wait a second, hold on, let me see if this is the right answer." And a pathological AI? I mentioned that in a pathological culture, information is hidden. Remember what we're trying to do with overshared information? We actually want to hide certain things. So, I'm sorry if this disturbs you, but I think this is a security function. If I think about an organization, the generative side is the business side, the money-making side. The bureaucratic side is legal, HR, all that kind of stuff. Pathological? That's us. But a pathological AI, what is its job? Its job is not to terminate us; its job is to terminate the generative AI. It's to pull the kill switch on the generative AI. Again, it's all about balance. As much as we like to say we're business enablers, we're not. There is a legitimate reason to stop the business and say, "Hey, business, no, we can't go there." Effectively, that's what we want from a pathological AI: "Hey, generative AI system, don't go there. I have the ability to pull your kill switch, and I'm going to pull it." People may hate me for that, like they do in security, but that's essentially the role of a system like this. A balanced AI system, whether it's agentic or whatever we end up with in the future, is going to have a combination of all three. Today, all we have is generative, and that's a problem.
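A sketch of what a Westrum-balanced system could look like, with one agent per typology; all class and parameter names are invented for illustration:

```python
# Sketch of a Westrum-balanced AI: a generative agent proposes, a
# bureaucratic agent slows things down for review, and a pathological
# agent holds the kill switch. All names are invented for illustration.

class BalancedAI:
    def __init__(self, generate, review, kill_switch):
        self.generate = generate        # generative: the money-making side
        self.review = review            # bureaucratic: "wait, let me check"
        self.kill_switch = kill_switch  # pathological: can terminate the generative AI

    def run(self, task, retries=3):
        for _ in range(retries):
            proposal = self.generate(task)
            if self.kill_switch(proposal):  # security's role: stop, don't negotiate
                return None
            if self.review(proposal):       # slowed down, reviewed, approved
                return proposal
        return None
```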
So, multiple mental models. I shared the Cynefin model, for figuring out what you should do next if you're in chaos. The DIKW pyramid: we have a new layer of controls, a new layer of the OSI stack, so to speak. If you think of it as an AI OSI model, you have a new way of thinking about what controls we need. If you want to understand the risks of AI and LLMs and the controls for them, System 1 and System 2 thinking is such a perfect match. For agentic AI, the OODA loop is a great model for thinking through the implications of what agentic systems will do. And lastly, as we go into the future, as much as we eschew politics, it may become much more interesting in terms of who makes the decisions about what is safe and what the right balance of these different systems is.
Before I end, let me say this: I've shared a lot with you, and you'll hear a lot of people talk about different things throughout this week. How do you know what they're saying is correct? All I can offer you is these mental models; here's the API to help us communicate better. If what I say to you, or what somebody else says to you, doesn't fit the model, then either they're wrong or the model's wrong. It's an easy test. You'll hear, for example, around the problem of oversharing of content, people say, "Ah, go fix your data." And right there you think: wait a second, that doesn't fit the mental model we just talked about. So you now have a way to sift through all this knowledge, all this data, all this information you're getting, and a way to decipher what is actually true wisdom. Remember, the top of the pyramid is wisdom.
Grant me the wisdom to know the difference, and mental models, I think, are the key that unlocks that wisdom for us. So with that, thank you very much. [applause]
And I'll hang around for questions if you have any. Special prize if anyone notices anything interesting here.
>> No? Oh, yes: I've gotten older. Nobody notices anything. Like, squint. All right. Nice.
>> All right. So, you expect it to be way out there, and it's right here. In the Cyber Defense Matrix, you talked about moving from the chaotic to the clear, and that ultimately we want to get AI to that System 3 of mentalizing, of understanding what we want. But do we need more to go from mentalizing, knowing what we want, to choosing to do what we want, and then actually doing it? Because in psychology, people have to know the right thing to do, then they have to choose it, then they have to actually do it. Is it enough to get the model to mentalize?
>> Okay. So, everything you said can be done right here, at simulating: knowing what to do, choosing it. Mentalizing is the point at which we start judging other people. Mentalizing is the point where we say, "No, you're actually wrong." Before then, there was no sense of wrongness in the moral sense; before then it was just, it works, whatever it is. But as soon as you hit mentalizing, it's, "That's shameful." So the question of what is safe, what is right, what's okay: that's when you start putting human judgment against it, saying that what the machine did is not right, wrong in the moral sense. The trolley problem, hey, should I run over three people or pull a switch that runs over one person, is the same sort of rightness-and-wrongness question being posed there.
>> But there's getting the model to know, and this is, you were here for Matt's talk yesterday, right? There's getting the model to know what's right and wrong, that something is a lie. But is that enough to get it to not lie, or does it also have to want to not lie?
>> Whether you want it to lie or not, a System 1 reinforcement-learning system doesn't care, right? But how do we know an LLM is lying to us? We have a mental model. In fact, what you should instruct an LLM to do is to fit things within a certain model. Hey, write a five-paragraph essay for me. Well, if it generates a four-paragraph essay or a six-paragraph essay, I don't even need to read the rest; you know that it's wrong. The whole idea behind chain of thought is that it gives you the structure, the mental model. You should question the structure itself and ask whether it's the right mental model, but once you have the structure, you can validate whether the response that comes back fits within it. Do we have, I know the next talk is at 11, right? So we do have time, right?
>> Four more questions.
>> Cool. And again, this is Ground Truth, right? So I hope this fits the narrative of what Ground Truth is all about.
>> Where you had the things on System 1 and System 2, a couple of slides back: the control for System 1 was System 2. Where does that sit? Obviously a System 1 system doesn't have that kind of process. So do you have...
>> Yeah. So, let me see if I can describe it from our brains. Our System 2 brain is the prefrontal cortex, versus our amygdala. Physically, where does it sit? It sits in front of, layering on top of, the amygdala.
>> Okay.
>> And if you think about where these LLMs are going, the reasoning function is really operating on top of the LLM itself. Brain science is amazing because it tells you so much about how these LLMs work, and that's essentially the same construct that's happening. And if you don't have that layer, you can create your own layer right there: when you ask it to do chain of thought, or when you ask it to fit within a certain model, you're essentially creating your own layer on top of it.
>> So, a couple of years ago I had the honor of attending one of your training classes on your matrix, and I've taken it back not only to my organization but to my own community. We've added a bunch of extensions onto it, because I love how it makes a very clear one-pager that you can use to have a conversation with anybody in the organization. If you add on to that mappings to whatever your framework is, mappings to what norms are in play, mappings to what types of evidence, you get a really strong view of what you're doing. And the biggest thing I've seen, in my own organization and my peers', when we're dealing with AI: a lot of the less-than-desirable outcomes we're getting come from not having the intimacy, the "into-me-I-see," into what our processes, our procedures, our data, and our flawed assumptions are.
>> Well, going back to the notion of these mental models as a communications tool: the Cyber Defense Matrix has been extraordinarily useful for a lot of folks because it serves as a quick API to help us get on the same page, a literal page, on certain things. At the end of the day, there are other mental models for business processes; I'm not a business process person, so I don't know what they are, but we should probably figure out what those are. And as we discover them, well, what I hope you saw today was not just a single mental model but the bringing of these mental models together. They're powerful on their own, but they're even more powerful when you start bringing them together and discovering some amazing things. I think there's a huge opportunity here with agentic AI, but I'm not a process person, so I have no clue what the mental models are for that. So let's explore, and if there are people here who can think through that and come up with new ways to describe it, then we've created a new API for our brains, and, you know, we're all good for it. Any other questions? All right. Well, thanks for your time. [applause]