
Unknown: All right. Up next we have Sam Richman, "The Great Overcomplication."

Sam Richman: If you can't tell, this is going to be a rant. If anybody has not seen Dr. Strangelove, I would recommend seeing Dr. Strangelove. The last bit of that is maybe a little bit of overstatement. I don't hate technology, but I kind of do, and that's come from the time I've spent in the industry. So I'm a goofball, former fireman, current goofball. I've been around the industry a lot, mostly in DoD and federal use cases and customers, and I've been around in the vendor world for a while. Currently I'm at Red Hat as an associate principal architect on the aerospace and defense team, kind of building how new warfare architectures look, whether it's disaggregating large assets into smaller ones, and all the data communication and security around that. Very, very complicated architectures. I'm also on the zero trust community of practice at Red Hat, which means I need to think about how to secure all of these things together. And things are getting pretty complicated. So 25 years has put me about in the middle of this. I'm usually yelling at the cloud, literally, or grinding my gears. I look a lot like Peter Griffin here, more than I'm comfortable with, but I'm generally somewhere in the middle here. The reason for that is every time I'm looking at the problem of cybersecurity, I ask myself this question, and this question literally
keeps me up at night: do I feel that we as a whole are getting more or less secure? And take that however you want to take it, whether it's data privacy, whether it's exposures, whether it's architectures. And I can't get away from the fact that I feel like we're not heading in the right direction, even though we have a lot of people working in the industry, different people working in different areas, all doing the best we can. But honestly, I really feel like we're not in a great place. So this talk, I've given this a couple times; it's a version of a paper I published recently. The subject was, how do we mitigate cyber threats today and into the future? And I couldn't get this picture out of my head, like we're bailing out a sinking ship with a tiny little bucket. We keep doing the things that we're doing really, really well, but I keep feeling like we're getting punched in the face over and over again.

A few years ago, I put together a little getting-started-in-cybersecurity blog. A couple pages, didn't take too long. I did one recently; it took about 12 hours of brain dumping to put it out. Anyone who's interested, please scan it. It's on the internet. It's open, open, open. But our domain is getting much deeper and much broader, and it's really getting hard to secure things in the right way. And again, I feel like we keep getting punched in the face. The Change Healthcare breach, the post-mortem just came out: 100 million people affected, right? 2.7 billion records with social security numbers. It's already out there; the adversary just needs their databases refreshed every once in a while. So there you go. The Confidant Health breach, this one was really interesting. This is a five terabyte leak of patient therapy sessions: audio, video, transcripts. You couldn't possibly misuse that, right? It wasn't an exploit; it was an exposure, not a leak, but I guarantee it was scraped. I don't know how long it was out on the internet, but it was there, five terabytes of this.
All of this is happening a lot. That's inside of my head, you can't see it, but it's kind of a spiral thing here, and so we're not in a great place. And by the way, why don't we just do AI everywhere and make all this better? I'm an AI skeptic, if you can't guess that already. I think there are some great use cases for it, but I really think we're in an interesting place with this, because the democratization of AI is really good, but it's also really, really bad. That one, I used to be an EMT; I don't want patient care records being transcribed in the wrong way, for obvious reasons. I don't want laws being decided this way. That's a little bit hidden here, but essentially, an AI ruling under current law would make whatever rule it produces irreversible under that law. AI girlfriend site hacked, of course; that was inevitable. Couldn't possibly misuse that information.

So I just see us kind of layering this new, complex technology onto an already complex environment without really thinking about what we're doing and whether the complexity is worth what we're getting. Now, if we're in for-profit companies, we're driven by our bottom lines, driven by competitiveness. So certainly we're all very motivated to do this from a business perspective, but it's really complicated. And I think the important thing to realize is that it's so easy to add complexity now that we should really stop to think: is it worth doing? Is it making a world that we want to live in ourselves, where our data is being protected? The hunger of AI/ML for data is so vast that it's almost like forgiveness versus permission now, right? It's buried in the terms and conditions, buried somewhere where, oh, of course we can use your data for training, because you agreed to it. Well, yeah, 300 pages down I agreed to it, right? But I didn't. And so it's really important to think about that as we build new architectures, to understand what the complexity really means. Just because you API out to OpenAI, sure, you're not responsible for that, but you're inheriting its complexity simply by interacting with it. So it's getting really, really easy to do this, and I see us kind of doing the same thing over and over again. I feel like we're drowning in complexity, but we keep doing the same thing, like we didn't learn from Cambridge Analytica and Meta and all that stuff. We're just going to do more data scraping and analytics. We just keep doing the same thing and expecting a different outcome. We keep jumping head first into new technology, it's a very human thing to do, and then try to band-aid it later. We always add more. I almost see innovation
and solutioning never as a removal of something, but always as an addition of something, and we constantly add complexity upon complexity upon complexity, and that does not bode well for cybersecurity, right? We're all practitioners. We all suffer from cognitive overload. We have tooling, but ultimately these vulnerabilities get really, really hard to find, and they get very, very subtle the more technology you stack on top of itself. And ultimately, I think what it really comes down to is that we don't really stop to think, we being the collective industry and everyone as a whole, we don't stop to think whether the existence of a system warrants the level of complexity involved. If I add a feature to a system that improves the user experience by 0.01% but results in a 20% complexity increase in the system, is it worth it? Is it worth the effort to secure it? That was kind of the problem I was thinking about when I wrote this thing, and I started thinking about unnecessary complexity as actually technical debt. Because if we read the definition from Wikipedia, right, it's the implied cost of reworking something because the solution was expedient over long-term design. And if you go through all the examples they list there, it's all covered; there are a lot of reasons for it, right? Business pressures, I mentioned that, documentation, and whatever. But the one that stuck out was lack of developer or architect knowledge of elegant design. Elegant design means the system does what it does, and it's not terribly kludgy. It does what it does, and it's very, very elegant. And it takes time to do that, right? If you're a coder, coders can spend a whole day on a single line of code to make it beautiful and tight and elegant and secure. But it's really easy to add complexity now, and so we do that, and we end up with a system that has a lot of this technical debt in place. So what do we do about this? I'm a zero trust evangelist. I'm a big proponent of zero trust. Can zero trust help with this?
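One way to make that technical-debt framing tangible: every component you add can potentially interact with every component already there, so the number of digital interactions you would have to find and secure grows roughly quadratically with component count. A minimal sketch (the component counts below are made up for illustration):

```python
# Each pair of components is a potential digital interaction that a
# zero trust assessment would have to find, understand, and secure.
def pairwise_interactions(n: int) -> int:
    """Upper bound on point-to-point interactions among n components."""
    return n * (n - 1) // 2

# Illustrative (made-up) component counts as a system sprawls:
for n in (10, 50, 200):
    print(f"{n:>3} components -> {pairwise_interactions(n):>5} possible interactions")
# prints 45, 1225, and 19900 respectively
```

Going from 10 to 200 components is 20x the parts but roughly 440x the possible interactions; that gap is the debt.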
I'm assuming most of you have heard of zero trust, at least to some extent. Okay, can it help? Sort of, in my opinion. I kind of think zero trust is to security what DevOps is to software development. It's the best idea we have so far about how to design secure systems, the way DevOps is the best idea we've had so far about how to build software. It's a philosophical and architectural approach to designing and implementing systems to eliminate implicit trust in every digital interaction. So it's a people, process, and technology kind of approach to building systems. But if we're trying to define every digital interaction and eliminate implicit trust, and I've used that term twice now, I'm going to define it in excruciating detail in a minute, complexity flies in the face of that. If we have a system that has a lot of sprawl and is not well understood, how do you even find all the digital interactions, let alone secure them, let alone build in controls that let you control it in the way you need to? Implicit trust can go very deep and very broad, and we'll kind of go through that in a minute.

So, this is not a zero trust talk, but it's important to mention: we started out a little while back with NIST and DoD, and they kind of defined reference architectures and guides. 41 pages, 25 pages, 83 pages. Okay, great. Good stuff. When it comes to actually building this, they put out a recent mapping of the NIST 800-53 controls to zero trust principles. It was 383 pages, right? It makes sense; it's about the same length as 800-53, because you're taking all those controls and mapping zero trust onto them. That's a lot to do. In fact, I gave this talk a couple weeks ago and someone raised their hand and said, we're all exhausted. Everyone's throwing zero trust at us. Everyone's throwing these controls at us. Where do we start? And that's a very good question. And so you have thou-shalts, right? In the DoD: thou shalt do this and this and this, based on the maturity of where you are. I
know that's hard to read, but don't worry about that. Honestly, where you start is figuring out the protect surfaces that make sense. But the one thing that stuck with me is, how do we mitigate cyber threats into the future? And I couldn't help but think the answer is less technology, not more. I don't mean remove technology, but build tighter systems, smaller systems, more elegant systems, so you don't have to do all of that to everything. The less you have to work on, the better. And the tighter it is, the more efficient it is, and the more elegant it is, the easier it is to secure and the easier it is to understand. And again, with how easy it is to build technology into things and build complexity into things, we're really getting far afield with this challenge.

So when you do a zero trust assessment, the first thing you've got to do is understand the architecture and understand the digital interactions. And so you deconstruct an environment and deconstruct every interaction that exists. An interaction is a subject, whether it's a human or non-human, trying to touch some kind of resource: a network service, a piece of data, or whatever. And generally speaking, if you want to build enough context into making a really good decision, and a timely decision, you have to have more than one enforcement point and a lot of context around that to make that decision. And when you have a lot of extra complexity, that makes it even harder. Any one vendor is probably not going to be enough to do what you need to do. It's always going to be an ecosystem approach. That's what I do in my work with the zero trust SIG: kind of working through those ecosystem partnerships. But I've used the term implicit trust a few times, so let's make this really, really, really clear. I like using the airport example. I arrive at the airport, and I've got my boarding pass and my ID: authentication and authorization. There we go. I
don't just get on that plane, of course, right? No one gets on the plane with just that. First of all, if I'm on a no-fly list, I don't even get the boarding pass, obviously. Then I arrive at the gate agent with both authentication and authorization. If I partied too hardy, my journey ends there; I don't get on that plane. If I'm carrying a machete, I don't get on that plane. If I'm carrying a skull and throwing stars, I don't get on that plane. If I pass through all of that and I'm okay, but I start acting weird on the way to the gate, I don't get on that plane. If I go back out and back in, I don't just get a hey, you're good, go ahead. You go through all of that again, every single time. And finally, all of that's true, I get to the gate agent, but I partied at the Applebee's too much: I don't get on that plane. Even if I'm on that plane and I start behaving the fool, I get zip-tied and taken off that plane, and I don't get on planes again.

So, implicit trust. Note this is very timely, with lots of context all the way through, from the user, the person, the things that they're carrying. Translate that to IT: the user, the device they're using, everything to the moment of access. And even when you're accessing that resource, you're still not trusted. That's what eliminating implicit trust means in zero trust. And how do you do that when you have a very, very complicated system that you don't understand, or that has too much fluff or too much sprawl? Because you need to deconstruct the environment. You need to understand where your data is, because it's all about data, right? Systems don't matter. Applications don't really matter. What matters is the data and the people who are using it, and what that data means to the user. Is it their therapy session? Is it their social security number? That's what we're protecting. And so understanding where the data flows, where the pools are, where the lakes are, and then what paths it takes between them, what gaps exist in visibility and enforcement, and why they exist, is incredibly critical to zero trust. And how do you do that when the system is more complicated than you think it needs to be, or uses things outside of your control that you can't control? That's really the problem. So I think any zero trust assessment should involve some kind of mandatory technology culling as part of it. Find the things that don't matter, find the things you don't need, and remove them. Maybe it's stripping down an operating system. Maybe it's simplifying communication between two entities. Maybe it's removing a whole part of the system that no one uses anymore, but just happens to be there and
couldn't possibly be an attack surface, right? So I really think that kind of minimization, almost contraction, of the IT estate is important in any kind of zero trust assessment, or any kind of modern cybersecurity approach at all.

I like metaphors, and one metaphor I can't get out of my head is the concept of a state machine. A state machine means: what does the machine do, and what are its states? So think about an old-timey turnstile on a subway. It's locked or it's unlocked: two states. It transitions between those states based on a coin and pushing on the turnstile. Simple. Now, what's the state machine for something like this, a complex application? This is a pretty standard application DevSecOps pipeline, with public cloud and AI in it. This is pretty normal, right? You have developers building code, from their mind, from ChatGPT, from online sources, pushing it through a CI/CD pipeline, building an application and also building the server component behind it. Let's just say it evolved over time, organically. You've got containers, you've got virtual machines, you've got maybe some serverless in there, and you've got all of that abstracted with all the cool things that cloud does, because that's what cloud means. Cloud means abstraction of resources, whether it's storage, networking, data, authentication, all of that. And of course we're AI-everything now, so of course we've got to do some AI over here too. So I've got machine learning model training and model hosting infrastructure too. And I'm using data, maybe data that I control, and maybe some data that I don't control. And maybe I'm using models that I don't build myself. Maybe I just download one from Hugging Face, or I API out to Anthropic or OpenAI, right? That complexity is part of my solution.

So think state machine: what happens if some of the data that I don't control gets in here? How does that propagate through this entire thing? It affects the user experience. You can't see it down there, but the user app is down here, so the user is behaving differently, which influences the way this gets scaled. Maybe we're looking at the output of that machine learning model to build scale and capability, and then that moves potentially all the way through to the developer, who now starts writing things differently. I'm amazed things work as well as they do. I really, really am. It's a testament to our innovation and our capabilities. But the thing is, things can go really, really wrong, and the less there is to fix and the less there is to pay attention to, the better off we are. Because, again, this is almost an infinite state machine in my mind, and things go very wrong sometimes. Here's the
kicker: if you create all that manually, or if you click a few times in a cloud console, that complexity is still there. I created a high performance computing cluster in AWS with, like, three clicks. Easy for me, but it's there. Sure, the shared responsibility model means that it's not my responsibility, but it is my problem if it goes wrong. It sure as hell was their problem when, and I'm not picking on anyone, but when Google deleted a $100 billion company's cloud estate irreversibly. Very rare, but it happened; they can't say it didn't happen. Hopefully they fixed whatever caused it. But the thing is, when you abstract everything, you start to understand how that's possible. Because if I have a data center, I have servers, I have storage, I kind of know where my stuff is. But when cloud is abstracted, these are pointers to everything. Object storage is like a scattershot of things, and you get your file back, and you've got all your networking and storage and compute, and it's all just pointers to things. And when it goes away, it goes away. It is gone. And so again, it's your problem, even though it's not your responsibility. And it is really easy to add complexity into a system this way, so it's important, before you do that, to really think about the implications. And they did: they had third-party backups, and that's how they recovered. If they hadn't had that, I don't know what would have happened.

CrowdStrike, very different scenario, but what I noticed, what was funny, is that after it happened, a lot of articles came out, even in normal magazines and publications, about, hey, this whole IT thing is way more complicated than everyone kind of thinks it is, right? One vendor, a big vendor, but one vendor made one change to one file that touched the Windows kernel wrong, and the world burned. A very, very simple thing. But again, how we architect systems matters, and if we can remove complexity that doesn't make any sense or that isn't adding value, that helps our situation.

Let's not forget about hardware in this; we've talked mostly about software. I know that's extremely hard to read, but basically this is from a paper I found recently, which maps current DoD weapon systems on the left, to Defense Logistics Agency authorized suppliers in the middle, to Chinese semiconductor flow-downs on the right. It's only the nuclear triad and the Patriot missile system, so it's not that big a deal, right? But again, hardware matters. These attacks from China are real, and we, especially in my domain, have to fight this on a daily basis. So hardware matters from a security perspective too, and if you can simplify your supply chains, that will help. Read the book Chip War, if you haven't read it; it really
kind of explains the whole global supply chain of hardware and the challenges around that. So that's really, really important. From an AI perspective, I should mention, and I'll slightly censor the title of this: if you want to hear a rant from a person who's as grumpy at AI as me, but who is also a data scientist, look up a blog post called "I Will Effing Piledrive You If You Mention AI Again." I'm not joking, that is the name of the blog. Look it up. I wish I had his knowledge, because it's like the perfect rant from a guy who actually knows what he's talking about.

And so, from all of this, I'm running a little bit quick, so if anybody has questions or wants to tell me I'm wrong, please do. But from a call-to-action perspective, again, it's hard to generalize this problem because it is so broad, and it's more of a philosophical problem in my mind. But if we can all do our part wherever we operate and adopt an "overcomplexity is actually technical debt" mindset, I think we can help ourselves out, even if it slows things down and even if it limits technology deployment. Now, that's really hard in for-profit companies and competitive environments, right? It's really hard to do, because we all have to answer to Wall Street and all of that. So it's hard. But if we can each play our little part, maybe we can all make things better, kind of from a grassroots perspective.

So if we consider complexity to be a tangible cybersecurity liability, like something real, like a resource, I think that helps. I saw the movie Interstellar. When they were doing a rescue, they had to calculate fuel, food, water, but they also had to calculate time as a tangible resource. And in some ways that was a total mind change for them: okay, five minutes on the planet is two years up here. They had to think about that. And I think complexity, if we think of it as a tangible liability, can maybe help us realize its impact in a negative way. Practice elegant systems design as much as possible to build your reliability and security inherently into the systems. And I kind of thought about this like, well, this is sort of the Marie Kondo approach to cybersecurity, right? We pile everything together, ask what sparks value, what sparks security, and anything else we kind of just throw away. So as we do these zero trust assessments, and as we do these cybersecurity re-architectures: is the system even necessary, first of all? And if it is, what technology can be removed from that system to maintain functionality but make the attack surfaces and protect surfaces simpler, to make our
jobs easier and more manageable? Is the system even worth the complexity needed to secure it? What's the most elegant way to provide those capabilities? As I said, are modernizations to a system worth the risks they introduce? So, I work for Red Hat. We do a lot of Kubernetes; we have a lot of container solutions. It's not the answer for everyone. I see people jumping head first into containerization and Kubernetes, but I've seen talks about, like, the rise of the agile monolith, right? If your solution works as a monolithic application, build it that way, but build it in a way that it can be deconstructed later: defined data boundaries in your code, kind of object-oriented programming, which we all should be doing now anyway. Then we can build in the capability to disaggregate it if we need to. But do you need to add Kubernetes to it right now, or five years from now? Maybe not, and that adds an entire new level of complexity, right? So do you actually need it, or is it just the new shiny thing that everyone thinks they ought to do?

And again, state machine, I can't get away from that concept: is the system understood well enough for its criticality level? Is this a safety-critical system? Now, there are some standards around that, but is it safety-critical? Is it an ICS environment? Is it a manufacturing line? Like, do you understand it well enough that it's not going to kill people? Do you understand the medical transcription system well enough that it's not going to kill people? What is the safety of that system? There's a whole concept of safety around machine learning models. What does "safe" mean for one? There's a whole discipline forming around that. Safety for that might mean, again, as an EMT, that I have more false positives than false negatives, so at the very least I'm going to start looking for a problem in a patient that doesn't exist. I'd rather do that than not look for something that does exist. Maybe that's what safety means for that system; for other systems, it'll very much vary. But again, another level of complexity on top of complexity on top of complexity, on top of complexity. I just kind of see that piling up. Maybe I'm old and tired, but I just find it exhausting, and I get very discouraged, even though I keep fighting that good fight. So if you agree with me, please find me, tell me I'm right. If you disagree with me, please find me, tell me I'm wrong. But thank you for the time, and please watch Dr. Strangelove if you haven't seen it. Thank you.

[Applause]