
Okay. Yeah, thank you very much for coming. I'm Ailan, a director of research at Cato Networks, and I'm very excited to talk with you about what we call the contextual agentic garbage collector. You can think of it as a smart sweeper that helps you detect misconfigurations before attackers even get a chance to exploit them. We'll dive into how we identify these misconfigurations using the context around them, and we'll see it inside critical systems like a firewall. We'll start with the problem domain, then move into the detection layer, and finally introduce the entire agentic framework.

So, a little bit about us. Cato is a company in the SASE space, the convergence of networking and security. Our product takes all of a customer's security and networking infrastructure and moves it into a global private cloud, with full inspection capabilities over the network. All of it is managed by us, and that gives our customers the ability to enjoy a single unified platform instead of juggling multiple point solutions across all of their sites.

So let's talk a bit about the problem space. According to Gartner, by 2025, 50% of all cyber security incidents will stem either from lack of talent or from human error. And this isn't some distant prediction we're talking about here. This is already happening today. And if that's not concerning enough, add to it the almost 3.5 million unfilled cyber security roles across IT, CISO, AppSec and security engineering. The result: the pressure on IT and security teams is skyrocketing, and this pressure both explains and fuels the surge we see in the number of security incidents in the industry. So when it comes to automation and AI, this is no longer an option. We just don't have the amount of manpower we need. It becomes a necessity, a true business need, to actually handle all the gaps that have been building up for years, right?
So, our users on the platform, like the IT managers and CISOs we referred to earlier, define the company policy across networking and security through the policies they create. A policy is essentially a set of rules, with the flexibility to define multiple configuration dimensions on each rule. And while policies are impactful and flexible, as flexible as they are, they become a truly critical part of the system, where the margin for error is significant. So it's really necessary to detect and handle all those misconfigured rules. We decided to bring together two worlds: the world of policies and the world of AI. The AI capabilities allowed us to build an engine that continuously monitors the user configuration, and from there the convergence of these two worlds really opens the door to some powerful new insights: from detecting contextually misconfigured rules, a whole new set of security issues that live in the semantic space, all the way to suggesting optimizations, not only at the individual rule level but also at the level of the entire policy, which can sometimes be comprised of thousands of rules. In this session we'll be focusing on the first type of insight: detecting contextually misconfigured rules. So when we take a step back and take
a look at the bigger picture, it's clear that security teams need help. That's why an AI-driven engine that proactively and continuously monitors the user configuration is needed, to ultimately enhance the customer's security posture.

So what does a policy mechanism look like? At its core, it's a list of rules where the order matters, and every rule defines the desired behavior. Let's take a look at a firewall rule, for example. You can see the rule name, "Allow remote connection Ireland"; the rule description, "Following requests by the finance team for due diligence"; and a list of predicates like source, device, app category and so on that define the exact behavior.

So if a policy is comprised of thousands of rules, what does this AI engine actually do? It applies contextual misconfiguration detection by analyzing rule sentiment, the free text in the rule rather than just its configuration. It can examine elements such as the rule name and description, tickets associated with the rule, contracts, related pages, any custom customer documents, and it essentially takes a lot of the burden off the customer's shoulders, a true force multiplier against the lack of resources most of us deal with. This semantic, behavioral engine we've developed introduced a whole new set of security insights, and in the next slides we'll dive into some examples: temporary rules, testing rules and expired rules.

There are a few other types of insights that are out of scope for today's session but are also worth mentioning. The first is contradicting rules, rules that essentially contradict each other and cause misalignment across the entire policy. As we said earlier, the order matters, and if we have one rule at the top and another at the bottom that contradict each other, we have misalignment in the policy. The second is mismatch rules, where a rule is mistakenly configured with the wrong action: in the description I said I want to block a certain application, but in the action I chose allow, so the rule contradicts its own description.
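To make the rule structure concrete, here is a minimal sketch of a rule as the engine might see it, plus a naive mismatch check of the kind just described. The field names and the `FirewallRule` class are illustrative only, not Cato's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative rule shape: name/description are the free text the engine
# analyzes semantically; predicates hold the structured configuration.
@dataclass
class FirewallRule:
    name: str
    description: str
    action: str                                     # "allow" or "block"
    predicates: dict = field(default_factory=dict)  # source, device, app category, ...

rule = FirewallRule(
    name="Allow remote connection Ireland",
    description="Following requests by the finance team for due diligence",
    action="allow",
    predicates={"source": "Ireland", "app_category": "Remote Access"},
)

# A deliberately naive mismatch check: the description says "block",
# but the configured action allows.
def looks_mismatched(r: FirewallRule) -> bool:
    return "block" in r.description.lower() and r.action == "allow"

print(looks_mismatched(rule))  # False for this rule
```

The real detection is semantic rather than keyword-based, but even this toy check shows why the free text and the configuration have to be read together.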
The third is over-permissive rules, which are essentially firewall rules configured with too broad a scope, unnecessarily increasing the attack surface.

Right, so let's go into the details and see some examples of the insights. The first is temporary rules. These are rules that were meant as short-term solutions, designed to address immediate, often business-related needs. Let's see the examples. The first one we have here, I think the text is small for you, so I'll read it out: a rule named "allow unlimited access for R&D", whose description says it is required temporarily for the 2025 hackathon event. Again, this is unlimited access, so if the hackathon event is over, I bet we should remove that unlimited access rule. Second in line, we have a rule that says "temporary: allow inbound connection for RDP", as requested by some ticket, for a PoC phase and app access for a third-party contractor. So if the PoC phase has ended, think about how dangerous it is to leave an inbound connection open from the internet to RDP. As we all know, there have been a lot of exploits and vulnerabilities for RDP in the past few years, so leaving it as is after the PoC has ended is really leaving a door open for no reason. And if the third-party contractor doesn't work for the company anymore, why do they still need access? Right? So these are temporary rules. There is another rule that was meant to block a certain geography due to a global event, and another rule at the bottom that was created ad hoc to give one user elevated access, even though it is against the company policy. So as you can see, the detection can range from simple semantic cues, like spotting the words "tmp" or "temp" in the rule name, to more complex context-based detections, like third-party contractors or ad hoc rules created for a specific user.
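The gap between the two detection styles can be sketched in a few lines. The regex and the prompt wording below are ours, for illustration, not the production detector:

```python
import re

# Simple semantic cue: 'tmp'/'temp'/'temporary' in the rule name is easy
# to catch with a plain regular expression.
TEMP_CUE = re.compile(r"\b(temp|tmp|temporary)\b", re.IGNORECASE)

def has_temporary_cue(rule_name: str) -> bool:
    return bool(TEMP_CUE.search(rule_name))

print(has_temporary_cue("temp - allow inbound RDP"))        # True
print(has_temporary_cue("Allow unlimited access for R&D"))  # False: no cue word

# Context-based detection has no keyword to anchor on; the hackathon rule
# above only reveals itself through its description. This is the kind of
# question the detection agent hands to an LLM instead:
PROMPT = """You review firewall rules. Given the rule name and description,
answer YES if the rule was meant to be temporary, otherwise NO.
Name: {name}
Description: {description}"""
```

The first rule trips the regex; the second one, which is just as temporary, can only be caught by understanding the text around it.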
The second example is testing rules, which are slightly different from the temporary rules. These are essentially rules that were explicitly created for validation, debugging, or experimenting with a specific feature or scenario. For the first example, we can see a rule that says "troubleshooting network issue, applied during a debugging phase". Now, we all know how generous we are with our non-restrictive approach while debugging. But seriously, if we're done with the debugging phase, there's no need for this over-permissive rule anymore. The second one is a testing rule that says "allow any" to see the potential impact. Again, allow any: if it's not needed anymore, it's really dangerous to leave it out there; it should be removed ASAP. The third one here says "IoT segment, used to quickly check whether the latest communication issue between IP cameras and DVRs was resolved". This is a nice example, because as we said earlier, there isn't a single semantic cue word here that we could catch with some regular expression; we really need to understand the context of the sentence to understand that this is a testing rule. And inherently, this is all multilingual.
Depending on the LLM we're using, we have seen customers use many different languages. For example, in Chinese: "old domain controller assessment verification"; in French: "dummy demo for the new RDS feature"; and in Spanish: "evaluate access to forbidden AI applications". These are testing rules in different languages, and since it uses an LLM, it picks them up very quickly.

The third one is expired rules. This is, I think, one of the most unique ones. These are rules that were implemented for a specific need and have reached their intended expiration or cutoff date. For example, we can see a rule that allows access to TikTok due to a marketing course that ends on a specific date. We can see two rules with an expiry date embedded in their rule name, "ESB 13 of May", and these were found among thousands of other rules; a very unique convention. And these two down here are patching efforts that must be completed by a certain date, and troubleshooting of a printing issue that can also be removed after a certain date. These dates come in different formats. And on top of that, if we're not confident enough about those detections, we can actually get more context by using a tool to connect to, let's say, the customer's ticketing system, to understand whether the patching efforts were actually done or not; if we have the right tool, it's essentially possible.
So what do all of these AI insights have in common? They all drive improvement in two distinct metrics. The first is detection quality. Again, we have seen rules here that were hard, if not impossible, to find within hundreds or thousands of rules in a policy using standard heuristic-based checks. And remember the stat we cited at the beginning: 50% of cyber security incidents occur due to this kind of issue, right? Misconfigured rules that leave the door open. The second is detection speed, the mean time to detect. Let's say we do spend the resources and try to find these manually: they may be found during an audit once a year, or at best once a quarter, or worse, only after the breach.

So up until now we've gone over the very basics of what we need to detect. Let's talk a bit about the building blocks of the agentic framework, and see the agents we have under the hood to make this all happen. When building a multi-agent architecture, there are different types of architectures you can choose from: the supervisor approach, a single agent with tools, many different types. We chose the supervisor approach. The supervisor is generally an agent that's responsible for all the other agents and the interactions between them; it's basically a traffic controller. It decides which agent should be called next, and so on. It's also responsible for triggering the analysis process, which can be scheduled or triggered by detecting a change in the customer configuration. It can perform account lookups, like understanding whether the account is active and which policies are enabled; as we said earlier, we cover a wide range of policies. And it's responsible for agent management: assigning a task to a specific agent, aborting or redoing an agent task, asking for a human-in-the-loop verdict if needed, and storing or publishing the insights into production for the customer to see and analyze.
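The supervisor pattern just described can be sketched in a few lines. The class and the callables here are illustrative stand-ins, not the production framework:

```python
# A minimal sketch of the supervisor pattern: one agent routes work to the
# others and collects their results before handing them to the judge.
class Supervisor:
    def __init__(self, retriever, detectors, judge):
        self.retriever = retriever    # data retrieval agent
        self.detectors = detectors    # list of detection agents
        self.judge = judge            # judge agent

    def run(self, account_id: str):
        # Triggered on a schedule or on a detected configuration change.
        policy = self.retriever(account_id)
        insights = []
        for detect in self.detectors:
            insights.extend(detect(policy))
        # The judge filters, de-conflicts and prioritizes before publishing.
        return self.judge(insights)

sup = Supervisor(
    retriever=lambda acc: ["rule-1", "rule-2"],
    detectors=[lambda p: [("temporary", "rule-1")],
               lambda p: [("expired", "rule-2")]],
    judge=lambda ins: sorted(ins),
)
print(sup.run("acct-42"))  # [('expired', 'rule-2'), ('temporary', 'rule-1')]
```

In the real system each callable is an LLM-backed agent with tools; the supervisor's job is only the routing, retries and final publishing.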
Next in line, we have the data retrieval agent. We need the policy itself, and we need to process the rules, so this agent's goal is to retrieve all the available raw data for a given policy. It fetches the data with our public API; we use GraphQL, so it makes GraphQL queries for the policy data itself, but if you have an API to any other system, you can use whatever you want. It also checks for available enrichments from third-party sources like ServiceNow or Zendesk, where all the customer interactions are saved, using a dedicated tool, and it uses an LLM to distill the relevant points from long text, generate summaries, and perform some named-entity recognition.
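As a rough sketch of that GraphQL fetch: the query shape and field names below are hypothetical, not the actual public API schema, but they show what a policy-rules request looks like on the wire:

```python
import json

# Hypothetical query: GraphQL requests are just JSON bodies carrying a
# query string and a variables object.
POLICY_QUERY = """
query PolicyRules($accountId: ID!) {
  policy(accountId: $accountId) {
    rules { id name description action }
  }
}
"""

def build_request(account_id: str) -> str:
    return json.dumps({"query": POLICY_QUERY,
                       "variables": {"accountId": account_id}})

body = build_request("acct-42")
print("accountId" in body)  # True
```

The agent would POST this body to the API endpoint and hand the returned rules to the detection agents.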
Next in line, we have the temporary rules detection agent. Its goal is to detect temporary rules within a policy; we've seen what those are. It gets the raw policy data from the data retrieval agent and then uses Anthropic Claude 3.5 Sonnet on Amazon Bedrock as its LLM. In terms of tools, we made it explicit here, so you can see the LLM in use and the tools the agent has. This agent is capable of investigating further, confirming or dismissing low-confidence insights by querying internal systems such as contract management systems like DocuSign, or ticketing systems like Jira.

Next in line, we have the expired rules detection agent. This agent is responsible for detecting expired or nearly expired rules. It receives the data from the data retrieval agent and also uses Anthropic Claude 3.5 Sonnet on Amazon Bedrock. And one thing I have to say about LLMs and time comparisons: as hard as it is to believe, there are some things that LLMs find hard to do. For instance, counting; LLMs can't count. They can't do comparisons, and they can't really do time operations. This is why we equipped this agent with two tools. One applies certain time functions, like what is the current time and what is the time zone, and is able to do comparisons; the second is an option to resolve holidays and events into specific dates. Within certain rules, we sometimes see that customers use words like Black Friday, or the Super Bowl, or Thanksgiving, names that we need to resolve into dates. With the right tool, we can actually put them on a timeline and understand whether the date has passed or not. That covers the expired rules detection agent.
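Those two tools can be sketched as ordinary functions the LLM calls instead of doing date math itself. The event table and dates below are made up for illustration:

```python
from datetime import datetime, timezone

# Illustrative event resolver: maps named events to concrete dates so the
# LLM never has to reason about calendars itself.
EVENT_DATES = {
    "black friday": datetime(2025, 11, 28, tzinfo=timezone.utc),
    "thanksgiving": datetime(2025, 11, 27, tzinfo=timezone.utc),
}

def resolve_event(name: str) -> datetime:
    return EVENT_DATES[name.lower()]

def has_passed(deadline: datetime, now=None) -> bool:
    # The comparison happens in code, not in the LLM.
    now = now or datetime.now(timezone.utc)
    return now > deadline

probe = datetime(2025, 12, 1, tzinfo=timezone.utc)
print(has_passed(resolve_event("Black Friday"), now=probe))  # True
```

With these two calls, a rule mentioning "valid until Black Friday" lands on a timeline, and the expired-rule check becomes a deterministic comparison.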
Moving on to the last one: the judge agent. This one decides which insights are considered relevant and high-impact. It receives all the relevant insights from the detection agents and is responsible for deciding which ones should be surfaced and which ones should be dismissed. It also uses Claude 3.5, and it's capable of performing noise reduction to tackle high-ambiguity cases, and of judging in case of insight conflicts: if we have insights from different detection agents that contradict each other, it can make a decision on that. It can also prioritize the insights themselves, deciding that one insight is more important than another, and so on.
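A deterministic approximation of that judging step might look like the following; the confidence threshold, severity scale and field names are all illustrative assumptions, since in practice this reasoning is done by the LLM:

```python
# Sketch of the judge: drop low-confidence noise, resolve conflicts between
# detection agents on the same rule, and rank whatever is left.
SEVERITY = {"high": 3, "medium": 2, "low": 1}

def judge(insights, min_confidence=0.6):
    kept = [i for i in insights if i["confidence"] >= min_confidence]
    # Conflict resolution: keep only the highest-confidence insight per rule.
    best = {}
    for i in kept:
        cur = best.get(i["rule_id"])
        if cur is None or i["confidence"] > cur["confidence"]:
            best[i["rule_id"]] = i
    # Prioritization: most severe, then most confident, first.
    return sorted(best.values(),
                  key=lambda i: (SEVERITY[i["impact"]], i["confidence"]),
                  reverse=True)

out = judge([
    {"rule_id": "r1", "type": "temporary", "impact": "high", "confidence": 0.9},
    {"rule_id": "r1", "type": "expired", "impact": "medium", "confidence": 0.7},
    {"rule_id": "r2", "type": "testing", "impact": "low", "confidence": 0.4},
])
print([i["type"] for i in out])  # ['temporary']
```

Here the low-confidence testing insight is filtered out, and of the two conflicting insights on the same rule, only the stronger one is surfaced.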
So let's connect them all together. It all starts with the supervisor agent, which is responsible for initiating the process; as we've said, it can be scheduled or based on detecting a change in the configuration. Next, it initiates the data retrieval agent, which is responsible for fetching the raw policy data; it gathers all the relevant information and returns it to the orchestrator. The orchestrator is then responsible for initiating the detection agents; in this case, it chooses to initiate all of them, meaning the temporary, contradicting and expired rule detection agents. Once they are done, they pass their results back to the orchestrator, and from there the supervisor initiates the judge agent, which takes all of the found insights and performs its conflict resolution and final prioritization. Once the judge's review is done, the final insights go back to the supervisor agent, which chooses which ones should be propagated to production for the customer to analyze.

So, from a problem to a solution: we've seen how misconfigurations can cause risks and security issues, and how AI can help solve them. I think this session is just the beginning of a smarter, more proactive approach to security. Thank you. [applause]
Any questions? I'll be happy to chat. If not... yeah?
>> [Audience] On the temporary rules, the impact was all medium or high; in the examples you gave, was that a limitation somewhere?
>> No, I think it depends on the context.
>> [Audience] But every single one of them was actually high.
>> It's not necessarily going to be like that; it depends on the context. If, for instance, one of the temporary rules uses a very sensitive protocol, it might get a higher impact.
>> [Audience] Right. I've got a question: you said you could integrate with ticketing systems. What sort of information does it look for in a ticket?
>> So, it can understand that, for instance, a rule has a certain ticket ID inside. Using a tool, like an MCP connection to a ticketing system, it would pick up the exact ticket that was referenced by the rule, and if that ticket is, for instance, in a done status, it can understand that the rule itself is no longer needed, because the effort was completed. Anything else? Okay, thank you very much. [applause]