
have to hold it this way. Yeah.
Thank you. Right. Thank you everybody for coming, and thanks for your time. I'm happy to be here to present a tool called Polar Espresso, which pulls shots of cloud live response and advanced analysis. We'll get back to the name in a bit. My name is Arukshini. I'm a detection engineer at Permiso Security, a US-based company focused on universal identity detection and protection, and on uncovering evil across all cloud integrations. I'm from Kosova, a country in the Balkans, in Europe. There's a picture of me drinking coffee, or espresso, with the Yeti character, and that's part of where the tool's name comes from: I like drinking espressos a lot, and it fits the tool and its purpose.

So let's dive into the agenda. We'll have an introduction to the topic we're covering today. Then we'll dive into the problem, which is different vendors, different logs, different hunting, and definitely different IOCs, indicators of compromise. That's a huge problem nowadays across all cloud environments, in threat hunting and every kind of defensive work, however we choose to approach it. Then we'll talk about the solution, which of course is the tool I'm presenting today, Polar Espresso, and at the end we'll see a tool demo diving into what the tool does and the features it currently has.

Let's begin with the threat actors' bounty, which is different logging. We can call them attackers, we can call them adversaries, we can call them whatever we want, but in essence they're all the same thing: a bad person doing bad things, or a person with good tech skills using them in a
bad way. That's kind of similar to cloud logging, which nowadays has started shifting toward vendor logging, and that's a really big problem and the topic of today: how we deal with it to make it easier for defenders to go through different cloud logs, perform threat hunting, incident response, documentation, and all the other processes they need, and how it affects detection engineering.

Cloud logging, as we know, is crucial. It's our eyes on cloud infrastructure, our visibility into what's happening there, be it AWS, GCP, Azure, Okta, or whatever cloud integration we use. Logging is the most crucial part; it's the only chance we have for visibility across our identities and environments, and the only chance to catch an attacker or find malicious activity performed by an identity. Even if it's not an attacker, it can be an insider performing espionage for another company, or running whatever other activity they might have going on.

Log diversity is a huge problem nowadays, since we have all those cloud vendors, as I mentioned, and each and every one of them has its own way of logging things. By logging things I mean having different field names and different approaches to how they provide visibility into actions and activity in the cloud. That affects threat hunting and incident response, which matter for every company using cloud resources, and it affects detection engineering, which is creating detections for our cloud infrastructure to catch malicious activity and find malicious patterns. Dealing with log diversity is a big problem. We can say long live the logs, since we really need them, but when it comes to *different* logs we definitely don't want them to live long; we want them gone as soon as possible, since that's a really problematic area.

So now for the main topic: the real problem. As we all know, logs speak different languages, and most of the time identity is lost in translation. It's the same identity across all integrations, but it's presented differently in each one. From that we get fragmented fields, and we also get fragmented defenders. The way the same identity, the same IP address, the same user agent, the same action on a cloud environment is presented is different in every one of them. There's also another topic, which is the value of the field
names, which also differs across cloud vendors, but that's a separate discussion, and there are some solutions to normalize a few of those things. The main issue is the field names. The field names are what represent, say, the action or the IP address across different cloud integrations, and that's the problem here: they're different in every one of those integrations.

Having different fields, and different ways of referring to each of them, brings longer investigation times. We have to analyze an identity separately on GCP, then investigate the same identity separately on AWS, then separately on Okta, on GitHub, or whatever else we use, and we automatically have to perform numerous translations between all those cloud vendors, which definitely means a longer investigation. We need a different approach for each compromised identity. We get query fatigue, depending on which platforms we use to extract logs, go through them, and find malicious patterns. We have to reinvent the wheel for every log source. And we have to build different detections: if we want to track an IP address and have an IOC for it, we have to build different detections, or different IOCs, for every cloud platform.

It's like driving threat hunting and incident response with a manual gearbox. You have to switch gears between Azure, AWS, Okta, GitHub, or whatever, and the worst part is that sometimes you have to shift into reverse. Every log source feels like a separate road: slower scoping, slower containment, definitely a slower response, and you have to press the clutch for every log source. Detecting the behavior, not the log format, is what we want to achieve with this tool, and it's hard, because one rule to rule
them all is still pretty difficult; you can't do that yet, given all those differences between cloud vendors. Again, logs speak different languages, and attackers mostly hope defenders get lost in translation. We have multiple cases where an attacker compromises an identity on one cloud platform, manages to get access to a crucial environment, and from there performs lateral movement across other cloud integrations to gain more access. When multiple cloud sources are affected like that, we know it's going to be a huge problem to track that identity's activity across all those platforms, considering that each vendor's logging is different.

Detections often fail, and not only because of those differences: there are also routine updates that change the fields detections are built on. And many of those detections are repetitive across integrations. For example, say we only want a detection on a specific IP address that we know to be malicious, and we want visibility into it on AWS, GCP, GitHub, Okta, or wherever. We have to do that separately for every platform, since none of them share the same field names. So we have to create different detections, even for one single IP address or one identity; we have to go through all those logs, work out what we want from them, and then create the needed detections. These are really vendor-bound detections, since you have to adapt to the vendor to build them. It's not just cloud detections anymore; it's shifting toward vendor-bound detections.

So the problem isn't a lack of data. It's that defenders are forced to chase vendor-specific field names in their hunts, incident response, and detections, instead of attacker behaviors. We have logs and data across all cloud integrations, for the most part; sometimes a cloud platform requires you to pay for logs or visibility into your infrastructure, but mostly you get logs for free, hopefully. The problem is that instead of going straight at attacker behaviors and finding malicious activity, we have to shift our focus to analyzing and translating logs. So "cloud logs"? Definitely not. It's AWS logs. It's Microsoft logs. It's Google Cloud logs. And that's what we are
going to see now. You've probably heard of eventName, the main field in AWS logs, which shows the action performed on a service; that's the key part. For Azure it's operationName.value. For GCP it's protoPayload.methodName. For GitHub it's simply action, and for Okta it's eventType. The same goes for all the others. For the IP address, AWS calls it sourceIPAddress, Azure calls it the client IP address, and GCP goes its own way again: you have to reach through a chain of nested objects, protoPayload, requestMetadata, and callerIp. Then GitHub, which thankfully went the simple route with actor_ip, and Okta with ipAddress; makes sense. Then there's the identity part, where AWS calls it principalId, Azure calls it caller, GCP again has protoPayload.authenticationInfo.principalEmail, GitHub has actor, and Okta has actor.alternateId. That's the problem we're talking about: for the same thing, say the IP address, every cloud vendor takes a different approach, which is unnecessary and makes the work harder for threat hunters and detection engineers.
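To make the fragmentation concrete, here's a rough sketch of those mappings as a lookup table. The dotted paths follow the field names just listed; exact names can vary by log type within a vendor (Azure and Okta in particular), so treat this as illustrative rather than exhaustive.

```python
# Illustrative only: the same three concepts under each vendor's field names.
# Dotted paths denote nested JSON objects (e.g. GCP's protoPayload).
VENDOR_FIELDS = {
    "aws": {
        "action": "eventName",
        "ip": "sourceIPAddress",
        "identity": "userIdentity.principalId",
    },
    "azure": {
        "action": "operationName.value",
        "ip": "clientIpAddress",  # name varies by Azure log type
        "identity": "caller",
    },
    "gcp": {
        "action": "protoPayload.methodName",
        "ip": "protoPayload.requestMetadata.callerIp",
        "identity": "protoPayload.authenticationInfo.principalEmail",
    },
    "github": {"action": "action", "ip": "actor_ip", "identity": "actor"},
    "okta": {
        "action": "eventType",
        "ip": "client.ipAddress",  # nested under client in the System Log
        "identity": "actor.alternateId",
    },
}

# Five vendors, five different spellings of "the action that happened":
action_paths = {vendor: f["action"] for vendor, f in VENDOR_FIELDS.items()}
```

Every query, hunt, and detection ends up re-encoding some slice of this table by hand, which is exactly the repetition the talk is complaining about.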
Whatever your role, it makes things pretty difficult, because, as I mentioned, if you want to perform threat hunting or analyze one identity's activity, say using Splunk or KQL from Microsoft, you have to access these fields differently for each integration. Even though you just want visibility into your identity, you first have to understand what AWS calls the IP address field or the principal ID field, then understand what GCP calls them and how to access them, since they sit under three different things: protoPayload, authenticationInfo, principalEmail. Some of that makes sense, because the vendors want to separate logs into different objects, but from the defender's perspective it gets very messy, since you have to learn and shift between all those conventions constantly as you go through threat hunting or anything else.

So yeah: same, but different, but still the same. Here we have a comparison between AWS and GCP logs. For the identity part, AWS has principalId and the ARN, both representing the identity in different ways, while on the GCP side we have principalEmail and principalSubject. Both sides are talking about identity, but each does it differently. Now for the event name, where I took
an example, CreateRole, on both sides: we have eventName versus methodName. And as you can see, methodName is part of protoPayload, the main object on top, so every time you want to access methodName you first have to access protoPayload. The same goes for the IP address, where you go through protoPayload, then requestMetadata, and then callerIp, and for the user agent, where it's much the same. This is just another part of GCP logs, which run somewhat longer than AWS logs.

So for investigating on GCP, we have multiple layers in front of the critical fields, as I mentioned, and we end up with overloaded queries. This is only a small example of how you would access some of those fields: just to get visibility, using KQL, into methodName (the main field), serviceName, the principal, the caller IP, and the user agent, you have to perform all those normalizations manually, and then repeat them for every other integration. And this is only the starting point of investigating an identity, when you suspect malicious activity is going on.
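The manual digging those queries have to do can be sketched in Python: a small helper (hypothetical, not part of the tool) that walks GCP's nested protoPayload to reach each critical field.

```python
# A sketch of the normalization a query language forces you to do by hand
# for GCP audit logs: reach through nested objects for every critical field.
def get_path(event: dict, dotted: str, default=None):
    """Walk a dotted path like 'protoPayload.requestMetadata.callerIp'."""
    node = event
    for key in dotted.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

# A trimmed-down GCP audit log event following the schema discussed above
gcp_event = {
    "protoPayload": {
        "methodName": "google.iam.admin.v1.CreateRole",
        "serviceName": "iam.googleapis.com",
        "requestMetadata": {
            "callerIp": "203.0.113.7",
            "callerSuppliedUserAgent": "gcloud/478.0.0",
        },
        "authenticationInfo": {"principalEmail": "dev@example.com"},
    }
}

caller_ip = get_path(gcp_event, "protoPayload.requestMetadata.callerIp")
method = get_path(gcp_event, "protoPayload.methodName")
```

Three hops for one IP address, and a different set of hops for every other vendor, is the per-query tax the tool is trying to remove.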
Say you want to see that identity's activity. To do that, you first have to normalize all these fields just to access them. And if that identity exists across different cloud platforms, it becomes a real problem, since you won't have one unified place to see all the logs and data for that identity, and you can't do it automatically: you have to shift from GCP to AWS, go through their logs to understand what the field names are and how to access them, then do the same for Okta, and hope it'll be easier for GitHub, repeating the whole thing again and again for every integration.

And as soon as you want to create detections, that's a problem too. You can't always write universal detections for all cloud platforms, but plenty of detections could be universal. For example, take the principal email. You have an identity you're suspicious of; you think it's been compromised and you want to follow its activity. You want a detection for that identity across all cloud platforms, because you definitely don't
want to investigate it only on GCP. To do that, you have to build the same detection for every cloud integration: perform all those normalizations and create a whole bunch of files for the same thing, for the same IP address, the same identity, or any other field. Say you want to find everywhere a user agent had something to do with Kali Linux: with the best luck you end up with five files; in the worst case you stuff them all into one file and have a total mess. Either way, lots of files, or one crowded file, for the same thing across multiple cloud integrations is not a good approach.

So, the solution I'm presenting today is called Polar Espresso, the open-source tool I've been working on. It's a modular, extensible Python framework, and it currently supports three integrations: AWS, GCP, and Azure. I'm planning to add more integrations in the future; the easiest ones to add would obviously be Okta and GitHub. Adding an integration goes through the logs: you have to analyze the logs again to see which fields
you would like to have normalized. Given all the differences between cloud vendors, that's sometimes hard, because one vendor has certain important fields and another vendor simply doesn't have them, so you have to find the common ground. What the tool does is normalization: Polar Espresso takes all those problems I talked about and normalizes all those key field names. You can also create IOCs universally across all your cloud integrations, whatever they are, as long as the tool supports them. You can create detections universally without worrying about different logging or different field names. And you can perform threat hunting and analyze an identity's activity, since the tool has unified views and detailed views, and you can analyze an identity by putting IOCs on it. The only thing you need to do for the tool to work is feed it with logs, whatever logs they are, as long as the integration is supported. That's the problem we were just talking about: methodName and eventName pointing at each other while being the same thing.

So you can brew logs, which again fits the espresso name. You can choose individual files or select a whole folder containing multiple files, and you don't need to worry if the JSON has errors; you just run it. Modularity is key to this tool. As I mentioned, it supports AWS, GCP, and Azure, and the next version is planned to support Okta and GitHub as well. It also adapts to custom needs: when defenders want to change the tool to perform investigations their own way, they can, and that's pretty easy given how the tool is separated.
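A minimal sketch of what that brew step might look like internally: load events from a single file or a whole folder, and skip files whose JSON fails to parse instead of crashing. The function name and file layout are assumptions, not the tool's actual API.

```python
# Hypothetical "brew" step: gather events from a file or folder of JSON logs,
# tolerating malformed files rather than aborting the whole run.
import json
from pathlib import Path

def brew_logs(target: str) -> list[dict]:
    path = Path(target)
    files = sorted(path.glob("*.json")) if path.is_dir() else [path]
    events: list[dict] = []
    for f in files:
        try:
            data = json.loads(f.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # malformed file: skip it, keep brewing the rest
        # Accept either a list of events or a single event object
        events.extend(data if isinstance(data, list) else [data])
    return events
```

The point of the error handling is the user experience described in the talk: you point the tool at a folder and "just run it," without pre-cleaning every export.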
Its architecture is built so that you can add more integrations without breaking the others, and you can choose all the field names you want visibility into without worrying about making anything worse. Since logs speak different languages, normalization gives them a common language; like a translator, it makes them understand each other. So for eventName on AWS, operationName.value on Azure, and protoPayload.methodName on GCP, we just called it event action. That makes the most sense, since it's an action performed by an identity, so event action is the most appropriate name. For sourceIPAddress, the client IP address, and protoPayload.requestMetadata.callerIp, it's just actor IP. The reason I called it actor IP and not IP address is that some logs have more than one IP field: one for the actor who performed the action and, if you scroll down, sometimes another representing the target IP, the affected user in some integrations. For example, when an admin assigns a role to somebody else, the logs contain the actor IP, which is the admin's IP, and also the target's IP address, the one affected by that role change. For principalId, caller, and (to keep GCP short) principalEmail, it's just actor email. And for eventSource, resourceProviderName, or serviceName, it's just service, which makes the most sense and is the easiest to remember.

Now I'll show a bit of the main screen of the tool; the demo is coming shortly. The screen is separated into three sections.
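Put together, that normalization can be sketched as one extractor per integration emitting the same unified keys. The logic below mirrors the field names discussed in the talk; it's illustrative, not the tool's actual mapping code.

```python
# Sketch of normalization into unified names (event_action, actor_ip,
# actor_email, service). Per-vendor extraction here is illustrative.
def normalize(event: dict, integration: str) -> dict:
    if integration == "aws":
        return {
            "event_action": event.get("eventName"),
            "actor_ip": event.get("sourceIPAddress"),
            # AWS has no email here; principalId stands in for the actor,
            # following the talk's identity mapping
            "actor_email": event.get("userIdentity", {}).get("principalId"),
            "service": event.get("eventSource"),
        }
    if integration == "gcp":
        p = event.get("protoPayload", {})
        return {
            "event_action": p.get("methodName"),
            "actor_ip": p.get("requestMetadata", {}).get("callerIp"),
            "actor_email": p.get("authenticationInfo", {}).get("principalEmail"),
            "service": p.get("serviceName"),
        }
    raise ValueError(f"unsupported integration: {integration}")

aws_event = {
    "eventName": "CreateRole",
    "sourceIPAddress": "198.51.100.9",
    "eventSource": "iam.amazonaws.com",
    "userIdentity": {"principalId": "AIDAEXAMPLE"},
}
row = normalize(aws_event, "aws")
```

Once every event is a row with the same keys, everything downstream (views, groupings, IOCs) only ever needs to know the unified names.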
In the upper part we have the event list. Down below we have the identity activity analysis, with two visualization options for seeing an identity's activity, and on the left we have the IOCs panel, showing the IOCs we create depending on our needs. The purpose is to normalize once and then be able to investigate every time and everywhere, no matter the cloud vendor. The tool already normalizes those key field names, and all a threat hunter or detection engineer has to do is dive in and investigate, without manually going through each cloud integration separately, performing all those translations, and trying to understand every integration's way of logging things.

So basically we try to turn vendor chaos into behavioral clarity. We have unified main event fields with clear names. You can add or remove field names as needed, so you use only what your investigation actually requires. And we have the IOCs, where you can create custom, simple IOCs based on an investigator's or detection engineer's needs. They can be integration-specific, for AWS, GCP, or Azure right now, and they can also be universal, as I mentioned. If you just want to
track an identity, an IP address, or a user agent, you can create a universal detection that works across all cloud integrations. They can also easily get more advanced. You don't have to build a detection on just an IP address; you can combine fields, for example tie an IP address to an action. Say this source IP address has been seen doing a particular POST against Kubernetes and you want a detection on that: you can combine those field names and go further still. Combined fields can even be used as a kind of query, to quickly pinpoint activity of interest. If you just want an overview of how many times an action has been performed in an environment, you can build an IOC for that too.

This is the IOC view. When you click into an IOC, you can see the distinct numbers of events, users, IPs, actions, and services: how many events relate to this particular IOC, how many users have triggered it, what IP addresses those users used, and the services affected. All of that comes up just by clicking "view details," which is a good way to get visibility into the IOC, the reasons it was triggered, and the events tied to it. So basically, the goal is to take an espresso shot and hunt the threat, without doing all of those things manually. Polar Espresso enables threat hunting and definitely makes it easier: as long as you have the logs, you just feed the tool with them and it's all in there.
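That kind of combined-field IOC can be evaluated with something as simple as requiring every listed condition to match. The rule structure below is hypothetical; the tool's actual IOC format may differ.

```python
# Sketch of a combined-field IOC: an IP address only fires when it is tied
# to a specific action. Substring matching keeps the example simple.
def ioc_matches(rule: dict, event: dict) -> bool:
    # Every condition in the rule must hold against the normalized event
    return all(str(rule[k]).lower() in str(event.get(k, "")).lower()
               for k in rule)

rule = {"actor_ip": "203.0.113.7", "event_action": "pods.create"}

hit = {"actor_ip": "203.0.113.7",
       "event_action": "io.k8s.core.v1.pods.create",
       "service": "k8s.io"}
miss = {"actor_ip": "203.0.113.7",
        "event_action": "storage.objects.get"}
```

Because the rule is written against the unified field names, the same rule matches AWS, GCP, or Azure events without being rewritten per vendor, which is the whole point of a universal IOC.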
You can perform compromised-identity investigation, and you have multiple visibility options: the normalized event list, the IOCs we just spoke about, and the identity activity analysis, separated into two views. To go even further, you can search across all identities, IPs, or actions, and each search provides information about the related activity, be it an IP address or an identity. As you can see in the examples, if you search for an identity you get all the details for that identity alone: all the IP addresses it has used, all the actions it has performed, and the services involved. The same goes for an IP address: how many users have used it, and where it has been seen. For example, you might know that an IP address is malicious and want to see which users in your company, or across your environments, have used it. You just search for it and you'll find out. Just as simple as that.
We're going to dive into the tool demo now. The public release is actually done already; it was released during the last conference and it's available on GitHub. I'll show you the link at the end. So let's proceed with the demo. After downloading, we access the main file and brew logs into the tool. We select the logs we want to use: the "brew" part goes with the espresso theme, so it makes sense that this is where you put the logs in. You can select individual files or a whole folder containing different JSON files from different cloud platforms. Here you can see multiple sessions containing logs. These sessions can be from any cloud provider, but in this case we'll just select the whole folder with all the logs and sessions we want to analyze. You can see the number of events over there.

As I said, the tool is separated into three sections: the event list, or detailed view; at the bottom, the identity activity analysis; and on the left, the IOCs, which we also talked about. As you can see, the event list has all the unified field
names I just presented to you. It has the integration; the source file, which is the file the event comes from, tied to the integration; the event action; the event time; the actor IP; the actor email; the user agent; the service; the resource; and the IOC hits, with the IOCs on the right. To make it easier, you can right-click and show the IOCs for an event, or click into the IOCs to see all the ones tied to that single event. You can scroll down for an overview of all the events we fed the tool with.

Then we can group by IP address, where we see the event count for each IP: how many events came from that IP, the first seen and last seen, the unique actors and unique actions from that IP, and the integrations related to that same IP address. Then we have the group-by-user view, which shows similar information from the user's perspective: the event count, first seen and last seen for that user, the unique IPs related to that user, and the unique actions performed. Say a user has 227 events but only five unique actions: the actions were simply repeated. We can also group by action, seeing the actions performed, how many times, the first and last time each was seen, the unique IPs used, and the unique actors who performed each one; and every view or grouping also shows the integration. At the bottom here we have the identity activity timeline, but first let's talk a bit about the IOC panel, which shows all the IOCs you have created.
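Those group-by views boil down to a per-key aggregation like the sketch below, assuming the unified field names (actor_ip, event_time, actor_email, event_action); the aggregation code is illustrative, not the tool's.

```python
# Sketch of the group-by-IP view: per-IP event count, first/last seen, and
# the sets of unique actors and actions, over normalized events.
from collections import defaultdict

def group_by_ip(events: list[dict]) -> dict:
    groups: dict = defaultdict(lambda: {"count": 0, "first_seen": None,
                                        "last_seen": None, "actors": set(),
                                        "actions": set()})
    for e in events:
        g = groups[e["actor_ip"]]
        g["count"] += 1
        t = e["event_time"]  # ISO-8601 strings compare chronologically
        g["first_seen"] = t if g["first_seen"] is None else min(g["first_seen"], t)
        g["last_seen"] = t if g["last_seen"] is None else max(g["last_seen"], t)
        g["actors"].add(e["actor_email"])
        g["actions"].add(e["event_action"])
    return dict(groups)

events = [
    {"actor_ip": "203.0.113.7", "event_time": "2024-05-01T10:00:00Z",
     "actor_email": "a@example.com", "event_action": "CreateRole"},
    {"actor_ip": "203.0.113.7", "event_time": "2024-05-01T12:00:00Z",
     "actor_email": "a@example.com", "event_action": "CreateRole"},
]
stats = group_by_ip(events)["203.0.113.7"]
```

Grouping by user or by action is the same aggregation keyed on actor_email or event_action instead, which is why all three views can share one code path.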
Some are pre-built as examples, but you can also create your own. You can see the total number of events related to each IOC, and the distinct numbers of users, IP addresses, actions, and services affected when the IOC was triggered. Clicking "view details" dives into the details of that indicator of compromise: the first time it was triggered and the last time, plus a statistical overview at the bottom with the total number of events, the users who triggered this IOC (in our case, system:anonymous), the number of IPs, and the actions. There's a dropdown where you can scroll through the IP addresses related to system:anonymous and, on the other side, the actions that identity performed, along with the events that triggered the IOC we created.

The last part is the identity activity analysis, a timeline showing an identity's activity over time, which you can also customize. You can basically see from the spikes when an identity had higher activity, which might mean malicious activity or something like it; if it's not constant, there's definitely something going on, and you can also see the counts. The visualizations are straightforward and simple to understand, giving you an overview of what's been going on.

For the next part we'll create an IOC, just to see how it's done. This is the popup where you create IOCs. You have the integration selection, where you can pick AWS, GCP, or Azure as needed, or go with universal and have the detection across all cloud integrations. Next you have the rule
name, which you can set to reflect why you created the IOC; you can call it a detection as well. In this case we'll call it "system anonymous doing some cool stuff," or rather not-so-cool stuff, since system:anonymous definitely doesn't do cool stuff. You have the description, and you have the condition field, where you specify all your IOC logic. It comes pre-filled with a system:anonymous example by default, just to help you understand how to approach IOC creation; you can change the field values, or the field names if you want to target, say, a service or a resource. In this case we'll create a detection for system:anonymous performing a Kubernetes POST where the event response code is zero, meaning it succeeded, which is a bad thing: most of the time activity from system:anonymous gets denied, so if it gets a response code of zero, that's a bad sign. So we'll create a detection for that. As soon as the IOC is created, it is automatically saved to the folder, and the way to load it is to just reload the IOCs bar over there.
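The demo IOC's condition amounts to a three-part check: the anonymous identity, a Kubernetes action, and a successful response. Plain Python below stands in for the tool's condition syntax, and the Kubernetes method-name value is a guess at the kind of string involved, not taken from the demo.

```python
# Sketch of the demo IOC: system:anonymous succeeding at a Kubernetes call.
# Response code 0 means the call went through, which should never happen
# for this identity; denials are the normal, boring case.
def suspicious_anonymous(event: dict) -> bool:
    return (event.get("actor_email") == "system:anonymous"
            and "io.k8s" in event.get("event_action", "")
            and event.get("response_code") == 0)

allowed = {"actor_email": "system:anonymous",
           "event_action": "io.k8s.core.v1.pods.create",
           "response_code": 0}            # succeeded: worth an IOC hit
denied = dict(allowed, response_code=7)   # denied, as it usually is
```

Note the logic inverts the usual instinct: the success case is the alert, because a denied request from system:anonymous is just background noise.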
And you can see it showed up, with the number of events related to this IOC, the users, the IPs, and the things we talked about previously. Next I'll show the search feature, where you can search for identities, IP addresses, and actions. We'll search for system:anonymous again; you can search for any identity across your logs. You can see all the details; it matches the IOC views, except here we're being more specific, searching for just one identity. A nice feature to add in the future might be an IOCs section here too, showing the IOCs related to this exact identity.

The same can be done for IP addresses or actions, which we'll show next. You only need to type a couple of digits of an IP address and you'll get recommendations for all the IP addresses matching that pattern across your environments. It shows the same kind of information; here it's just a test IP address with some test actions. Again you have the active period, showing first seen and last seen, and all the other information for this IP address, and at the bottom all the events tied to it, where you can go more in depth. You can do the same for actions. Say you have an action of interest and want to check who performed it and from which IP addresses: you'll see it right here, along with the first and last time the action was performed and the full activity breakdown: the users, how many times each user performed the action, the IP addresses, the services, and the events.
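The partial-IP search can be sketched as a substring match over the normalized events, with the summary built from whichever events share the chosen value; the matching logic here is illustrative.

```python
# Sketch of the search feature: a typed fragment suggests matching IPs,
# then a summary is built from the events sharing the chosen value.
def search_ips(events: list[dict], fragment: str) -> list[str]:
    return sorted({e["actor_ip"] for e in events if fragment in e["actor_ip"]})

events = [
    {"actor_ip": "10.0.0.5", "actor_email": "a@example.com"},
    {"actor_ip": "10.0.9.9", "actor_email": "b@example.com"},
    {"actor_ip": "192.168.1.1", "actor_email": "a@example.com"},
]
matches = search_ips(events, "10.")  # recommendations for the fragment
# Summary for one chosen IP: which identities have used it
users = {e["actor_email"] for e in events if e["actor_ip"] == "10.0.0.5"}
```

Searching identities or actions works the same way over actor_email or event_action, which is why one search box can cover all three.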
So the aim of this tool was to have a unified platform that you can feed with logs from every cloud vendor, with normalized field names, because normalization is the main aspect of this tool: normalizing all those field names and having them in one place. As you can see, if you want to filter by actor, user, or IP, you have the integration on the left side and the main field names on the right side, unified, so you don't need to write different translation queries for every cloud vendor and learn how each one of them logs activity.

That is sometimes a real pain. For myself, when I have to create detections, say for GCP, I have to go through its log formats and learn its way of logging things, and that takes time. Then, when I finish that detection and have to create one for AWS, I have to do it all over again: learn how AWS logs things and analyze its main field names so I can create an effective detection that works consistently. Having this tool makes it easier, for me personally, to analyze what I want quickly and effectively, because I can create detections very fast across all integrations.

And it's not only the field-name problem; other things affect the detection process too, such as field values. In GCP, for example, they change, probably not often, but they do; GCP has these version names, and you have to write detections that won't miss events just because of a version change on the GCP side. But that's another topic; the main topic here was the field names, which this tool aims to normalize.

So the three takeaways were: normalize once and investigate faster across all logs; build lightweight detection rules that work across vendors; and use Polar Espresso for threat hunting, incident response, and quickly creating IOCs. Thanks, everybody, for your time, and if you have any questions, go ahead.
>> Yep.
>> Excuse me, again? I can't hear you well. Uh-huh. Yeah, you can feed the tool with as many logs as you want; it won't fail most of the time. It depends on the detections and IOCs you create and want to run across all those logs. The IOCs part is the one part of the tool that can take more time to load if you have a lot of IOCs at once, or repetitive IOCs checking for the same thing across, say, 5,000 events. That's going to be slower, but it will still work; it will just be slower, or it might crash from time to time. So the main idea is to avoid repetitive IOCs and focus on the specific IOCs you want, keep track of those, or keep a list of the IOCs you have, so you don't create a new IOC for, say, the same IP address once per integration. That's why it has the universal option: to select and create one IOC for all integrations. So for the most part, that's not a problem for the tool.
>> Yep.
>> Okay.
>> Okay.
Mhm. Yeah.
>> Yep. So, the difference this makes, the way it differs from a lot of the SIEM tools and platforms currently in use: as I mentioned before, you have Splunk, you have that other one, and a lot of other tools you can use for going through identity activity. The thing that makes this different is the normalization, which takes the main field names from every kind of log source. Splunk, for example, doesn't do that: it doesn't go through AWS logs, GCP logs, Okta logs, across all the vendors, and try to come to a common ground, a set of fields that matches all of them. What this tool does, and I have put effort into this, is normalize those key field names into something that makes more sense and can be used across IOCs, or even just for going through identity activity. That is one thing that makes it different. You don't have to normalize every time you want to search for something, and you don't have to go through the normalization process over and over depending on the platform you want to investigate or hunt for malicious activity in. It is already normalized, so you can dive straight into the investigation process or IOC creation without translating logs again and again and adapting to different ways of logging.
>> Yep.
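The normalization idea described here amounts to per-vendor maps from raw field paths to one shared vocabulary. A toy sketch; the raw paths below are loosely modeled on CloudTrail and Okta shapes but should be treated as assumptions, as should the common names:

```python
# Per-vendor maps from raw (dotted) field paths to common field names.
FIELD_MAPS = {
    "aws": {
        "userIdentity.arn": "actor",
        "sourceIPAddress": "ip",
        "eventName": "action",
    },
    "okta": {
        "actor.alternateId": "actor",
        "client.ipAddress": "ip",
        "eventType": "action",
    },
}

def get_path(event: dict, dotted: str):
    """Walk a dotted path like 'userIdentity.arn' through nested dicts."""
    cur = event
    for part in dotted.split("."):
        if not isinstance(cur, dict):
            return None
        cur = cur.get(part)
    return cur

def normalize(event: dict, vendor: str) -> dict:
    """Project one raw vendor event onto the shared field names."""
    return {common: get_path(event, raw)
            for raw, common in FIELD_MAPS[vendor].items()}
```

With this in place, an IOC or search written against `actor`/`ip`/`action` applies to every vendor's logs without per-vendor translation queries.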
>> Excuse me again.
>> Yeah. So the way to get logs from different platforms right now is through JSON files. If you have, say, multiple sessions, or only certain events you want to investigate, as long as they are in one or more JSON files, you can select the whole folder and the tool will load up all those logs and data for you to investigate and go through.
>> So I'm asking whether this application is actually sending...
>> Yeah. So it doesn't have a direct API to connect to Google, AWS, or Azure. The thing here is that you have to have the logs locally. As long as you have the logs, download them and put them into JSON format, and you can load them into the tool.
>> Yeah.
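The folder-of-JSON-files loading step might look roughly like this; the exact layout Polar Espresso expects (one event per file versus a list per file) is an assumption here:

```python
import json
from pathlib import Path

def load_logs(folder: str) -> list[dict]:
    """Load every *.json file in the folder into one flat event list."""
    events: list[dict] = []
    for path in sorted(Path(folder).glob("*.json")):
        data = json.loads(path.read_text())
        # Accept either a single event object or a list of events per file.
        events.extend(data if isinstance(data, list) else [data])
    return events
```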
Yep.
>> Yeah.
>> Yeah. So, let me just get back to the slide here real quick and look at the example for Okta. Okay. For Okta and GitHub, as you can see, they already have a more direct and understandable way of logging things and naming fields: Okta calls the IP address just that, an ipAddress under the actor, the actor an alternateId, and the action an eventType. So it's much easier to normalize Okta and GitHub than it was to normalize AWS, Azure, and GCP. For those two it will be a pretty simple process of referring to the same fields, just shifting the names a bit to match the tool's naming format; we'll map those already well-named fields onto the same names used for the other integrations.

As for IOC creation, as I mentioned before, you can create different IOCs based on conditions you specify. It doesn't need to be a simple IOC for only an IP address or a user agent: you can combine conditions. Say you want an identity, an IP associated with that identity, and you only want to track those two when a specific action happens; you can do that. And when you add more field names, or normalize even more fields, you can again do that by adding more conditions to the IOCs. As for when an IOC is triggered, it's pretty simple: you just refresh and reload, and the IOCs will load up. The IOCs part will not be affected by adding more field names or integrations.
>> Yeah.
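A combined-condition IOC like the one described, an identity plus an associated IP plus a specific action, applied across all integrations, might be stored as something like the following. The schema and every value in it are illustrative, not the tool's real format:

```json
{
  "name": "tracked-identity-ip-action",
  "integrations": "all",
  "conditions": [
    {"field": "actor", "equals": "suspect-user@example.com"},
    {"field": "ip", "equals": "203.0.113.7"},
    {"field": "action", "equals": "AssumeRole"}
  ]
}
```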
>> [Inaudible audience question about following an identity's activity across integrations.]
>> Yeah. So you mean having a part where you can follow the activity of an identity from, say, the beginning at the SSO through all the cloud integrations. Yeah, that is a great idea as well, but that is a lot more, and it's actually close to what Permiso, the company I work for, does: they have the universal identity, where you can analyze an identity's activity starting from the SSO or login process and MFAs, and through all the integrations where that identity performs activity, with all those related activities shown in a graph format. So you can follow all of that using the company's product, which aims to build a universal identity across all cloud platforms and unveil the threat actors hiding in identity patterns.
>> Yeah.
>> Yeah.
Yeah, yeah. Definitely, that makes sense. Thank you. [applause] Thank you very much.