
Cool, I'm good. All right, let's get started. Good afternoon, and thank you so much for having me. I am super, super excited to be here today. As Bruno mentioned, I had the pleasure of founding this conference with him ten years ago, and wow, it's been ten years. I'm here to talk a little bit about AI, and I can already hear some groaning in the audience. Yes, we're going to talk about AI, but I'm going to try to talk about reality. Just to give you some context: my name is Thiago, and I work for a company called Coalition. We sell cyber insurance, but we are not your typical cyber insurer: we scan the entire internet, we run large networks of honeypots, and we do a lot of technical work when doing underwriting. We found that we needed some help from AI to do some of the things we do. I run the research team. Let's dive in.

My objective for today is to show you what is possible: realistic, day-to-day use cases for AI. My second objective is that you walk away from this presentation wanting to try some of these things, and thinking, when you go back to your jobs on Monday, "Hey, could I try this? Could it make me better?" And by the way, I don't know about you, but what I've found is that if you use AI, it can accelerate you, meaning you can be lazier and still get paid the same, and that typically tends to be a really good thing.

Let's go back to basics before I dive in deep on the use cases; I want to give you a foundation in AI. On November 30th, 2022, OpenAI introduced ChatGPT. One mistake people make is thinking that ChatGPT is everything. ChatGPT is actually the application layer that sits on top of models called the GPT models, and the application is what all of you used. The other thing I'd like to point out is that it's been pretty much one year. It hasn't been that long. So as you watch my examples today, don't think just about today: think about the fact that this is where we are after one year, and imagine where we're going to be a year from now, or five years from now.

There are five things I would like you to know when it comes to AI, and I promise I'm not going to use a single mathematical formula, because we're security people, security engineers, developers; there are ML engineers who can take care of the mathematics. We're going to focus on the practical aspects.
Large language models are essentially general-purpose machine-learning models. Imagine having a huge pile of books, not from a specific vertical, not just medicine books or law books, just books, a really large pile of them. That's essentially the text, scraped from the internet and taken from books, that was used to train these models. These models can also be fine-tuned for specific purposes. You can think of it like owning a dog: you can have a normal dog, or a dog trained to help a cop, or a dog trained to sniff drugs, or a dog trained to compete in agility. Fine-tuning a model is the same thing: you end up with something that has learned a very specific set of skills. A large portion of what a model can do comes down to the training data sets used to train it and the number of parameters it gets.

The other important thing: GPT isn't the first or the only one. Over time, lots of different models have been released, from Google, from Meta, open-source models, Anthropic. So yes, use OpenAI, it's great, but don't buy fully into the hype. There are cheaper ways to do these things; you can even run models on your own laptop, which comes with a lot of privacy benefits as well.

As parameters grow, so do the use cases these models can handle. In the simpler ones it's typically language prediction: they can generate text or act as a chatbot. More advanced models can write code for you, generate images, lots of interesting things. The other thing people tend to confuse is thinking that this is just about text. It isn't. GPT-4V, which OpenAI launched a couple of weeks ago, is actually the first multimodal model launched by them,
meaning you can feed an image to a text model, and that allows you to do really interesting things. The image I'm showing on the right is from our internet-wide scans: we caught a remotely managed computer running VNC, a management protocol, open to the internet. There is one tiny problem, though: I don't speak Japanese. When I find these things I'm really curious about what they are, so I asked GPT to help me. All I did was give it the image and ask, "Please describe the attached image," and GPT immediately started doing just that. It explained that there is a specialized tablet called a TNC 5000, grabbed the portions that were in Japanese and translated them all for me, and at the end it even noted that there's something weird with this device, because the units for speed and noise appear to be incorrect, or may be a specific notation used by this type of device. This is really cool: I gave a machine-learning model an image and it handled translation, interpretation, and summarization for me in a couple of seconds.

Prompting. Just like you go into Google and type a query, prompting is how you interact with these models, and prompting changes how the model is going to act and what it's going to do for you. Two examples here: on the left I ask, "Is there a vulnerability in this code?" and paste a bunch of C code, and it gives me a list of the vulnerabilities that were present. On the right I say, "Write me a secure version of this code. No prose, don't explain anything, just give me the code," and it just starts writing a more secure version of the code, still in C. Prompting is essentially how you ask things of the model.

Embeddings. An embedding is what you get when you take a piece of text and translate it into a numerical vector.
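To make that concrete, here's a toy sketch of the idea. A real system would get its vectors from an embedding model rather than a bag-of-words count, but the geometry works the same way:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Real embeddings come
    # from a trained model and capture meaning, not just shared words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

s1 = embed("a man walked a dog")
s2 = embed("a man and dog are walking")
s3 = embed("quarterly insurance filings are due")

# The two dog sentences land close together; the unrelated one does not.
print(cosine(s1, s2) > cosine(s1, s3))  # True
```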
If you want one of these machine-learning models to understand some of your own data, data the model has not been trained on, you convert that data into embeddings, and then you're able to use it with the model. Because sentences become numeric vectors, you can do distance and similarity computations, clustering similar things together. "A man walked a dog" and "a man and dog are walking": those two sentences would automatically be close to one another.

Retrieval-augmented generation. These models are trained on a fixed data set, but say you want to use your own data, or data more recent than what the model was trained on. What you do is build a database of embeddings, and when you're querying, you first query that database, pull the information that's relevant, and then feed that relevant information into the model as part of your prompt. So if you're prompting "who walked the dog?", the first query goes into the database looking for content about walking a dog; it pulls all the sentences related to walking a dog, your prompt to the model goes out with those sentences plus your question, and the model responds: "the man walked the dog."
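A minimal sketch of that retrieve-then-prompt flow, reusing a toy word-overlap score in place of real vector search (the knowledge base, scoring, and prompt template are all illustrative):

```python
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "The man walked the dog on Tuesday.",
    "The cat slept all afternoon.",
    "Dog walking is best done in the morning.",
]

def embed(text):
    return Counter(text.lower().strip(".?").split())

def score(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=2):
    # Stand-in for a vector-database similarity query.
    q = embed(question)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda s: score(q, embed(s)), reverse=True)
    return ranked[:k]

def build_prompt(question):
    # The retrieved sentences ride along with the question;
    # the actual LLM call is omitted here.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who walked the dog?")
```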
And then there are agents. You can think of agents as everything I explained before, but you also give them tools, meaning you can have an LLM with access to bash, access to a browser, access to whatever APIs you can think of or use day to day. Agents are the future. This is a modern AI stack, if you want to experiment at home: a bunch of different LLMs, frameworks to interact with the LLMs, and vector databases where you can store those embeddings. Congratulations, you're all AI experts. That's it. I know the AI people are going to hate me for this, but you make it way more complicated than it actually needs to be.
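The agent idea, an LLM choosing tools in a loop, can be sketched like this. The "plan" below is hard-coded to stand in for the LLM's decisions; in a real agent the model picks the tool and the arguments itself:

```python
# Minimal tool-using agent loop. Everything except the scripted plan
# mirrors the real shape: a name -> function tool map, and a loop that
# feeds each tool's output back as an observation.

def web_search(query: str) -> str:
    # Stand-in for a real search-API call.
    fake_index = {"rdp default port": "RDP listens on TCP port 3389."}
    return fake_index.get(query.lower(), "no results")

def calculator(expr: str) -> str:
    # Tools don't have to be fancy; this one only accepts arithmetic.
    allowed = set("0123456789+-*/(). ")
    if not set(expr) <= allowed:
        return "refused"
    return str(eval(expr))  # acceptable for this whitelisted toy, never for real input

TOOLS = {"web_search": web_search, "calculator": calculator}

def run_agent(plan):
    """Execute (tool, argument) steps and collect observations."""
    observations = []
    for tool_name, arg in plan:
        observations.append(TOOLS[tool_name](arg))
    return observations

obs = run_agent([("web_search", "rdp default port"), ("calculator", "3389 + 0")])
```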
So let's start talking about what is actually viable. Honeypots. I'll switch here real quick. As I mentioned before, we run a fairly large network of honeypots. To give you an idea, in just the last 30 days we saw 46.3 billion events against our honeypots, and events are malicious exploitation, internet-wide scans, all sorts of things. You can see on the right that we build tags to automatically classify these events. Now, this requires a lot of work. A quick example: on November 1st we saw 27,531 payloads that we had never, ever seen before, and this happens every single day. Think about being a security engineer who has to grab every single one of those payloads, go into Google, understand whether it's exploiting a CVE or which technology it's associated with, and then write a rule to tag those payloads. It's a lot of work, and honestly, no matter how many security engineers I hired, I would never have enough to keep up with everything that's happening. So we essentially turned to AI to help us out.

We have our honeypots; we have a rules engine that automatically tags all these payloads; all of this gets stored in BigQuery. We do anomaly generation, meaning we look at all the payloads we received and select only the ones we've never seen before. We then group them together, because often we see payloads that are fairly similar and have just a dynamic portion, for example where they're trying different usernames. We then pass those to an AI agent, and the agent, using the Google Search API and an LLM, automatically generates the rules for us. Right now we are not merging these rules in automatically: they feed a web application where a security engineer just says approve or don't approve. When one is approved, it generates a PR into our GitHub and the new rules land. I'll show you a quick demo. Looking at this day,
for example, I'm going to pick this payload and grab the full version from here. And this is AI, so if my demo fails, you don't get to throw stones; these things aren't always stable. I paste the payload in, give it a second, and what it's doing now is breaking the payload into parts, then using a combination of Google queries to work out what it is. As you can see, it says the payload is associated with F5 BIG-IP, it's considered malicious, and it also generated the rules to automatically tag these things. The other interesting part you might notice: it tried to create as generic a rule as possible, to cover as many payloads as possible, by removing the dynamic portion of the payload, the username being attempted, from the rule itself. And the great thing is that it can just eat through these payloads nonstop. So instead of having security engineers sitting day in, day out looking at these things, we no longer need to; we can just have AI help us tag these payloads.
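The grouping step described above, collapsing payloads that differ only in a dynamic portion, can be sketched with a simple normalizer. The masking rules here are illustrative, not our production logic:

```python
import re

def normalize(payload: str) -> str:
    """Mask the dynamic parts of a payload so near-duplicates collapse
    into one group (and one candidate tagging rule)."""
    p = payload
    p = re.sub(r"(user(name)?=)[^&\s]+", r"\1<VAR>", p, flags=re.I)  # tried usernames
    p = re.sub(r"\b\d{1,5}\b", "<NUM>", p)                           # ports, counters
    p = re.sub(r"\b[0-9a-f]{8,}\b", "<HEX>", p, flags=re.I)          # session tokens
    return p

seen = {}
for raw in [
    "GET /mgmt/tm/util/bash?username=admin&cmd=id",
    "GET /mgmt/tm/util/bash?username=root&cmd=id",
    "GET /mgmt/tm/util/bash?username=guest&cmd=id",
]:
    seen.setdefault(normalize(raw), []).append(raw)

# All three attempts collapse into a single group; a rule written against
# the normalized form covers every username variant.
groups = list(seen)
```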
As an example of the benefits: many of you might have heard about CA, which just came out this week, with LockBit, the ransomware group, using a zero-day in CA. We had actually observed scanning for CA on our honeypots all the way back on March 15th. So when the vulnerability was disclosed this month, it was super easy for us to go to our honeypots and ask, "Do we see anything related to CA?", and because it was tagged, it was just as easy to turn around and use the same payloads people were sending us to identify, in our own scans, which of our clients were exposing CA to the internet. We didn't have to drill into the payloads and try to understand how they work, because AI had already tagged them for us. You can think of this as searching for a needle in a haystack, and the haystack is really, really big, because, as I mentioned, we get 40-plus billion payloads every single month. This is an example of a PR generated by the LLM into our GitHub repos. It still gets approved by a human, but we're hoping that by the end of the year this will be fully automated and humans in the loop will no longer be required.

One interesting thing to watch over time is the growth in our honeypot tags. To give you an idea: before we started using GPT-Honeypot, which is what we call the system, it took us 12 months with just humans to go from 300 to 400 tags, but we went from 400 to 500 in just four months, and that's still with human approval. There's nothing stopping us from making this fully automated tomorrow and getting a massive increase in the number of new things we're able to observe. Oh, and by the way, it's not just giving us new things: for the things we already knew about, it fine-tunes our rules. Say a new variation of an attack shows up, for example Log4j's JNDI payloads: it started with plain "jndi", then came the obfuscations alternating one uppercase and one lowercase character. It covers those automatically for us as well.

Second use case: source-code analysis and web-application pen testing. It works just like a human: the more you can give it, the better it will work. And as you remember, I mentioned that GPT is multimodal now, so images are also useful, and I want to walk you through an example. In the beginning I attach a ZIP file with the source code for a web application; I'm using the OWASP Juice
Shop as the example, and I also give it a text file with the URLs from spidering the web application. It immediately found a potential vulnerability, cross-site scripting, and it writes a proof of concept. I ask it, "Show me the proof-of-concept URL," and it tells me to test it in the user profile. I just keep going: I test it, and it actually didn't work, so I give it a screenshot showing that it didn't work, and I ask for more examples with obfuscation. The LLM automatically generates cross-site scripting examples with obfuscation. Letting the video keep going: I also gave it a screenshot of the main menu and the profile page it had asked me to try; it didn't work. I asked where else I could test, and it told me to try the search function. I pasted the payload and it still didn't work; I gave it the screenshot and told it, one more time, that it didn't work, and asked, "Can you give me 10 more examples to try in the search function?" It gave me more, and, skipping a little forward, the cross-site scripting triggered. I gave it the screenshot and just said, "This was the output," and it replied: "Based on your description, it sounds like one of the payloads successfully triggered, which means the input was not properly sanitized," with the full description. So I was able to find a cross-site scripting vulnerability. The cool thing is that I can then just ask, "Write me a report for the maintainers," and it gives me a fully written report describing the vulnerability, steps to reproduce, all those things that, honestly, as a pentester I hated doing because they were boring as hell. So: it can find vulnerabilities, and it works fairly well. Let's see if I can get back to presentation mode. There we go.

From my personal research, I would describe GPT-4 as an analyst, for code or network packets, with one or two
years of experience, but with the writing skills of a senior. I've been testing it against a bunch of open-source software, and up until now I've been able to find two new zero-days in Boa, an older web server that is still heavily used in embedded software, and I was also able to find a zero-day in Redis. The one thing I will say: the version I use in my research is more agent-based; it's not just pasting things into ChatGPT as in my examples. What that means is that I give it access to a server running the target software, and it can automatically test the payloads it produces against that software and detect whether the exploit was successful or not. I do a couple more shenanigans to automate this and run it at scale; I'll leave setting up a pipeline like that to you, because fun things can happen with it. I also created a custom GPT for source-code analysis, and you're more than welcome to try it. All you have to do is take a ZIP file with source code, throw it into the GPT, and ask it to find a type of vulnerability, and it automatically starts doing its thing.
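A sketch of that agent loop: generate a payload, fire it at a live instance, check whether it triggered. The vulnerable "service" here is a stub standing in for a real deployment, and in the real system the candidate payloads come from the LLM rather than a list:

```python
# Toy exploit-verification loop in the spirit of the agent setup described.

def search_endpoint(q: str) -> str:
    """Stub of a vulnerable search page: it strips <script> tags
    but forgets about event-handler payloads."""
    cleaned = q.replace("<script>", "").replace("</script>", "")
    return f"<h1>Results for: {cleaned}</h1>"

CANDIDATES = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
]

def triggered(response: str) -> bool:
    # Crude success check: did an executable payload survive into the page?
    return "onerror=" in response or "<script>" in response

working = [p for p in CANDIDATES if triggered(search_endpoint(p))]
```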
Pcap analysis, for those of you who do networking. There are some limitations: if a library doesn't exist in the environment, the LLM can't just install it. But if that happens, it can still act as a copilot, meaning it will ask you, "Can you please install it and give me the output?" As an example, I gave it a sample pcap and told it: "Here's a network that is under a DoS attack. Please analyze it, find out what's causing the DoS and the type of DoS, and also generate some supporting plots to go with the analysis." It gives me errors, because it doesn't have Scapy installed to analyze the pcap, but then it tells me: "You can write some Python code that reads the pcap and exports everything as a CSV, and then I'll be able to analyze it." Of course, I am super lazy and cannot be bothered writing Python code, so I asked it to write the Python code on my behalf. It writes the code, and all I needed to do was copy-paste it, run it against the pcap, and give it the output CSV. It immediately starts the analysis: it lists all the fields and what they are, then does frequency analysis, finds the IPs causing the DDoS, and looks at traffic patterns and protocol usage.
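The export script it wrote looked roughly like this; the version below is a stdlib-only stand-in (hand-rolled parsing of the classic pcap headers instead of Scapy, and only per-packet metadata rather than full dissection):

```python
import csv
import struct

def pcap_rows(data: bytes):
    """Parse a classic little-endian, microsecond-resolution pcap byte
    string into (timestamp, captured_length) rows; enough for the kind
    of frequency analysis described."""
    magic, = struct.unpack_from("<I", data, 0)
    assert magic == 0xA1B2C3D4, "expected little-endian microsecond pcap"
    offset = 24  # the pcap global header is 24 bytes
    rows = []
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from("<IIII", data, offset)
        rows.append({"timestamp": ts_sec + ts_usec / 1e6, "length": incl_len})
        offset += 16 + incl_len  # skip record header + captured bytes
    return rows

def pcap_to_csv(data: bytes, path: str):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", "length"])
        writer.writeheader()
        writer.writerows(pcap_rows(data))

# Build a one-packet pcap in memory to show the round trip.
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
record = struct.pack("<IIII", 1_700_000_000, 0, 4, 4) + b"\xde\xad\xbe\xef"
rows = pcap_rows(header + record)
```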
It then tries to work out what type of DoS was being used. And again, I can hear you in the audience saying, "Yes, but..." Let's advance here; Google Meet is not a fan of videos. There we go. Yes, I know your SIEM and log-management platforms can also draw pretty graphs and do all of this. But the interesting thing with GPT is that we can also do this: I gave it a different CSV in which exploitation was actually happening, a vulnerability being fired from one machine at another, and said, "Please tell me which vulnerability is being exploited in this traffic." It did the analysis and concluded: there's heavy SMB traffic on port 445, and based on that and on the payload, this is MS17-010, essentially EternalBlue, if I'm not mistaken. And that was indeed the type of attack happening in that pcap. So again: not fully automated, but as a support analyst, GPT is pretty useful.

We are also working on what we call a security copilot. Think about it from our perspective: as I mentioned, we sell cyber insurance, and when you buy cyber insurance from us you get access to an attack-surface-management platform,
but on top of that, because we sold you cyber insurance, we have a financial incentive in your company not getting hacked. That means we need to give security support at scale, and to give you a notion of what scale means for an insurance company: we have 100,000 customers that we need to give security support to, and our security team is about 20 people. So we're turning to AI. We launched a first version of our security copilot, and if you remember the beginning of the presentation, I talked about RAG: we essentially fed the typically asked questions into a bot as a knowledge base. What we found is that it was pretty limited, because any time there was a question that wasn't in our knowledge base, the agent would fail to answer. So we're working on a new version. The new version is based on agents, meaning it has access to Google and is able to look up information, and is therefore not limited to the knowledge we fed it. It also has access to our customer information: when you buy cyber insurance, you have to tell us things like which EDR you use, which backup solution you have, how many office locations you have, and that's all super useful information when giving security advice. I'll try to do a quick demo here. For example, if I say:
"Hello, I have RDP exposed to the internet and I have a SonicWall firewall. I would like to remove the RDP from internet exposure; please tell me how." This is still a super early beta, so it might fail, but we'll give it a go and see what happens. What's happening in the background: the agent is breaking down the sentence, going to the SonicWall website, gathering the SonicWall documentation, and trying to give step-by-step instructions. I can also ask for more detail: "Can you give me step-by-step instructions? What if it's a Fortinet firewall?" We'll see what happens, but essentially it's Googling information about these devices, going to the documentation, downloading it, understanding that we're talking about RDP, learning what RDP is and the port associated with it, and merging all of that into a concise answer. As you can see: navigate to Policy and Objects, IPv4 Policy; it knows the port for RDP is 3389; and it gives step-by-step instructions. This works for any type of question Google might potentially have an answer for. Our objective here is not for this to become the AGI of cyber security. It's that AI takes the tier-one security questions, the basics, so we can have humans focus on tier two and tier three, which are a lot more complex, because we have to think about scalability.
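The copilot's recipe, merging what we already know about the customer with freshly retrieved documentation into one prompt, is roughly this shape. All field names and the doc snippet below are made up for illustration, and retrieval is stubbed out:

```python
# Sketch of assembling a security-copilot prompt from customer metadata
# plus retrieved documentation.

CUSTOMER = {
    "firewall_vendor": "SonicWall",
    "edr": "ExampleEDR",          # hypothetical product name
    "office_locations": 3,
}

def retrieve_docs(query: str) -> list[str]:
    # Stand-in for the agent's search + page-download step.
    return ["To restrict a service, edit the relevant access rule "
            "and set the action to Deny."]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve_docs(question))
    facts = "\n".join(f"- {k}: {v}" for k, v in CUSTOMER.items())
    return (
        "You are a security support copilot.\n"
        f"Known customer facts:\n{facts}\n"
        f"Relevant documentation:\n{context}\n"
        f"Question: {question}\n"
        "Answer with concise step-by-step instructions."
    )

prompt = build_prompt("How do I remove RDP (TCP 3389) from internet exposure?")
```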
CVE analysis. We launched a website, under coalitioninc.com, with vulnerability categorization and prediction modeling, where we essentially try to predict, out of all the CVEs that come out, which ones will have exploit code written for them and which ones will actually be exploited at scale. Where we use GPT for this: if you look at the keywords I highlighted on the right, what we have found is that GPT is really good at removing the noise from the descriptions and pulling out only the important keywords, and those keywords are used as a feature in our bigger model, our ESS model. So that's another example. Questions?
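As a stand-in for that GPT step, here's the shape of the keyword-as-feature idea, with a crude stopword filter doing the noise removal that the model handles far better in practice:

```python
# Toy version of "strip the noise from a CVE description, keep the signal".
# In the real pipeline an LLM does this; a stopword filter shows the shape.

STOPWORDS = {
    "a", "an", "the", "in", "of", "to", "and", "or", "is", "are",
    "via", "could", "allows", "allow", "attacker", "issue",
}

def extract_keywords(description: str) -> list[str]:
    tokens = [t.strip(".,;()").lower() for t in description.split()]
    return [t for t in tokens if t and t not in STOPWORDS]

desc = ("A buffer overflow in the HTTP parser allows a remote "
        "attacker to execute arbitrary code.")
features = extract_keywords(desc)
```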
Yes? I cannot hear you; is there a microphone? Oh, there we go. Who's asking the question? Hand up if you have a question.
Q: You explained that a prompt is basically converted to a numerical vector, sentence by sentence. Am I correct?

A: No. It's the data you want to feed into the LLM: you first generate the embeddings, and the embeddings are stored in the knowledge base.

Q: OK, because I was trying to understand how the conversion to a numerical vector helps in the interpretation process. I was imagining that the size or the direction of the vector would imply some sort of interpretation, and then what came to my mind was: if I finish my sentence with a thought, how would the AI note the difference between ending with a paragraph, which usually means a different topic, and just a dot, continuing the same topic? I'm not sure I'm explaining myself correctly.

A: You are, but those are separate parts of the process. When you send your first prompt, it gets broken down into specific words; those words are used to pull similar content from the knowledge database, and then everything is fed to the LLM, and the LLM handles interpretation. And sometimes, yes, it breaks; they're not perfect just yet. Adding a dot at the end, for example, can sometimes break things.
Q: OK, thank you.

(Right here; poor Paulo has to run back and forth.)

Q: Hey, great talk. I have two questions. How reproducible are your examples? Even with the same prompt and the same code snippet, do you manage to get reproducible answers?

A: Can you repeat that? I couldn't make out the question.

Q: How reproducible is the LLM output when you give it the same prompt and the same sample, for instance code you wanted reviewed? How consistent is the output?

A: Oh. These are generative AIs, so it's not always the case that you get the same thing, and that's where prompt engineering comes in. There are two things you can change: the temperature of the model, which essentially says how wild the model is allowed to get, how far it can wander from what you're asking, and the prompt itself, fine-tuning it by essentially saying, "No prose; output JSON and JSON only."

Q: My question is: is it consistent for you? Do you manage to get consistent results with the same sample and the same prompt?

A: You do, but it requires a lot of prompt engineering. It really requires a lot. And by the way, OpenAI saw that as a problem, and that's why in the latest version they released this JSON mode, because it was something people were really struggling with, getting these LLMs to output valid JSON. So yes, it's a problem, but prompt engineering helps a lot.

Q: Right, because I notice that if I ask an LLM to output JSON, like one out of ten times it just ignores my prompt. Do you notice the same thing?

A: We do.
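The two knobs just mentioned, temperature and a JSON-only output contract, show up as plain request parameters. Here's a sketch of the request body for an OpenAI-style chat-completions call (the model name is illustrative, and the network call itself is omitted):

```python
import json

def build_request(prompt: str, model: str = "gpt-4-1106-preview") -> dict:
    """Request body tuned for reproducible, machine-readable output:
    temperature 0 minimizes sampling randomness, and JSON mode asks
    the API to only return valid JSON."""
    return {
        "model": model,
        "temperature": 0,
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system",
             "content": "No prose. Output JSON and JSON only."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_request('Classify this payload and return {"tag": ...}')
wire = json.dumps(body)  # what would be POSTed to the API
```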
And to give you an idea: in one of these projects, we don't necessarily spend the time training models; it's actually fine-tuning and prompt engineering. We've built systems that launch the same request with different prompts, and over time we monitor the quality of all of those prompts to see which ones are more consistent. You need to have a lot of metrics to successfully deploy something like this in production.

Q: Very nice, I'm definitely interested in this. Second question: have you tried other models, such as Claude 2 or any open-source models?

A: We do; we have some things. We have certain AI projects that we launched before our OpenAI enterprise contract was signed, and to give you a little bit of context: if you're using customer data, do not use the normal OpenAI APIs. Sign up for an enterprise contract, because then you get zero retention and full privacy. Going back to your question: we didn't have our contract signed yet and we wanted to proceed with some projects, so we have an internal bot that handles customer requests, more on the insurance side, that is an open-source model: we fine-tuned FastChat-T5.

Q: All right, thanks.

A: No problem. Any other? I think we have one here, Paulo.
Okay, not your turn yet; you're going to have to wait.

Q: Great talk, thank you. Second talk about AI in two days, so this is pretty fun. I want to ask about the plan to remove humans from the equation, the human acceptance of the results. What safeguards do you intend to put in place to ensure that the AI doesn't just drift off the rails and start putting in MRs that are dangerous?

A: A couple of things. One: any project we start always begins with a human in the loop, and as the humans on my team review these payloads saying yes or no, that is fine-tuning; it's essentially reinforcement learning, fine-tuning the model again. Two: we have a lot of metrics. Specifically, we have canaries, meaning queries where we know what the result should be, and we constantly run them and confirm the result still looks the same. For any project, you always need to build canaries. Three: depending on the project, ideally you use RAG to stop hallucinations, and I'll give you an example. The absolute first AI project we launched looked at the content of a web page and predicted the company's NAICS code, essentially the industry code for that company. We didn't notice for the first few weeks, but the LLM was completely hallucinating NAICS codes: it sounded super certain, but the codes it gave us didn't even exist. What we did to fix that was to RAG in the list of existing codes and tell it, "You are only allowed to pick codes that exist in this list," and that essentially stopped all the industry-code hallucinations. So those are three things you can do to reduce those concerns.

Q: Thanks.

Q: (Next question.) Hello Thiago, great talk. I had the same question the other guy did.

A: Oh, so it's answered; just a matter of time. There you go.
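That hallucination fix, constraining the model to a known list, can be enforced on the output side too. A sketch, with made-up codes standing in for the real NAICS table:

```python
# Belt-and-braces version of "only codes that exist in the list":
# the allowed list goes into the prompt, and the output is validated
# against the same list before anything downstream trusts it.

ALLOWED_CODES = {"541512", "524126", "722511"}  # tiny illustrative subset

def build_classifier_prompt(page_text: str) -> str:
    return (
        "Pick the industry code for this company. You are only allowed "
        f"to answer with one of: {sorted(ALLOWED_CODES)}.\n"
        f"Company website text: {page_text}"
    )

def validate(model_output: str):
    """Reject anything outside the known-code list: a hallucinated
    code returns None instead of silently flowing downstream."""
    code = model_output.strip()
    return code if code in ALLOWED_CODES else None
```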
Okay. But by the way, there is one question I get asked very often, so just a second: "Is this going to replace my job?" If you're in security, you have job safety; you're fine. Because guess what: these things suffer from prompt injection, these things suffer from hallucination, these things suffer from sandbox escapes. Your job safety is absolutely fine; your job might just shift a little bit. Don't worry about it.

Q: Great talk. Going back to the last slide you showed: can you elaborate a bit on the exploitability prediction? The basic methodology, and have you benchmarked it? Can you share some results?

A: Yep. The way this works is that for any new exploit that comes out, we pick a bunch of features and essentially compare those features back to exploits that we know for a fact have been exploited; that's the basis of it. The features we use are Twitter mentions of the CVE ID, these sets of keywords, and the presence of exploits similar to this one in GitHub, in Metasploit, in a bunch of places. If you've used EPSS, it's fairly similar, and our numbers are fairly similar to EPSS in terms of accuracy. Now you might ask me: why the hell did you develop your own model instead of contributing to EPSS?
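A toy of that feature construction. Every signal name and weight below is hypothetical, just to show the comparison shape:

```python
# Hypothetical exploitability features for a CVE, in the spirit described:
# social mentions, PoC presence, keyword overlap with known-exploited bugs.

KNOWN_EXPLOITED_KEYWORDS = {"rce", "unauthenticated", "deserialization"}

def feature_vector(cve: dict) -> list:
    kw_overlap = len(set(cve["keywords"]) & KNOWN_EXPLOITED_KEYWORDS)
    return [
        float(cve["twitter_mentions"]),
        1.0 if cve["github_poc"] else 0.0,
        1.0 if cve["metasploit_module"] else 0.0,
        float(kw_overlap),
    ]

def toy_score(vec) -> float:
    # Arbitrary illustrative weights; the real model is trained, not hand-set.
    weights = [0.01, 2.0, 3.0, 1.0]
    return sum(w * x for w, x in zip(weights, vec))

hot = {"twitter_mentions": 250, "github_poc": True,
       "metasploit_module": True, "keywords": ["rce", "unauthenticated"]}
quiet = {"twitter_mentions": 2, "github_poc": False,
         "metasploit_module": False, "keywords": ["dos"]}

ranking_ok = toy_score(feature_vector(hot)) > toy_score(feature_vector(quiet))
```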
I work in cyber insurance, and we're in a highly regulated industry, meaning that when I file our insurance filings, I need to tell regulators in the US how I'm scoring these companies to price their cyber insurance. EPSS is still very much a black-box model: I don't know, when things go in, how they're going to come out. Here, I know how things work end to end, and that's why we built our own version. We have a new model coming out, potentially at the end of January, that is looking a tiny bit more accurate than EPSS, so fingers crossed. Anyone else? Oh, Denish.

Q: Great presentation, great use cases. I had a question in terms of using customized models versus
the RAG approach, where you feed data into the model, especially taking into account that we now have 128k of context to feed a huge amount of data. Do you see a world where we'll mainly use the context, especially as it gets bigger, and won't need to create those customized models as much, because we can get better at building the context we feed the models?

A: I think it's going to depend on some research that OpenAI and Google have ongoing right now. The whole point of RAG is that you don't have to retrain the model, which is a super expensive process, but they're now doing research into a type of process where you retrain batches of a model and can essentially apply cumulative batches to the original model. If that happens to work, I don't think we'll need RAG anymore: you could just retrain a couple of batches, apply them to the cumulative model, and have a nice model that works for everything. But I think it's highly dependent on what comes out of that.

Q: Thank you.

A: And by the way, again: this only started one year ago, and this is where we are today. I'm super excited about where the next five years are going to take us. Oh, one more question back there.
more.

Hi, thanks, Thiago. You mentioned the challenges of working in insurance and it being a very regulated industry. Have you run into any friction leveraging capabilities like this? Oftentimes regulated industries have to have very simple, explainable models, and that really goes out the window with LLMs.

We do. If you break insurance into smaller parts, you've got risk selection, risk pricing, and risk maintenance. We have no problem whatsoever using machine learning models in risk selection and risk maintenance. It's super troubling to use them in risk pricing, because regulators worry, correctly, about these models being heavily biased towards something that might essentially push some industries or some types of companies to higher or lower prices. So in pricing it is still a bit of a struggle. Any more questions? Oh, I think we have one up there; I was just going to thank you all.

Hi, my question is related to how you deal with data poisoning to keep consistent results.

We don't, because we haven't run into that issue just yet. Right now, for our in-house trained models, we know where the data is coming from and we're the ones training them, so we don't need to worry too much about data poisoning; honestly, it's emails from brokers and emails from our customers, so we're not super worried. On the other side, where we're buying it as a service from OpenAI, that's what we're paying them for, and they should worry about that.

Well, if you're using Twitter data, X, whatever, I can guarantee that data poisoning is an issue you have to look into.

Yeah, but we're using Twitter data not with LLMs and not in this way; it's a feature in a bigger model, so our Twitter data is not used for our LLMs. Understood.

One last question? Anything? Oh, one here in
front.

Just a small question here: you are using RAG to feed your LLM. Have you tested fine-tuning, to see the differences and understand if it could be a better approach in the future?

Yeah, we do and we don't; it depends on the use case. In some situations it is. That's why I try not to be too binary about this, because it really depends on the use case and what you're trying to do. For example, for the open-source models that we wanted to handle just one specific use case, our customer emails, we found that fine-tuning would lead to better accuracy, and it made the model faster as well, because you don't have the delay from dealing with RAG, meaning going into the knowledge base, converting the embeddings, and feeding them into the LLM. So for that we went with fine-tuning. For other things, like the vzo, we went with an agent that combines RAG with agent tools. It really depends; it's an analysis as you're starting a project: you need to think about what the project is that you're working on and what the right tools are to use. There isn't a binary answer, in my opinion.

This will be the last one, I hope. Hey, Thiago, great presentation, thank you. Are you looking to make ESS
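The RAG steps mentioned in that answer (go into the knowledge base, compare embeddings, feed the best match to the LLM) can be sketched roughly as below. The bag-of-words "embedding" and the sample email snippets are stand-ins invented for illustration; a real pipeline would call an embedding model instead.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts. A real pipeline would call an
    embedding model (e.g. an API) and get a dense vector back."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical knowledge base of broker/customer email snippets.
knowledge_base = [
    "policy renewal quote for cyber insurance coverage",
    "vulnerability scan found exposed rdp on customer asset",
    "invoice payment reminder for last month",
]

def retrieve(query: str) -> str:
    """The retrieval step of RAG: pick the most similar document, which
    would then be pasted into the LLM prompt as context."""
    q = embed(query)
    return max(knowledge_base, key=lambda doc: cosine(q, embed(doc)))

context = retrieve("customer asked about an exposed rdp vulnerability")
print(context)
```

The embedding and similarity lookups on every request are the RAG latency the answer refers to; fine-tuning bakes the knowledge into the weights and skips this step entirely.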
available to the general public, or just your clients?

Yes. The plan for ESS is: right now, you don't need to register, you don't need anything; you can just go to the website and browse. We are going to make an API available in January, with no authentication, nothing, open to everyone. I also want to publish a paper to be fully transparent about how ESS works, probably in the first quarter of next year.

Awesome. And are you looking for your clients to be able to give context about their environment, so this could potentially help with patching prioritization, for example?

So, we have a platform called Coalition Control; let me see if I can... let's see if this opens here. Every client gets access to this, and if you go here to security findings, we give a list of essentially the vulnerabilities that company is exposed to. We are going to change this in the first quarter of next year. We've built something called an asset classifier, which does not use LLMs (it uses machine learning, but not LLMs), that looks at the assets we have found for the company and tries to understand the role each asset plays in the company. Then it crosses that data with our ESS scores, and it essentially combines these two things to give vulnerability management and vulnerability prioritization to our customers, for free, in the platform. By
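A toy sketch of how an asset classifier's role label might be crossed with an ESS-style score to rank findings. The role names, weights, and CVE IDs below are all made up for illustration; this is not how Coalition Control actually weights things.

```python
# Hypothetical importance weights from an asset classifier's role labels.
ROLE_WEIGHTS = {"public_web_server": 1.0, "vpn_gateway": 0.9,
                "internal_tool": 0.4, "marketing_site": 0.2}

def priority(ess_score: float, asset_role: str) -> float:
    """Higher = patch sooner: exploit likelihood scaled by asset importance."""
    return ess_score * ROLE_WEIGHTS.get(asset_role, 0.5)

# Invented findings: (CVE ID, ESS-style score, classified asset role).
findings = [
    ("CVE-2023-0001", 0.91, "marketing_site"),
    ("CVE-2023-0002", 0.60, "vpn_gateway"),
    ("CVE-2023-0003", 0.85, "public_web_server"),
]
ranked = sorted(findings, key=lambda f: priority(f[1], f[2]), reverse=True)
for cve, ess, role in ranked:
    print(f"{cve}  ess={ess:.2f}  role={role}  priority={priority(ess, role):.2f}")
```

Note how the highest raw score (0.91) ranks last here because it sits on a low-value asset: crossing exploitability with asset role is what turns a score list into a patching order.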
the way, if you're looking for attack surface management platforms, this one is free: zero cost, sign up, you'll have nothing to lose.

Awesome, thank you so much.

Cool. Well, thank you, BSides. Enjoy the rest of the day. [Applause]