
Hi everyone, happy to be here to do the first talk. For Kirill and me it's our first time at BSides Las Vegas; we've given talks at other BSides and other places. First, a kind of contract between us and you. We will share the slides later if anyone wants them (I rigged them a little bit before the session, so I need to redo that, upload to SlideShare, and share it). And if at any point you feel this becomes a product pitch, because we say Mobb too many times, boo us. Loudly, like "Boo!"
Exactly. That's not the point. We are not doing a product pitch here; we're going to talk about our research, about our work, about what we did. If you later want to know more, of course we'll be happy to tell you. If you like it, let us know: take pictures, post on LinkedIn, make it fun. Let's start. A little bit about myself, and I'll say ahead of time that it's weird for me to stand here standing still, but I can't take the microphone with me, so it will be weird. I was born and raised in Israel; if anyone has issues with that, sorry. I moved to Massachusetts in 2016, where I live with my wife, three kids, and three dogs, sadly two dogs since two weeks ago. I used to run long distance, but anyone who knows the weather in Massachusetts knows you can't run the whole year, so you run half the year and then lose everything. I've been in the AppSec business since 2007: I was a developer, turned product manager, and now I co-founded Mobb and I'm its CEO. If you want to connect with me, I'll be happy to, as long as you don't send me spam and BDR stuff.
And here's Kirill. Hello everyone, I'm really happy to be here, and I really enjoy it.
Thanks a lot for coming here. I've been traveling around the world for the last ten years, and I'm over fifteen years into security and software engineering. Currently I live in Amsterdam with my wife and two kids. Please make some noise for my wife: she has a birthday today, while I'm speaking here. I'm a co-founding engineer at Mobb, where I lead the security research team. (We'll need to drink to that tonight, yes.)
Okay, let's start with the agenda. We have a lot of slides, so I don't expect you to take pictures; again, we will share all the slides. If you want to take pictures, take pictures of Kirill, he looks better. We're going to talk about the problem, the goal, what we were trying to solve with our research, and then the different steps of how we attacked it: how to not use AI, how to maybe use AI, whether we can do the work without AI, and then the mix that we believe is the way forward from our perspective. Yes, as a startup these days everyone needs to say "AI", but AI was never the goal. If you want to solve a problem, the goal is not AI; AI is maybe a way to get to that goal, and that's how we treated it.
Everyone knows this one, right? The lovely DevOps infinity loop, which security vendors also like to present as the DevSecOps infinity loop. From my perspective, DevSecOps is broken. Why is DevSecOps broken? Because you may have integrated security scanning into your pipeline, you may be scanning on every commit, you may have gone all the way into the IDE, but are you actually fixing anything? Most issues that are found get kicked down the road like an old can and are never addressed. The reason I say DevSecOps is broken is that the goal of DevOps was: can we release product to the market faster and with higher quality? Let's ignore CrowdStrike for a second; DevOps did prove that it works. If DevSecOps had the same idea, delivering secure product quickly to the market, I don't think any of us made it, because we find vulnerabilities faster but we're not fixing them. So why waste the money? At the end you get to something like this: all the issues you didn't fix pile up. I'm working with companies that have hundreds of thousands, or even millions, of vulnerabilities in their backlog. Why? Because for 10 or 15 years they didn't fix any of them, and now they can't; you would need to come in with a D9 bulldozer and clear all this mess. So this is the problem, and this is the wordiest slide I've ever had, so don't be worried. The idea is that there is so much work to do, so very much work, that people can't handle it. Over the years they decided: we can't fix it, so we won't fix it. So what? Nothing happened yesterday, nothing will happen tomorrow. But there are winds of change, and software supply chain is a thing. If I'm a software provider and I provide my software to another company, and they provide software, then any vulnerability in my code is now their vulnerability. We run on AWS; if AWS has a vulnerability, that's my vulnerability now. The industry understands this. Between that and PCI DSS 4.0, which by the way will mandate fixing things next year, you can't just sit on your hands and not fix things anymore. And that's what we set out to do. Our goal was to create automatic first-party code fixes. We're not talking about third-party code, not about upgrading a library, but the code that your developers actually write. By the way, any developers in the room? Nice. Security people? Any CISOs in the room? Okay, we'll come talk to you later. So what we were trying to do is, basically, minimize the mean time to remediate.
You see research from Veracode and others: about three months, on average, to fix a high-severity application security issue. Three months, that's crazy. Can we do it faster? That was the first goal. Second goal: we're developers by nature (I used to be one), and we want to help developers get rid of the security problem so they can focus on what they love and what they were hired to do: build cool stuff. We want them to build cool but secure stuff. And third, we want to allow fixing at scale, and this was the hardest challenge for us. Can we actually help a company, say a big bank, that has thousands and thousands of SQL injections?
Yes, they do; if you didn't know, they do. Kind of scary, I know, that someone can get all the information easily, but this is what we're going to focus on, because the other issue types are harder for me to measure. So how are we going to do that, in this session, with AI? Who is old enough to recognize this guy? Cool. Okay, back to it. When we talk about AI, the first thing that comes to mind today is generative AI; that's the narrative. You have ChatGPT, Gemini, all those models, a lot of them, and a lot of people say they perform great on such tasks. So the first thing is a very naive approach, which we also implemented in our research and tried. The naive approach is: you have a template for the prompt, and in this template you have keyword slots. "I have an XSS vulnerability", where you fill in whatever vulnerability you have, "in the following JavaScript code", or TypeScript, or Java, whatever you use; then triple backticks and the vulnerable code sample; "and the vulnerability is in line N. Fix it." You ask the LLM to do something. Here on the screen you can see a very typical XSS injection, a reflected XSS. We take data from location.href, which is the URL in the browser bar, concatenate it with text, and put it into the body of the page. So if an attacker puts a script tag into the URL, you're going to have trouble. Let's see. By the way, this is a screenshot from ChatGPT, and for all the slides here I'm using the latest ChatGPT. The slides were created about two weeks before the conference, and all the prompts and responses are pretty much the same today; I'll also have links to the prompts. The answer from the LLM is very good in this case. It transformed the string concatenation into creating a new H1 element and setting its textContent to the href. It's a good fix; no vulnerability is possible here anymore. But for the sake of research we want to try more than one sample, right? So here is exactly the same prompt as on the previous slide, but this time we have a reflected one-click XSS, which is less common but still a very common type of issue. We create an anchor tag and set its href from window.location.search, and look at how the location search is parsed: we just remove the question mark from the URL and put the rest directly into the href. You can recognize the typical problem again: put `javascript:alert(1)` in there and your code is vulnerable to a one-click reflected XSS. Let's see how the LLM fixes it. And yeah, there are many problems here. It's a lot of code, so I'll highlight the things we're focusing on. The first problem I see: the LLM uses a search-params parser to read a parameter called `url`. It wasn't like this before; remember, it was just splitting on the question mark. So something suspicious is going on here. Second, the variable called safeURL: how does the LLM actually validate the URL? It just creates a new URL constructor, which doesn't prevent XSS injection or anything like that. And, to be 100% sure it's still vulnerable, I copy-pasted the code from the LLM directly onto example.com and exploited it in one second. So it is vulnerable. A quick recap of what happened:
First of all, the LLM ruined the application logic. The website relied on the fact that whatever comes after the question mark becomes the back URL, part of the href, but the LLM decided that a `url` query parameter looks better than just the question mark. So now we have a broken application, because the backend probably renders it a different way, and other parts of the application render it a different way. The second problem, obviously: it doesn't fix the vulnerability. As we've seen, the injection is still possible. So: fail. Let's try to figure out what happened. It was exactly the same prompt, exactly the same scenario. Why was the LLM successful one time and not the other? I don't know the answer, because an LLM is a kind of black box for now. But here is a really nice example. You have two prompts that are identical for a developer: "I need a nodejs static file server, use pure nodejs, no additional npm packages." The second prompt is exactly the same; the only difference is an uppercase N and a dot between the "node" and the "js". If you sent these two texts to your developer, they would produce exactly the same result. So let's have a quick look at how ChatGPT performs on the first prompt. (I removed half of the code, so it's not the complete sample; you can follow the URL and check it out if you're curious.) How many Node.js developers are here? Okay, some of you, thanks. The general idea: you create an HTTP server, and in the callback you work with `request.url`, which is basically whatever comes after the slash of your domain in the URL. This is obviously direct user input, and it can introduce a directory traversal vulnerability. But in this case the LLM did a really good job: it uses path.normalize, which resolves or removes the `../` segments, and then a replace pattern to cut any `../` segments remaining at the beginning of the URL. So sanitization happened here. I wouldn't do it this way; if I were implementing this code I would do it differently. But it's a good solution, it works, it's not vulnerable. Yet for exactly the same prompt with the uppercase letter and the dot, we suddenly have no validation at all. Why? To me it's the same text. I should also say that different models may behave differently: you have a tokenizer before the model, and different model architectures may normalize the meaning of the prompt and do better. But at least ChatGPT, which is I guess the most popular today, makes this mistake.
A short story: we started doing automatic remediation before ChatGPT came to life, and when it suddenly appeared in November or December 2022, we were scared. Do we even have a startup? I was about to go to Black Hat, and I told the team, the entire team, all three people: stop whatever you're doing and go research it, so we'll know whether we still have a startup, or whether we should close, or whether there is a technology to use. Luckily, it doesn't work. We did the research starting with GPT-3, then 3.5, then 4, and validated it. We wanted to see how good OpenAI's models are at fixing code vulnerabilities, so we gave them an easy task: two well-known applications, WebGoat and OWASP Juice Shop. We actually assumed they were in the training data, but whatever. We scanned them with two SAST providers and started asking OpenAI to fix the findings. We gave it a decent prompt, better than what most developers would probably write, and focused it on the issue. Still, the results were extremely underwhelming. The research is also published on our site, but the idea is: out of the fixes (there were 104, I think), 29% were good fixes. Not necessarily following best practices, but, like the fixes Kirill just showed, they do the work. The problem, by the way, is that if you fix something without following best practice and run the SAST scan again, it will probably report the issue again, because it won't recognize the fix.
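As an aside, the naive template we keep referring to is nothing more than string substitution. A minimal sketch in Python (the wording and the function name are illustrative; our actual research prompts were more detailed):

```python
CODE_FENCE = "`" * 3  # triple backtick, built here to keep this snippet readable


def build_naive_prompt(vuln_type: str, language: str, code: str, line: int) -> str:
    """Fill the naive 'fix it' template: vulnerability type, language,
    the vulnerable code sample, and the line number to fix."""
    return (
        f"I have a {vuln_type} vulnerability in the following {language} code:\n"
        f"{CODE_FENCE}\n{code}\n{CODE_FENCE}\n"
        f"The vulnerability is in line {line}. Fix it."
    )


prompt = build_naive_prompt(
    "XSS", "JavaScript", 'document.body.innerHTML = "Hi " + name;', 1
)
```

That is the whole "approach": no context about the application, no guidance on how to fix, just the finding dropped into a sentence.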
The re-reporting is a SAST limitation, but let's give it that. Then in 19% of the cases it didn't ruin the application, and it actually touched the code in the right place, which is good, but it didn't fix the issue, and sometimes it even introduced new vulnerabilities. You would say: hey, why is that not the worst? Wait. The worst was the remaining 52%, which were just bad: GPT changed code in unrelated areas, or did things like creating a new method called validateDomain, and when you look down, there's a template of validateDomain that takes an input and has a comment inside: "do the sanitization here". So it gave you no help at all. And it's not just that. GPT, well, AI in general, has some more challenges you need to address. One is the context window: the information you give the AI in order to fix something has to be small, because it's expensive when it's not. Yes, I know there are models that can take millions of tokens; that would be extremely expensive, and we can't build a solution on that. The other one (can you still hear me okay?) is parsing the LLM output. You want to do something automatic with it; it's not a developer going to a chat and fixing things by hand. As I said, we were looking at doing things at scale, and parsing did not work as expected: you tell the LLM, hey, generate JSON, generate something else, and it doesn't always comply. And then large files: the more information you give GPT or the others, the more chances it will hallucinate. It tries to help you so much, in areas you didn't even ask about, and it's a mess you need to clean up later. I'll do a quick stop here, because you seem tired and we've just started; maybe we're boring. But I want to make sure you understand this is not just our interpretation. We actually tested it, and I'll show you. I like this slide because we're making fun of ourselves and others. For example, we asked a model (not GPT; this one was Mixtral, I think) to fix a SQL injection. Look at the code: there is no code change at all, just a comment. Or this one, where we asked it to fix a hardcoded domain: the code change is fine, it replaced HTTP with HTTPS, great, but it has nothing to do with the problem. So you may say: okay, Eitan and Kirill, you seem like fun guys, but you're not smart enough, you didn't build a good solution; there are really excellent companies out there that built something with AI to remediate. Let's shame them,
now, without showing any logos, because we actually like them. So here is one. When you look at this one, does that look like they fixed a hardcoded secret? It's not. And I'm not ditching them; it's really hard to do these things, and asking the AI to do it is even harder. This one, for example: fix a SQL injection. Oh, you just deleted the vulnerable line. Great, problem gone, right? If you have vulnerable code, delete it, no problem. Then one of our favorite tools, which I couldn't mask because everyone would recognize it anyway, and it is one of the best ones we saw: it did a fix here, and why, or rather why did it decide to delete this line? No idea. Moving on. One of the things we knew early on is that you cannot fix everything; you cannot fix things that require architectural changes. For example, fixing a weak cipher is very easy: replace it with a strong cipher. Done? You just broke your application, because you fixed it in one place, in one component, and ten other components are still using the old algorithm. And it's not just us; these are only the tools we got our hands on. There are other tools out there, and I invite you to try them. Maybe they figured it out; we haven't found one so far. So let's review the goals we had. Were we able to minimize the MTTR? Yes, but only if the person using the tool already knows how to write the fix in the first place; so, not so much. Can we fix at scale? Definitely not: every fix needs to be vetted and verified, and so on and so on. So let's move to the next step: how to maybe use GenAI. When we have a fix, the first thing is a custom prompt for every vulnerability, and often for different code patterns, because the same vulnerability, the same issue type, the same XSS, in different code patterns will require different fixes. The second thing:
carefully pick the context. You want to give the model only the minimum it needs in order to fix, because, as I said, the more you give, the more chances it makes mistakes; so remove everything you can. And, well, everyone who talks about AI has to mention RAG, retrieval-augmented generation. Basically, the more context you give, the more you tell the AI "this is how I would do it if I were you, Mr. AI, so try to do it this way," the better and the more consistent the results will be. And last, when the AI generates the code, we want to make sure it still compiles, because we know it makes mistakes in the generation itself. Now look at what Kirill showed earlier: the same example, just a different background. The AI failed to fix it earlier. Now we told it to fix it, but this time we told the AI how to fix it: please fix it by validating the URL. We look at the results and: great success! We finally cracked the code; we figured out how to use AI for automatic remediation. So what do we need for that? Custom prompts. The downside: for every code pattern you need to understand it and write a different custom prompt. Not only that: you need to recognize that the code you're looking at now is a different pattern, a new pattern, and that it needs a new custom prompt. The second thing we mentioned is context. I don't want to give the AI the entire file, because then it will make mistakes. So can we ask the LLM to fix a specific method instead of the entire file? Yes, we can, but think about the taint flow. For those who understand how static analysis works (I won't go into it, we don't have that much time): it goes from source to sink, from the place where user input enters your application until it reaches its final destination, and it may pass through multiple methods in a large file, and sometimes through more than one file. So I can't just give one method to the AI and say "fix it". Second, the source of the vulnerability can even be a static variable on the class, so you won't see it in the method you send. And the fix may require adding imports; if I give the model only a method, where will it put the imports? I want to take all of that and automate it in my product. So, with custom prompts, did we meet the goals we set at the beginning? Did we minimize the MTTR? Mostly yes. We got to a place where most of the fixes were accurate and didn't require a lot of checking, but you still need to check every single fix. Again, you may tell me: Eitan, you don't look that smart, maybe you don't understand how to use this AI. So we went to the company everyone loves, GitHub. In their article about their new automated code remediation tool, they said it can generate fixes for more than 90% of the vulnerability types, which is awesome. Think about it: for 90% of the vulnerability types you have, they can provide a fix. Not only that: over two-thirds of those fixes can be merged with little to no edits. Sounds good, right? Anyone think it doesn't sound good? Some say they don't, and the reason, the way I read it (and maybe I'm a hater), is this: one-third of the fixes, you can do nothing with, and the other two-thirds you need to check one by one. So did we solve fixing at scale? No. Did we help developers? Yes: you gave them an answer they can check, and two-thirds of the time you saved them a lot of time. But it's not at scale. We've talked a lot, and I'm trying to be loud and lively, but some of you are slowly sinking into your chairs, so let's do a small activity. I'm showing you some code here. Does anyone know where this code comes from? I'll give a Mobb shirt, or a hat, to whoever knows. No one? It came from WebGoat. Now, this code has a vulnerability; I'll make it bigger so you can see. It basically takes an account name as input, concatenates it into a SQL query, and just executes it. Obvious SQL injection, right? Simple. So here's what we did (this one, by the way, is not from two weeks ago; I created it a while ago). I asked ChatGPT: can you fix the SQL injection in this code? And ChatGPT did an amazing piece of work. It told me: this code is indeed vulnerable to SQL injection, due to the fact that the account name is concatenated into the query; what you need to do is use a prepared statement with the account name. Great, and that's what it did. Now, the problem is that this fix will also break your application. Anyone have any idea why? I'll make the problematic parts bigger again. Do you see any problem here?
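(For reference, here is the prepared-statement pattern ChatGPT applied, transposed to Python and sqlite3; the code on the slide is Java, and this schema and function name are hypothetical:)

```python
import sqlite3


def get_user_data(conn, account_name):
    # Parameter binding instead of string concatenation: the driver
    # sends account_name as a bound value, so nothing in the input
    # can change the structure of the query.
    cur = conn.execute(
        "SELECT * FROM user_data WHERE userid = ?", (account_name,)
    )
    return cur.fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_data (userid INTEGER, first_name TEXT)")
conn.execute("INSERT INTO user_data VALUES (101, 'Joe')")

# The classic payload is now just an odd-looking literal, not SQL.
rows = get_user_data(conn, "101 OR 1=1")
```

Note that the bound value now reaches the database as data, with a type of its own, rather than as query text.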
What? No, no: the login count is actually the good part. The account name is the problem. The account name is being assigned to the user ID, and if you look at the table schema, userid is actually an integer. How could the AI know that? It doesn't have access to the schema. Well, actually, the AI should have known, because the account name is not surrounded by single quotes, so it cannot be a string; if it were a string, you would see a single quote here and another single quote there. And most developers (not every developer, but most) will look at this code fix and say: oh my God, it saved me time, commit. If they have regression testing, great, it will break there. If they don't have regression testing, production will break. So this is a problem. Now I'll pass it back to Kirill.
Yeah. This is a little bit off topic for the presentation, but I really wanted to share this bit of knowledge with you. We already mentioned the problem that it's really hard to parse the output of an LLM. You ask the LLM, "give me JSON", and it may look like perfect JSON, but at some point, if you run it in production, you will see in the logs: failed to parse, failed to parse, invalid character. Why? Because the LLM does not always obey what you tell it to do. Sometimes you ask it for JSON and it's not JSON. Sometimes it also changes unrelated parts of the code. On this slide you can see the same code sample from one of the first slides, but here the first line is indented with tabs, the tab character, and the second line with spaces. Developers do that, I don't know why, but sometimes it happens. And when we open a pull request, we don't want developers to see changes unrelated to their vulnerability. Our goal is to change as little code as possible, because that builds the developer's trust: they can read the change and understand why we changed it. So we don't want the LLM to touch the formatting at all. And here is the trick, how we figured out how to parse LLM responses without asking for JSON or other formats: we simply add a pipe symbol at the beginning of each code line, and we add this tiny, simple instruction to the prompt: keep the pipe symbol at the beginning of each code line. The results are really good. It never messes up the formatting anymore, and it's also super easy to parse in Python.
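A sketch of that parser (the function name is mine; the only convention is the pipe prefix just described):

```python
def extract_code(llm_response: str) -> str:
    """Keep only the |-prefixed code lines of an LLM response, dropping
    prose, backtick fences, and language tags, while preserving the
    original indentation (tabs or spaces) untouched."""
    return "\n".join(
        line[1:]  # strip only the leading pipe
        for line in llm_response.splitlines()
        if line.startswith("|")
    )


# A typical chatty response: prose around the code, pipes on code lines.
response = (
    "Here is the fixed code:\n"
    "|\tconst name = params.get('q');\n"
    "|  el.textContent = name;\n"
    "Hope this helps!\n"
)
code = extract_code(response)
```

Note how the first extracted line keeps its tab and the second keeps its spaces, byte for byte.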
In Python you basically need four lines: you split the response by newline and check whether the first symbol of each line is a pipe; if it is, it's a code line. Again, this helps preserve the original code formatting. It helps ignore the additional text from the LLM: the LLM will often say "here is the fixed code for you", then a triple backtick, "javascript", a newline, then the piece of code; and sometimes there's no "javascript", or no backticks, it can be whatever. And it avoids JSON, because JSON was a huge pain for us when parsing LLM responses. I hope this will be useful for you.
So, we've already talked about two approaches today: one is the basic template, which does not work, and the second is custom prompts, more sophisticated ways to query the LLM with more and more precise context. But let's talk about something else: what if we don't want to use AI at all? This is actually implemented in many places today. It's slow to implement, but you can see some famous names here, like Mobb and ESLint. I'm mentioning ESLint on purpose: it's a code quality tool that all JavaScript developers are familiar with, and it has an amazing feature, `--fix`. With `--fix`, when ESLint sees a smelly pattern, a bad pattern in the code, it just replaces it with the good pattern. That's exactly what we want for security issues. I also want to mention OpenRewrite, a Java refactoring tool: you can write your own rule for how to change the code, and it will walk through all your files and change the code for you. And Semgrep, of course, has automatic remediation: when you create a Semgrep rule, you can specify the way to fix the finding. What's common to all of these amazing tools is that they all work the same way. First, they parse the code into what's called an abstract syntax tree; we'll talk about that in a minute, but those of you familiar with code analysis have probably already looked at ASTs. Second, you need to understand what your vulnerability is and where it's located, which means parsing the vulnerability report. You're going to need that for any of these approaches anyway, because you have findings from your SAST provider and you want to fix their signals, so you have to parse their reports. Then it's kind of easy: you match the original code, in its AST form, to the report; you figure out the code pattern, what actually needs to change, what to replace with what; and you apply the changes in some form to deliver them to the developers, for example by opening a pull request. (Whoa, okay, ten minutes, I see.) So yes, you need to deliver it somehow to developers. First, a quick look at what an abstract syntax tree is about. Our ultimate goal is to replace a vulnerable line of code (from the first slide, I guess) with a non-vulnerable line of code. In this case we want to apply DOMPurify as a sanitizer; it's a typical JavaScript library for removing dangerous text from input. And yeah, I know this looks super cumbersome, but this is what an abstract syntax tree looks like: a lot of nodes connected by edges. It's a tree, which is why it's called a syntax tree. Each node represents a part of the code, and each edge represents a relation between nodes. Here, the entire string is an expression statement (in terms of, I believe, the tree-sitter parser), the assignment expression is "something equals something", and on the left side you have an object-dot-property access.
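As an aside, you can poke at the same structure with Python's standard `ast` module; Python stands in here for the JavaScript parser on the slide, and `sanitize` is a hypothetical placeholder for DOMPurify:

```python
import ast

# The Python analogue of `el.innerHTML = userInput`: an assignment
# whose left-hand side is an object-dot-property access.
tree = ast.parse("el.inner_html = user_input")
assign = tree.body[0]            # the Assign statement node
target = assign.targets[0]       # the Attribute node: el.inner_html

assert isinstance(assign, ast.Assign)
assert isinstance(target, ast.Attribute) and target.attr == "inner_html"

# The fix is a tree rewrite: wrap the right-hand side in a call to the
# sanitizer, then unparse the tree back into source text.
assign.value = ast.Call(
    func=ast.Name(id="sanitize", ctx=ast.Load()),
    args=[assign.value],
    keywords=[],
)
fixed = ast.unparse(ast.fix_missing_locations(tree))
```

The wrap-the-right-hand-side move is the same DOMPurify rewrite described here, just in a different language's AST.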
And what you want to find is basically the innerHTML, which is what the SAST provider reported: the SAST provider says this innerHTML introduces the vulnerability into your code. You then take the right-hand side and wrap it in DOMPurify: you create another AST node, a call expression whose identifier is the DOMPurify function and whose argument is whatever was there before you started working with the code. So, kind of easy. You also need to parse the vulnerability report. Sometimes it's straightforward. I really appreciate the SAST providers who use the SARIF format, which is the de facto standard today (the Static Analysis Results Interchange Format), and I don't appreciate the SAST providers who keep introducing their own custom formats. It's not only about parsing XML, custom JSON, or protobufs; it's also that data is sometimes missing. You may not have the code position, or the line position, or the file information. (Yeah, yeah, I'm speeding up.) From the report you also need to figure out the source and the sink, which Eitan already mentioned, and eventually the best place to apply the fix. And voilà: you have the code changes, you render them as a git diff, and you send it to your
developer um and yeah let's do some conclusions here the reviews uh basically it helps indeed minimize mttr because it gives solid advice it helps to fix it scale because it works similar to es link which is I mentioned before but it's for us for the platform which is doing fix it's very hard to uh maintain the data like the the res like the the code fixers uh for each code pattern and it requires a strong security team which we have uh um to produce the good fixes really good fixes which not break the code which not and which um like satisfy the standards today and it's really hard to scale unfortunately as you migrate from your check marks to sneak and you
want to add new sus provider suddenly or you migrate from your JavaScript to your go I don't know and uh suddenly all the work should be done again um so and uh this is the final so it's it was the like ladder of three different stages and this is the last stage we have today it's how we actually believe correct way to use AI today we call it hybrid AI because it's a mix of two approaches the pure algorithmic and the second one which is the custom prompting and I'm going to show it on uh on a n de reference vulnerability um which is reported by fortify scanner and uh it may be the reason like some people say it's the
reason for the CrowdStrike incident. I don't know if that's true or not, but let's see. You can see getenv here; it's C, by the way, not C++, but the idea is the same. getenv can return a null pointer if the CMD variable is not defined, so the second line will break the code, because cmd can be null. And the automatic fix is to wrap the vulnerable line in an if statement that just checks that cmd is not null. And it's good, but it's good and easy for easy cases. Some cases are not that easy.
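The easy getenv case just described can be sketched in Python, where `os.environ.get` returning `None` plays the role of `getenv` returning a null pointer; the function names are ours:

```python
import os

def run_cmd_unsafe():
    cmd = os.environ.get("CMD")
    return cmd.strip()          # crashes (AttributeError) if CMD is unset

def run_cmd_fixed():
    cmd = os.environ.get("CMD")
    if cmd is not None:         # the automatically inserted guard
        return cmd.strip()
    return None                 # variable unset: nothing to run

os.environ.pop("CMD", None)     # make sure CMD is not defined
print(run_cmd_fixed())          # None instead of a crash
```

The generated fix is the same shape as the C one in the talk: leave the vulnerable line as-is, but wrap it in a guard on the possibly-null value.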
Here the vulnerable line is settings, and settings is actually a dictionary, and inside the dictionary you have an object, and the object contains a field that contains an array, and that's another dictionary. So try to figure out what if condition is needed for this line. Instead of writing out algorithmically how to consider all those cases and create the code that will produce the fix (I'm talking only about the if condition here), we can ask an LLM. And actually, for such simple tasks the LLM doesn't need any context, doesn't need anything; you basically give it a very solid statement of what you want, and it produces amazing responses. Although sometimes the LLM introduces additional components you don't expect, like an else statement.
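That hybrid step can be sketched like this: the model's reply (faked here as a fixed string standing in for a real LLM call) is parsed, and only the if condition is kept, so any unexpected else branch is simply discarded; all names and the sample reply are ours:

```python
import ast

# Stand-in for what an LLM might return when asked only for a guard
# condition; note the extra else branch we never asked for.
fake_llm_reply = """
if settings.get("obj", {}).get("items", [{}])[0].get("flag") is not None:
    do_work()
else:
    handle_missing()
"""

def extract_condition(llm_code: str) -> str:
    """Parse the model's reply and keep only the if condition,
    dropping both branches and anything else the model added."""
    tree = ast.parse(llm_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.If):
            return ast.unparse(node.test)   # just the condition
    raise ValueError("no if statement in the model's reply")

print(extract_condition(fake_llm_reply))
```

Because the reply goes through the same AST parser as the customer's code, the algorithmic side stays in control of what actually lands in the fix.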
In this case it wasn't in the original code, but you don't care, because you can do the same procedure you do for the source code: you put the response into your AST parser, extract just the if-condition data, and incorporate that into the customer's code. Okay, so I'll try to go fast; we probably have two minutes, and if not, whatever. Were we able to reduce the mean time to remediate? Yes, we were. If you don't trust us, you can try it; you can build a solution like this on your own. That's why we also showed how to parse the JSON and all
that. Yeah, we were able to reduce the mean time to remediate, and the fixes follow the same pattern all the time. Yes, two minutes, two minutes, everyone. But the problem we had before was that it was really, really slow to add rules. It's still not fast. It's not "hey Kol, fix this"; it's more "hey Kol, let's build the rule to fix this." It takes a day, two days sometimes now, but it's faster to develop the fix, and this is where we wanted to get. So from our perspective, this is a well done; this is a time when you can clap if you want. But we didn't finish yet, so let's wait with that.
Summary. Okay, so let's remember what AI is in general. AI is a glorious copy-paste, right? That's all it does. It has a huge place to copy from; it copies, and it pastes it in a nicer place. So, as Lis Lust says, and I'm probably butchering the name, think of a model as an overeager junior employee that blurts out an answer before checking the facts. It knows where to copy from; it's not sure it's the right answer that you want, but hey, I gave you an answer, so great. So what do you need to do? It is a great tool, but it requires supervision, and it's not predictable, it's not deterministic; you can't just automate the hell out of
it it helps save developers time even with the very basic approach it helps but you can never trust J I know that insecurity will have to say trust but very I think that in AI inse security you need to say verify and then trust and then verify maybe but you can't just run any questions yes so in this model new tools that are generating Solutions I keep thinking about false positives on just the static analysis detection that if you you know there's a I mean clearly you want that to be low but there's going to be some percentage of false positives and false
So I'll repeat the question quickly. Everyone knows that SAST is notorious for having a lot of false positives. I got a stop sign, but I'll still go. Notorious for false positives. What is a false positive? Usually it means that the code is bad but not vulnerable, in most cases; sometimes there is no problem in your solution at all. What is not broken you shouldn't fix; you can't fix good code. But if the code is bad, you should fix it anyway, even if it's not vulnerable. And the reason is that if you have Copilot in your organization, it learns from your patterns. It will take the bad pattern and put it in another
place, because, as we said, it's a glorious copy-paste. So fix the false positives. Yeah, and not only Copilot: developers love copy-paste too. So fix it. I mean, the alternative is doing the manual triaging, persuading someone that it's a false positive; you're wasting a lot of time. Just fix it and be done with it, if you can. We will be outside if there are more questions, because I'm getting the nasty eye here and we need to leave the room now. [Applause]