
BSidesCharm 2024 - Who’s going to secure the code our army of robots are going to be writing?

BSides Charm · 44:08 · 47 views · Published 2024-06 · Watch on YouTube ↗
About this talk
LLMs are allowing developers to write increasing amounts of code with the same old vulnerabilities. Security is already hopelessly outnumbered, but we're barreling towards a future with no practical oversight. The only way to keep up is with AI security engineers. This talk will illustrate the scale of the issue, discuss new and original research, and walk through open source tools for building your own AI helpers.

Presenter: Arshan Dabirsiaghi

Arshan is a security researcher pretending to be a software executive, with many years of experience advising organizations on code security. He has spoken at conferences like BlueHat, Black Hat, and OWASP, and definitely wrote his own bio. He is also a co-founder of Contrast Security, a cybersecurity unicorn focused on vulnerability discovery through runtime instrumentation. He now serves as CTO of Pixee, where he is done finding and asking about security issues; he is just fixing them for you.
Transcript [en]

[Music] Thank you for having me, BSides. My name is Arshan Dabirsiaghi. I was born at GBMC, right there down the street, and I've lived in this area my whole life, within 10 minutes of here, so it's awesome to be able to speak here. I've been in AppSec, where coding meets security, my entire career, ever since I became an adult and had an official career in information security. In the beginning I was focused on memory corruption kind of stuff, and then I moved up the stack toward AppSec, and I witnessed a lot of changes over that time.

But I think there are some more drastic changes coming that we need to be thinking about. I've made tools, I've done consulting, code reviews, pentesting, threat models, anything you can imagine in and around that space; that's my background. So, everybody is probably familiar with GitHub Copilot. It's a really cool tool. If you haven't seen it: you write some code, and Copilot predicts the next line you want to write, and sometimes it predicts the next four or five lines.

I started using it whenever it became available to me, and I really like it. For me it feels like it's right, I don't know, 50% of the time, and it feels like a modest boost to my throughput, to my speed; it doesn't feel super transformative. But there have been studies now. McKinsey and Microsoft have done studies, and I trust McKinsey a little more than Microsoft on this: McKinsey says it is leading to a 25% improvement in throughput. That's relatively serious. In a lot of jobs, if you could suddenly do 25% more, you have my attention.

It was the most widely adopted AI tool, and then ChatGPT came along and destroyed that, but it still has relatively strong adoption, and most enterprises I talk to are going through the evaluation now of bringing in tools like this. And the IDEs, like VS Code and IntelliJ, now have another interface for talking to the LLMs, what they're calling assistants: you chat with something inside the IDE and it drafts not just an autocomplete but a whole file, a whole function, a whole something more.

There have been no results released from any studies about what the throughput gain from these types of tools is, and we don't know what the adoption is yet. Has anybody used the assistants in the IDE? Yeah, it's not as popular yet, but you can give it a try in lots of IDEs and it's interesting. I wouldn't say it's great yet, but you can see where it's going, and I have no doubt they will get there.

And then if you think about what's next: a couple of weeks ago there was a tool released called Devin, to great acclaim on your social networks. The idea is that Devin sees a ticket posted to your GitHub and tries to take on the whole task of what a developer would do: it makes a plan, it lists what it thinks the requirements are, and it tries to execute against those requirements. And if you looked very closely at the end of the GitHub Universe keynote last year, the Steve Jobs "one more thing" at the very end was something like Devin, where you would take an issue, assign it to Copilot, and Copilot would try to make whatever changes to the repository to make you happy.

So if we think about what the additional code throughput is going to be, from the autocomplete to the thing I just described, it seems like every business analyst's fever dream can become reality by asking Copilot to go do it for them. The throughput of code, I think, is shortly going to be pretty crazy. Anybody think that's [ __ ]? What part of it, Devin? Oh, you think Devin is [ __ ]? Yeah, there's a little bit of smoke there. I trust GitHub to execute a little bit better.

But there are some tools like this that are open source and do a little bit of it, like Aider, and there are a couple that are definitely going in the right direction; it's going to be a wild world here pretty shortly. Another very important thing to throw into this mix is that LLMs write insecure code and then lie to you about it. There have been a bunch of studies now, there was even one yesterday that just reached the pre-print server, about how well these things understand vulnerability classes.

Every vulnerability class is different, and I'll tell you, they do a great job at SQL injection, probably because SQL injection is the most widely discussed vulnerability class out there; the training data is just overwhelming, so they know it really well. But if you ask about a vulnerability that's even slightly less popular, they do not really know what it is, they don't know how to reason about it, and the fixes they suggest are wrong. There was a study where they gave developers some programming exercises, and a lot of the time the model came out with the wrong answer, an insecure piece of code.

Ironically, when you asked the developers afterwards, they were more sure of the security of the computer-generated solution than of the human-written solution. And of course, if you ask the LLM "is this secure?", the LLM says yes, and it comes up with three paragraphs of inane babble about why it's secure, and the developer can't really sort out the difference, because they're not cross-trained on all these security issues. So we're about to create a lot more code, and this isn't five years away, it is happening, and the code is less secure, with more confidence from the developers. That's not good.

You might wonder why it's vulnerable, if you haven't been thinking about this too much. GitHub, and really anybody who trains models, takes human code, which has bugs in it, and the model learns how to write code based on that; so the LLM suggestions have those same bugs, because the model is trained on insecure code. I know y'all have had the experience where you go to Stack Overflow, you look at something, the insecure answer is the top one, and then you go to the next one and somebody says, "hey, actually, this other thing is better."

But developers don't know, because they don't know what security question they should be asking themselves about that thing in order to know they need this other security control involved. And the LLMs want to make developers happy; they don't really care about security. So if an LLM gives you a secure answer and its competition gives you an insecure answer that is what you expect, developers are going to vote with their feet for the one that gives them what they want and expect. So there's this perverse incentive where the models are sort of incentivized to produce insecure code at an individual level.

So the question I asked myself was: can't the models just generate secure code? Like I mentioned before, they don't really understand the vulnerability classes and they can't reason about them, SQL injection again being the one example they're really good at; the rest, they really can't handle. And by the way, a vulnerability is an emergent phenomenon across lots of different places: some of it is in your code, some of it is in your libraries or frameworks, and some of it is in the runtime itself. Understanding all of this and reasoning about all of these different places is just science fiction for a model right now; they can't remotely do that.

Even if you look at the most recent research, where models have a million-token context window, meaning you can fit a lot of code into a question you ask an LLM, they can search through that space, but they can't effectively reason about such a deeply complicated vulnerability path through that code. It's worth pointing out that I've even made tools on the market that are purpose-built to do this, and it's extremely tough even when what you're looking for is super well understood.

So to ask an LLM to do it with incomplete information, in the very weird black-box way in which it works, is quite difficult, and it's very slow; you generate answers at the speed of inference. The static analysis tools we all use today are extremely fast, they've been optimized for 20 years and have had thousands of man-years put into them, and they still take hours to scan reasonably sized code bases.

We've boiled that problem down about as efficiently as we think possible to solve statically, so the idea that we just throw it at an LLM doesn't pass the sniff test. I talked about this a little already: I don't think developers will reward the secure model output, and here's an example. The text is super small, sorry about that, but when a developer writes XML parsing code in Python, they just want to say xml.parse, and the secure way to do it, to prevent a vulnerability class called XML External Entities (XXE), is to use this thing called defusedxml.

When a developer looks at that, if they have those two options: what the hell is defusedxml? The API design was not intended to communicate to the developer what the advantage of using this API over the other one is. So if they have to choose between the two, the developer is just going to choose xml.parse, because that's more of what they're expecting.
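
To make the slide's contrast concrete, here is a minimal sketch of the two options, assuming Python's standard library and the third-party defusedxml package (the file name is just an illustration):

```python
# The habitual way: the standard library parser, which historically could be
# combined with or configured into parsers that resolve external entities (XXE).
import xml.etree.ElementTree as ET

tree = ET.parse("report.xml")  # "report.xml" is a hypothetical input file

# The hardened way: defusedxml refuses entity expansion and external entity
# resolution by default, so the same call shape is safe against XXE tricks.
import defusedxml.ElementTree as DET

safe_tree = DET.parse("report.xml")
```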

Okay, so this is a chart put out by PagerDuty, and it lists all of the security activities we do in AppSec. It starts on the left with threat modeling, and then there's more continuous stuff like SAST, IAST, and SCA; those tools are running, we're doing code review, there are tools giving us feedback on our infrastructure and containers and runtime stuff, and there's a lot of human review in all of these places. PagerDuty put an "M" where things are completely manual, meaning those activities are manual start to finish. However, if you get static analysis results, somebody has to look through them, and then somebody has to fix them, so that's manual too.

If you make that your definition of what should have an M on it, there are M's all over this freaking thing. When I did consulting for companies, I knew what kind of resources they had, and they didn't have that many humans to cover this many M's. So what we would do is rank our applications from least critical to most critical; the internet-facing ones that touch sensitive assets are the ones that got any real human attention at all. So we're already only really paying attention to, let's say, 10% of our inventory.

And by the way, these M's are not just coding: there's product management discipline, project management discipline, all kinds of stuff that goes in here; it's not just one skill set. A lot of the tension between development and security occurs because we're not really good at the other person's job: we don't know how to empathize, we don't know how to communicate, we don't know how to estimate, so things sound like they should be easy for the other side to do when actually they're kind of hard. So: lots of M's here, from lots of different roles.

I think I mentioned we don't have enough humans. There are a lot of reports saying developers outnumber security 100 to 1, and I feel like that number gets worse the bigger the company: the AppSec program at a giant enterprise might have 50 people and 20,000 developers. The numbers just don't work out; security is drastically outnumbered today. And the humans we have aren't cross-skilled in each other's areas: developers aren't good at triaging vulnerabilities because they're not super familiar with the vulnerability classes, and security doesn't really know what the fix for these things should be, because they're not developers.

So again, what ends up happening is we only cover a few parts of our portfolio. So what's going to happen here? I'm waiting for somebody to tell me what the solution is going to be. I pulled a number out of thin air earlier, and I should have referenced it: if Copilot is giving us a 25% throughput boost, what are these other tools, these other copilots that are more like junior engineers, agent engineers, going to give us? Is it a 500% boost? Nobody knows, but it feels like a hell of a lot more, a lot bigger than 25%.

If I just pick 500% out of thin air, I think that's conservative. But with all those M's we have in that chart, who's going to be doing the work? Who's going to be doing the code review on the code the robots write? Who's going to be looking at the static analysis results? Who's going to be reviewing the bug bounty reports that come in? Nobody knows; nobody has an answer to this question.

So we have to orient around how we live in this world, because I guarantee you, if I said to you one day, Mr. or Ms. Security Person, "I'm going to 5x your workload," I'm pretty sure that would be a surprise, and it would result in some disastrous new trade-offs you're going to have to make and risk your customers are going to have to absorb.

And companies are not going to decide, "well, we're just not going to use Copilot because it introduces too much risk." They're going to use these tools because they have to be competitive; that's just the world we're in. They're not going to cut off a major tool for themselves. So it's happening. What can help us scale? One thing that helps is the concept of paved roads. This is an upfront investment we make in our architecture and our APIs where we make it really easy, or actually make it the only possibility, to introduce new features through a paved road that has security built in.

For example, say you have a web app you want to add a new API to. If the only way to add a new API, the way your platform has been built, is to use X, and X always provides authentication, then you have to opt out to create an unauthenticated endpoint, and that should set off red alarms for review. This is a common thing; we all have controls built in like this, and we want to make it hard to be insecure.
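
As a rough illustration of that kind of paved road, here is a sketch assuming a Flask app; secure_route, is_authenticated, and allow_anonymous are hypothetical names invented for this example, not an existing library API:

```python
# Hypothetical paved road: the only sanctioned way to add an endpoint is
# secure_route(), which attaches authentication by default. Opting out
# requires a loud, greppable flag that reviewers and bots can alert on.
from functools import wraps

from flask import Flask, abort, request

app = Flask(__name__)


def is_authenticated(req):
    # Placeholder check; a real paved road would verify a session or token here.
    return "Authorization" in req.headers


def secure_route(path, *, allow_anonymous=False, **options):
    def decorator(handler):
        @wraps(handler)
        def wrapped(*args, **kwargs):
            if not allow_anonymous and not is_authenticated(request):
                abort(401)
            return handler(*args, **kwargs)

        return app.route(path, **options)(wrapped)

    return decorator


@secure_route("/orders")  # authenticated by default
def list_orders():
    return {"orders": []}


@secure_route("/health", allow_anonymous=True)  # explicit, reviewable opt-out
def health():
    return {"ok": True}
```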

Let's go to the next one. This is another common one we see. Cross-site scripting is a very popular vulnerability; it's very easy to make that mistake and create that vulnerability in your code. But if you use REST APIs and a data format like JSON, and you do those in the right way, it's hard to be vulnerable to that class of stuff. So that's a paved road: we follow this pattern, and then something interesting happens: the robots want to make your new code look like your old code, so they do follow the patterns you have.
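
A small sketch of why that pattern helps, again assuming Flask; the routes and the front-end rendering are made up for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


# Hand-building HTML from user input invites cross-site scripting.
@app.route("/greet-html")
def greet_html():
    name = request.args.get("name", "")
    return f"<h1>Hello {name}</h1>"  # a name like "<script>...</script>" would run


# The paved-road version: return the data as JSON and let the front end
# render it as text, so the browser never interprets the input as markup.
@app.route("/greet")
def greet():
    name = request.args.get("name", "")
    return jsonify({"greeting": f"Hello {name}"})
```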

Even today, just with Copilot: if there are three tests and you've written the first two, it kind of knows what the third one will look like; it knows the format, it knows the APIs to use, so it'll copy the patterns you use. So the paved roads will, at the same time, be teaching our robots how to work with us. This is also true of your pipelines: if the only way to have a pipeline is for it to also include static analysis, great, there's no way to get around that.

The bigger a company you are, the harder this is, because if you're an enterprise that makes seven acquisitions a year, you're inheriting all kinds of technology of every type, pipelines of every type, different security levels, so this is much easier said than done. But it will help: this is an investment we make once and we get paid back for it, the robot will start copying our patterns, and we make it hard to be insecure.

Something else that really works is the class of technologies called runtime application self-protection. I used to work on one of these at my old job, and I'm super bullish on this space. When I was first coming up, in the year 2000, we had a different worm every week: the Conficker worm, the ILOVEYOU worm, the Slammer worm. The reason they stopped happening wasn't because developers learned how to write secure code, and it wasn't because we made new APIs and trained everybody, blah blah blah. What happened was we added protections to our toolchain and our compilers.

We added these stack cookie things that made buffer overflows hard to exploit; we randomized the address space layout of our operating systems so that exploits wouldn't know where to jump when they took control of the program's instruction pointer. There were lots of things we did to the runtime and the tooling and the toolchain so that even if you found an exploitable bug, it's very hard to turn it into reliable code execution. I lived through the era where all you had to do was find one memory corruption primitive to pop a shell.

Today, if you find a vulnerability in Chrome, if you pop Chrome, they give you a [ __ ] award called Pwnium, and it's worth like a million dollars, because you really need to chain together like seven different bugs: you need a sandbox escape, you need some arbitrary read of four bytes here; I mean, they're all works of art and they're all way beyond me. So our strategy there was not "let's teach our way out of this problem." The idea was to build things in that made exploitation more difficult, and that's what runtime application security tools do: they instrument sensors into the runtime.

They'll add sensors to your custom code, your libraries, your framework, all the places, and we can create little firewalls. If you remember Log4Shell, that was a vulnerability where, when an exploit occurred, you would be inside a logging statement and then somehow you would be running a system command afterwards. There is absolutely no reason those APIs should be able to call each other. You can do that kind of thing with these sensors, actuators, whatever: we can instrument your code so that the first thing we do in the deserialization code or the shell-popping code is check whether we're within the scope of a deserialization event or a logging event, and if so, just stop it. Those kinds of behavioral rules, those higher-level rules, are easy to specify and very hard to avoid, so for an attacker to bypass this is quite difficult.
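
As a toy sketch of that kind of behavioral rule; this is illustrative monkey-patching, not how a real runtime protection product instruments an application:

```python
# Wrap the "shell popping" API so it refuses to run while a logging call is
# anywhere on the current stack, i.e. the Log4Shell-style rule described above.
import inspect
import logging
import subprocess

_original_run = subprocess.run


def guarded_run(*args, **kwargs):
    # Walk the live call stack; if any frame belongs to the logging module,
    # we are inside a logging event and a shell command makes no sense.
    for frame_info in inspect.stack():
        if inspect.getmodule(frame_info.frame) is logging:
            raise RuntimeError("blocked: shell execution during a logging event")
    return _original_run(*args, **kwargs)


subprocess.run = guarded_run  # the "sensor" is installed by monkey-patching
```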

Here's an example of what we have: a user provides some input that looks a little sketchy. Let's compare this to a web application firewall, which is traditionally the solution for this type of problem. When does "boom" happen for one of these application-layer attacks? It happens within the app. If the firewall has to make a decision about this input that early, when it sees the web traffic, it's going to be wrong a lot, and the second it's wrong and blocks legitimate traffic, it's getting turned off, or turned down to log-only mode; that's why most of our WAFs are in log-only mode. It's too far away; it doesn't have enough context to decide whether this is an attack or not.

However, if we have sensors in the SQL driver, we can check the input. We can do a single-pass, static, semantic analysis: turn the query into an abstract syntax tree and actually see whether the input we saw come into the system appears in this query, and whether the data they gave us escapes into a code context and changes the meaning, the structure, of the query. That's something an upstream security tool cannot do; it can only happen within the application itself, where "boom" is actually happening. So this type of protection makes it really hard to exploit.
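
Here is a rough sketch of that structure check, assuming the sqlparse library; real runtime protection tools track the tainted data through the application rather than re-rendering a template like this:

```python
# If adding the user's input changes how the query tokenizes (new keywords,
# new comparisons), the input escaped its data context and should be blocked.
import sqlparse


def query_structure(sql):
    # Reduce a query to its sequence of non-whitespace token types.
    statement = sqlparse.parse(sql)[0]
    return [tok.ttype for tok in statement.flatten() if not tok.is_whitespace]


def looks_like_injection(template, user_input):
    benign = query_structure(template.format("placeholder"))
    actual = query_structure(template.format(user_input))
    return benign != actual  # different shape: input changed the query's structure


template = "SELECT * FROM users WHERE name = '{}'"
print(looks_like_injection(template, "alice"))        # False
print(looks_like_injection(template, "' OR '1'='1"))  # True
```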

So our first strategy was to make it hard to be insecure; the second strategy was to make it hard to exploit insecure code. There are probably ten things we could talk about, but we don't really have time. One of the things we're going to need to do is automate the human interruptions from our security tools. If we're giving the robots the ability to generate all this code, and all this code is going to be part of our attack perimeter without the humans there, we're also going to need to automate the activities that happen in response to those interruptions from those tools.

For example, if a static analysis tool finds something, there's no magic preventing us from using robots to look at that vulnerability, look at every step in the chain, and decide what to do. It's a slightly different question: take the attributes of the vulnerability the tool is telling me exists, compare that to the knowledge that's been built up about the repository, and decide whether it's a true positive or not. If it's a true positive, actually issue a pull request to fix it; if it's a false positive, triage it, shut it down in the dashboard of truth or whatever.

So we can use robots to help with this problem as well. This is where I spend most of my days researching solutions, and there are a couple of tools here to look at. One of the tools we work on, an open source tool to help in that particular problem space, is Codemodder. Codemodder is a framework for building codemods.

Codemods were an idea that originally came out of Facebook, for doing mass refactoring of code. It was like a Perl or Python one-liner: you'd give it some code that looked like A and it would change it into code that looked like B. It kind of died within Facebook, and then the JavaScript community brought it back: they made a library called jscodeshift and they use it to automate migrations. So if you're upgrading React, they give you a codemod, a code modification, that changes all your APIs to use the new APIs. I looked at this and I was really astounded: why don't we use this to solve more problems?

The reason is that it's very difficult to be expressive about the code you want to change. There are two sides to this: the code you want to find, and what you want to change that code into. These tools were all taking an ocean-boiling approach, writing all new logic for expressing both of those things. We took a different approach and just made it easy: we made an orchestration library that connects tools that are great at querying code, that are very expressive about how you find different things, tools you use every day to find problems in your code, and we stitch those together with the community-loved, idiomatic solutions for mutating source code: things like JavaParser, or LibCST for Python, or jscodeshift.

We just made it really easy to connect those two things. So if you want to scale with the robots and you have a consistent problem coming out of these tools, you can write a codemod that says: every time the tool says X, I want you to fix the code with Y. Here's an example, a super simple one; don't ask me too many questions about it, because I'm not a Python guy and somebody else wrote this. What you can see here is that we've stitched together a Semgrep rule with a LibCST function. In those triple quotes we've written a Semgrep rule that finds random calls, and our library does the magic of connecting that to LibCST: we give you a callback, every node that matches the rule gets sent to this function, and every time we see a call that matches, we replace it with secrets.SystemRandom, add the missing import, and Bob's your uncle.
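
The slide itself isn't reproduced in the transcript, but a minimal standalone sketch of the same idea, written directly against LibCST rather than Codemodder's actual API, might look like this:

```python
# Illustrative only: replace insecure random.random() calls with
# secrets.SystemRandom().random(). A real codemod would be driven by the
# location of a Semgrep finding; here we simply match the call shape ourselves.
import libcst as cst


class ReplaceInsecureRandom(cst.CSTTransformer):
    def leave_Call(self, original_node, updated_node):
        func = updated_node.func
        if (
            isinstance(func, cst.Attribute)
            and isinstance(func.value, cst.Name)
            and func.value.value == "random"
            and func.attr.value == "random"
        ):
            return updated_node.with_changes(
                func=cst.parse_expression("secrets.SystemRandom().random")
            )
        return updated_node


source = "import random\ntoken = random.random()\n"
rewritten = cst.parse_module(source).visit(ReplaceInsecureRandom())
print("import secrets\n" + rewritten.code)  # the missing import is handled naively here
```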

we'll add the missing import and Bob's your uncle so we have in again we've invested I don't know how many thousands of of manur in these in these existing tools we cannot build them again that would be an absolute waste of time so we have to take the results of those tools make it really easy to work with uh a make you know with with the new um code mutation technology we have as well so um here's yeah here's some of the links for that again you can look for code Moder and you'll see uh see a lot of that stuff on the web so we have to figure out as a community I I believe

we're barreling towards a sort of climate change level uh event with uh with these tools uh and the amount of code that they're producing and again it's it's it's probably less secure if we had to summarize the research from all these organizations and developers are yet paradoxically having more confidence in them uh than the code that they write themselves and so the model we cannot model increase uh the quality of the models to get our way out of this for a few reasons uh we also talked about so you know I'm uh we need to figure out how to make code hard to uh you know hard to hard to be insecure and then when it's insecure make it hard to

exploit and try to automate the discovery process from a lot of these tools along the way that's my pitch thank you [Applause] any questions

Yeah, the paved road strategy is really about the patterns developers use. I'm a developer, so I'm not saying anything pejorative here, but we do the same five things over and over again when we make an app: we make a REST endpoint, it takes X and spits out Y. The concept of the paved road is that there's one way to do that, one way to take X and give back Y, and that's how we structure our API; but the way you take X and the way you give back Y have security built in: authentication, access control, validation, output encoding, logging, error handling.

All of those cross-cutting security concerns are baked into the API. This requires a massive investment from the team to build an architecture that forces you to do things the right way, even if you don't understand it or don't understand the need for it: we don't care, you need to provide a role here, an access control rule. It's really in the libraries we build for ourselves; that's where you do that. You can also abstract the concept into other things, like a golden container image: one container that everybody uses, which strips out the useless artifacts nobody needs.

So there are ways you can try to scale that to other places, but traditionally the concept has meant there's a right way to do things, and the right way, the only way, has all those security concerns baked into it.

Yeah, I think there's a lot of low-hanging fruit that is just objectively better, where we don't have to argue about it. What you may have to argue about is whether it's exploitable, which, the more my career goes on, I believe is a giant waste of time, but developers sometimes do make us go through that exercise. If we don't make any claims about that, if we just say we can't chase down whether this is exploitable, because automation has a really difficult time making that assertion confidently, it is still no doubt better for you to use parameterized queries instead of unparameterized queries.

So for me, it's: we could spend time fighting about it, or you can just click the merge button. That's one way to attack it, to say we're not going to argue about it, we can just fix it and everybody will be better off. There are certainly other vulnerability classes where it's a lot more difficult to say "I know exactly what the fix here should be," and I think people will get themselves into trouble and be wrong a lot there. But I think there's enough low-hanging fruit in the OWASP Top 10 kind of stuff that there's a lot of opportunity for no-brainers.
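
A minimal illustration of that particular no-brainer, using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "' OR '1'='1"  # hostile input

# Unparameterized: the input is spliced into the SQL and changes its structure.
rows = conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()
print(rows)  # every row comes back

# Parameterized: the driver keeps the input as data, never as SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # nothing comes back
```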

Yeah, I think it's going to be a lot like my whole life in security: the business is going to do what they're going to do, and I'm going to have to automate the verification of it. Verifying a robot's code means doing the same activities. If we scope it down to just the static analysis part, we can still scan it and find some classes of vulnerabilities. Scanners don't find everything; they don't find access control and authentication stuff, stuff that's specific to your business, so that sucks, and we'll still have that gaping hole.

But maybe you can make a bot where you tell it the rules for your company, like, "hey, block any PRs that introduce a new REST API endpoint that doesn't have an authentication annotation on it." It's a very deep-in-the-weeds example, but I think we have to figure out some gates that let us know when manual intervention is really needed, and I don't think we have the tools for that yet.
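
As a very rough sketch of that kind of gate; the decorator name, the rule, and the diff-scanning shortcut are all hypothetical:

```python
# Toy PR gate: flag any added Flask route in a diff that isn't preceded by an
# authentication decorator. A real bot would parse the AST of the changed
# files; scanning the added diff lines keeps the illustration short.
import re

AUTH_DECORATOR = "@require_auth"  # hypothetical paved-road decorator


def endpoints_missing_auth(diff_text):
    added = [line[1:].strip() for line in diff_text.splitlines() if line.startswith("+")]
    missing = []
    for i, line in enumerate(added):
        if re.match(r"@app\.route\(", line):
            decorators_above = {d for d in added[max(0, i - 3):i] if d.startswith("@")}
            if AUTH_DECORATOR not in decorators_above:
                missing.append(line)
    return missing


diff = """\
+@app.route("/admin/export")
+def export_all():
+    return dump_everything()
"""
print(endpoints_missing_auth(diff))  # -> ['@app.route("/admin/export")']
```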

Yeah, it's a good question: is there a paradox there? I might have some bias in the people I talk to, but among the people I talk to at the big enterprises, there was a lot of hesitancy when Copilot was first released, and now they've all changed their tune. I don't know whether the business changed its tune or the security teams changed theirs, but everybody's going to be doing this. And if you think about quality as the little micro-patterns of code, it spits out high-quality code in the micro context, but in the macro context it can't reason.

So it looks like good code: it doesn't have the code smells a junior developer would produce, unused variables or the other obvious tells that the code is bad. It won't have those, but yes, it will be subtly bad. And there's an interesting question behind your question, which is: why do we do the security activities that we do? I'm sure there are a lot of people in security who are very cynical about that, like, we do what we need to do because compliance tells us to, or because it's what's expected by the board, or whatever. So I can't answer that question, but I do believe everybody is going to be doing this, right or wrong, no matter how puritanical they may be about the quality of their code.

Yeah, that's a good question. A lot of those terms are being used interchangeably, everywhere, by everyone. Typically, "copilot" is used as a general term for an agent that helps you perform a task you already do today, but quicker, better, more efficiently; that's generally what people mean by copilot. And I do think we're going to need copilots that ride alongside the code generation tools, that help developers secure their code and help eliminate the interruptions. The form those copilots would take, if they're in GitHub, is what GitHub would call a bot; that's the form factor, that's how GitHub describes the automation available in GitHub. But when I say bot, I really mean any AI agent generally.

Copilot has sort of taken over the term, and now anytime anybody says "copilot" they think about GitHub's Copilot, or something that's going to help you draft whatever your unit of work is, but we're going to try to take the term back, I think. Any more questions?

Oh, good question. So rather than having multiple LLMs, or having the one giant LLM? Do you mean literally in the code-generation part of the process? Because what I'm proposing is that we're going to have an LLM that's really good at understanding how to act on SCA results and automate that. It'll probably still be built on the same foundational models everybody uses, because it's not possible for all of us to create our own foundational models, but yes, we will be using fine-tuned models for a lot of these actions, all over the place. But I don't believe that will help us write more secure code; I don't think it'll help the copilot tools, like GitHub's Copilot, write more secure code.

All right, thank you very much. [Applause]