
So hi everyone, welcome, good morning or good afternoon. Our speaker is Nivi, right? Okay, sorry. This talk is titled "The Vulnerability Deluge: How to Dig In," but before we get into that I'll quickly go over some housekeeping. We have 60 minutes scheduled for this talk, and as you may have seen with other talks, the platform will automatically cut off at the end, so we'll do our best to stay on schedule and address any questions at the end. You can use the Q&A or just ask questions in the chat on the right-hand side. We do encourage you to stop by and explore the expo with the sponsors.
They'll have additional information and career opportunities there as well. For those who are local to the area, we have an open invite for anyone with a BSides NoVA ticket at a location called Punch Bowl Social in Arlington, Virginia, not too far outside the DC area; that begins at six o'clock this evening. And with that, I'll hand it over to our speaker.

All right, thanks Trevor. Hi everyone, my name is Nivedita Murthy. A little bit about me: I'm a senior security consultant at Synopsys, based out of Boston, Massachusetts. I started in security thanks to the movies and sitcoms that inspired me, beginning with security operations,
where I worked in network, incident, and application security. I moved into vulnerability triage, covering SAST, pentests, secure code reviews, and risk assessments, and finally into DevSecOps. I've been working on DevSecOps implementations for the past three years now. When I'm not working I like to travel a lot and be outdoors, but 2020 didn't let me go where I wanted, so I just discovered the New England area in general. If I'm not traveling, I like to take pictures, especially landscapes, and I'm a voracious reader; this year's goal is actually 36 books.

So what's the agenda? In today's session we will first look at the different sources of vulnerabilities: where do these vulnerabilities come from,
what do they contain, and then how and where do you store them? Once you have this data, how do you see through it? Are there strategies to handle this volume and make sense of it all? One would obviously want to trace down the root cause of these vulnerabilities and fix it, and finally, how do you generate sensible metrics, and how do you best present them?

Let's first understand how a vulnerability is discovered. Any finding from a security assessment is considered a vulnerability. Security assessments include static testing, dynamic testing, software composition analysis, penetration testing, mobile testing, container security testing, infrastructure testing, bug bounty programs, and so on. The vulnerability could belong to the application, to a
component of the application, or to the infrastructure where the application is hosted. Static, dynamic, mobile, and penetration testing focus on application-level issues, issues that have been created by a developer. These tools identify the known locations of the source and the sink, and the lines of code that cause the vulnerability itself. In some cases the vulnerability may be the same across all tools; for example, a SQL injection can be found through SAST, and it can also be found through DAST and a pentest as well. These tools and assessments report back with the locations and CWEs of the findings. The CWEs provide a common ground among different tools for the same finding. For example, one tool may call it "Blind SQL" while another calls it "SQL Injection: Blind." As you can see from the titles, if a human looks at them they're the same, but if you do a regex or an equality comparison they won't match; CWEs actually help you bridge that. And sometimes the same vulnerability, for the same CWE, has different categories. Say, cross-site scripting reflected and stored (probably; I haven't verified, so do not quote me on this) could have the same CWE, but the titles may differ, obviously, because it depends on where that flaw manifests.
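As a tiny illustration of that point, here is a sketch with two hypothetical findings (not output from any real scanner): matching on title fails while matching on CWE succeeds.

```python
# Two hypothetical findings for the same flaw, reported by different tools.
# The titles differ, so a naive string (or regex equality) comparison fails.
finding_a = {"tool": "ToolA", "title": "Blind SQL Injection", "cwe": 89}
finding_b = {"tool": "ToolB", "title": "SQL Injection: Blind", "cwe": 89}

# Title equality fails even though a human reads these as the same issue.
same_by_title = finding_a["title"] == finding_b["title"]

# The CWE ID (89 = SQL injection) provides the common ground across tools.
same_by_cwe = finding_a["cwe"] == finding_b["cwe"]

print(same_by_title, same_by_cwe)  # False True
```

Real pipelines also need fuzzier matching (location, category), but the CWE is the anchor.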
Then there are network scanners, software composition analysis, container scanning, and infrastructure scanning tools. They provide a mix of results: vulnerabilities known in the wild, that is, public ones, and configuration checks that should be applied to make things more secure but are missing. While config checks can be fixed by you, publicly known vulnerabilities have to be fixed by the vendor or component owner itself. Also, infrastructure vulnerabilities can be shared across applications; rarely would you see an application hosted on a dedicated environment. In general with infrastructure we expect consistency in how environments are set up. With Docker images and container environments especially, this is pretty much true: you repeat the same thing with just a few variations, but the base image remains the same across all environments, with the same versions of operating systems, products, frameworks, and so on.

So as you can see, there are a lot of sources, a lot of places where vulnerabilities exist and can be found. It's not just the number of locations; the volume of vulnerabilities coming from even the same locations is pretty high. It is difficult to wade through it one by one, and progress can be slow. Organizations now need tools to manage it all and find ways and means to reduce the volume.
Tools that can help analyze the most likely location to apply a fix, one that addresses not just one location but all related locations; tools that help aggregate and de-duplicate vulnerabilities, since they are the same or ultimately boil down to the same issue, like missing input sanitization or a missing configuration. Now, higher-volume data requires higher processing power to analyze, so this information cannot live in flat files. You need tools that support querying into such data, including highly complex queries. You can use databases to store vulnerabilities from different tools. Ideally, first design your database schema: determine what information you want to capture and how you want to normalize the data coming in from different sources. Some tools may provide source, sink, and CWE; some may not. For example, because a pentest is a manual effort, the reports may not have all the information you're looking for that automated tools typically provide, such as the impact, the remediation effort, or a confidentiality rating; that kind of metadata is not provided in a pentest. So you have to look into all of these sources and figure out what information is important for ultimately generating metrics. You also need metadata like the source of the vulnerability itself and the dates discovered and closed, and you should maintain a history to generate trends. If you are willing, and you do have the bandwidth, money, and time to put effort into this, go for it. Otherwise, you could use data aggregation tools such as Splunk to collect this data and create trends and analysis on quantities from different sources. Note that Splunk and other aggregators don't have the capability to de-duplicate the data, which is what helps reduce the noise and volume to make it easier to handle. However, these tools are pretty useful when it comes to analyzing trends and giving a quick overview of the level of risk involved.
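If you do build your own store, here is one minimal sketch of such a schema; the column choices and the in-memory sqlite3 backend are illustrative assumptions, not a prescription.

```python
import sqlite3

# In-memory database for illustration; a real deployment would use a
# persistent server and a schema tuned to your own tools and metrics.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE findings (
        id              INTEGER PRIMARY KEY,
        source_tool     TEXT NOT NULL,   -- SAST, DAST, SCA, pentest, ...
        cwe             INTEGER,         -- may be NULL for manual pentest findings
        source_location TEXT,
        sink_location   TEXT,
        severity        TEXT NOT NULL,   -- normalized across tools
        date_discovered TEXT NOT NULL,
        date_closed     TEXT             -- NULL while the finding is open
    )
""")
conn.execute(
    "INSERT INTO findings (source_tool, cwe, source_location, sink_location, "
    "severity, date_discovered) VALUES (?, ?, ?, ?, ?, ?)",
    ("sast", 89, "LoginController.java:42", "OrderDao.java:88", "high", "2021-01-15"),
)

# Keeping discovered/closed dates makes the trend queries straightforward later.
open_count = conn.execute(
    "SELECT COUNT(*) FROM findings WHERE date_closed IS NULL"
).fetchone()[0]
print(open_count)  # 1
```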
You could also use analytical or reporting tools such as Power BI, Cognos, and so on to generate the same analysis and trends; basically a business intelligence reporting tool, because the concept of big data applies here. The advantage is that you can apply your own de-duplication algorithms at the database and generate reports after they are applied, without spending overhead on building these tools yourself. Or you could just use tools that provide all of these functionalities in one: they import the results, de-duplicate and normalize the vulnerabilities, and provide a few templates for creating reports. Tools such as ThreadFix, DefectDojo, ZeroNorth, and Synopsys Polaris (which works only with Synopsys products) provide these capabilities. Know that if you are looking for very custom reports, these tools don't provide that; the alternative is to use the analytical tools I mentioned before, which can run over either the databases served by the reporting tool or your own database schema and algorithms.

All through this I've been using the word de-duplication, so what does it mean? De-duplication is the process of collapsing duplicate vulnerabilities into one. Now, the key term here is duplicate: how do you identify a duplicate? In some scenarios it's the CWE with the sink as the combination; in some
cases it's the CWE, source, and sink; and in some cases it is actually the location of the fix that determines whether something is a duplicate of a root-cause issue or not. So if across tools you find the same vulnerability, instead of showing multiple instances of the same thing, it is shown as one. This is pretty useful especially when creating tickets for developers to fix issues, in Jira or Bugzilla for example. Creating a ticket for each vulnerability from each tool results in a lot of tickets and is definitely not an efficient use of developers' time. Aggregating and de-duplicating them helps everyone focus on the main issue and the fix, not the volume.
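A rough sketch of that de-duplication step, keying on (CWE, source, sink); the key choice is an assumption here, and as noted, some programs key on the fix location instead. The findings are hypothetical.

```python
from collections import defaultdict

# Hypothetical findings: two tools report the same underlying flaw.
findings = [
    {"tool": "sast",    "cwe": 89, "source": "login.py:10", "sink": "db.py:55"},
    {"tool": "dast",    "cwe": 89, "source": "login.py:10", "sink": "db.py:55"},
    {"tool": "pentest", "cwe": 79, "source": "search.py:7", "sink": "view.py:21"},
]

# Group duplicates under one key so a single ticket is filed per root issue.
deduped = defaultdict(list)
for f in findings:
    key = (f["cwe"], f["source"], f["sink"])
    deduped[key].append(f["tool"])

print(len(deduped))  # 2 unique issues instead of 3 tickets
```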
Normalization also requires you to reconcile severities across different tools. For example, Checkmarx does not have a severity called critical, so some vulnerabilities that are marked critical in other tools are marked as high in Checkmarx. You should do a thorough evaluation of all the vulnerabilities being identified and the severities assigned before working with your data.
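One way to sketch that normalization step; the per-tool mappings below are placeholders you would replace with the results of that evaluation.

```python
# Map each tool's native scale onto one organizational scale. The "toolA"
# scale has no "critical" level, mirroring the Checkmarx situation described
# above; these mappings are illustrative assumptions only.
SEVERITY_MAP = {
    "toolA": {"high": "high", "medium": "medium", "low": "low"},
    "toolB": {"critical": "critical", "high": "high",
              "medium": "medium", "low": "low"},
}

def normalize_severity(tool: str, raw: str) -> str:
    """Translate a tool-specific severity into the common scale."""
    return SEVERITY_MAP[tool].get(raw.lower(), "unknown")

print(normalize_severity("toolB", "Critical"))  # critical
print(normalize_severity("toolA", "High"))      # high
```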
Now that you have the vulnerabilities normalized and stored, it's time to generate reports. The first kind of report that everyone wants, and should be looking at, is the count of all vulnerabilities by criticality. Organizations definitely want you to look at the criticals and highs first, and then the mediums come next in priority. Unless the organization has matured quite a bit in its AppSec process, low-severity findings are rarely triaged; as a result, the volume of low-severity findings is comparatively larger than medium or even critical and high-severity findings. For example, on the left-hand side you see a simple chart of the kind reporting tools will show you; this is just a demo. On the right-hand side you see what actually happens once you include the tool in your environment and it starts filling in with details. As you can see, the number of mediums is high here, but that's probably because we have triaged criticals and highs, while we haven't triaged mediums, or lows for that matter, because we don't have the bandwidth to look into them right now and the development team hasn't reported back. So it depends on the focus. We do get a lot of vulnerabilities, but when you compare the ratios, criticals and highs tend to be on the lower side while mediums and lows tend to be higher, and the triage focus tends to be on critical and high vulnerabilities, while medium and low-severity vulnerabilities are generally not triaged in the first phase. The low-severity vulnerabilities are generally low-hanging fruit, like verbose banners, missing declarations, or bad practices like dead code and incorrect string comparisons. So while generating vulnerability counts is a great idea, comparing the raw counts with each other does not generate much value. Having counts by criticality also helps point your remediation efforts in the right direction. If the number of
critical vulnerabilities is not as high as you would expect in terms of volume, for example hundreds or a few thousand versus tens of thousands, you can include some of the high-severity vulnerabilities in your prioritization as well. The key word here is "expect," and the key number is the expected ratio. Hundreds of critical vulnerabilities found in just 10 apps is a lot, versus the same count across 500 apps; the number of criticals depends on the number of applications in your scope. For criticals, a ratio of 1 to 10 per app is low, 10 to 30 is moderate, and anything beyond that is high. Note that when it comes to criticals, in ninety percent of cases the time to fix these issues is not more than a week, and that includes testing. There are some that take longer and require additional team effort and funding, but the majority don't take more than a week to fix. For high severity, a ratio of 1 to 50 is low, 50 to 200 is moderate, and anything beyond that is considered high. You can of course apply your own ratios based on where your organization stands in terms of AppSec maturity; know that this ratio factor is something you have to determine in order to prioritize remediation efforts.
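Those rough bands could be sketched like this; the thresholds are the ones quoted above and should be tuned to your own AppSec maturity.

```python
def ratio_band(count: int, app_count: int, low_max: float, moderate_max: float) -> str:
    """Classify a per-app vulnerability ratio into low / moderate / high."""
    ratio = count / app_count
    if ratio <= low_max:
        return "low"
    if ratio <= moderate_max:
        return "moderate"
    return "high"

def critical_band(criticals: int, apps: int) -> str:
    # Criticals: 1-10 per app low, 10-30 moderate, beyond that high.
    return ratio_band(criticals, apps, low_max=10, moderate_max=30)

def high_band(highs: int, apps: int) -> str:
    # Highs: 1-50 per app low, 50-200 moderate, beyond that high.
    return ratio_band(highs, apps, low_max=50, moderate_max=200)

# Hundreds of criticals across 500 apps is low; the same count in 10 apps is not.
print(critical_band(500, 500))  # low
print(critical_band(500, 10))   # high
print(high_band(1000, 10))      # moderate
```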
Another common analysis is vulnerabilities found by category. You have reporting templates for this, such as the OWASP Top 10, the API Top 10, or PCI reports. Is your organization besieged by SQL injections, or XSS, or weak passwords, or passwords in code? Or is it privacy violations, or CSRF, or session configuration issues? This type of analysis helps you understand the key risk areas in your organization and what is lacking in terms of security. Does it mean the developers are seriously lacking in secure coding skills, if even today the bulk of your vulnerabilities are XSS or SQL injections? Do they need a higher level of training to handle higher classes of vulnerabilities and write better code? Or does there need to be a change in process, for instance in how peer review is done, to ensure better coding standards are applied? Some categories indicate a lack of infrastructure that can support a secure way of implementing a piece of functionality. For example, developers will store passwords in code or config files if they don't see or know of a password vault, and missing secure transport indicates that database or SMTP servers are not SSL/TLS enabled. In such cases the security teams have to step in much more actively to provide solutions to developers.
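A minimal sketch of that category rollup; the labels are hypothetical (for example, CWEs already mapped onto an OWASP-style taxonomy).

```python
from collections import Counter

# Hypothetical category labels for a batch of normalized findings.
categories = [
    "sql_injection", "xss", "xss", "hardcoded_password",
    "xss", "sql_injection", "missing_tls",
]

# The most frequent categories point at the organization's key risk areas.
top_risk_areas = Counter(categories).most_common(2)
print(top_risk_areas)  # [('xss', 3), ('sql_injection', 2)]
```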
A report that I really like is the type and count of vulnerabilities discovered and closed in the last 30, 60, or 90 days. Say you are an organization that has successfully set up the required security scan controls to ensure scans are done at the right times, and you have provided the tools and methods for developers and teams to identify vulnerabilities in advance and fix them, or better still avoid them in the first place by writing secure code. Now, how do you evaluate whether these tools and methods are being used effectively and are working? You start looking into the vulnerabilities introduced in the last 30 days. In this first phase, right after the training or tools implementation, you should ideally not see the vulnerability types you targeted. For example, you've taught the teams how not to introduce SQL injection or cross-site scripting, which means they apply parameterization, input validation, encoding, all of that. If those findings are still appearing, either your target audience did not understand what was taught or they did not adopt it immediately, which means stricter policy enforcement needs to take place. An example would be reducing or blocking privileges until the employee has fully understood the training provided. I'm well aware this is a bit harsh, and security teams would probably not get buy-in from the engineering team on this particular solution, but it's just an example of the kind of enforcement that may need to be put in place when the metrics show value that is not being reflected back in the work. From then on, the same vulnerabilities move into the 60-day and 90-day buckets unless they're closed, and the new set of vulnerabilities discovered in the last 30 days becomes an indicator of whether the training you provided has been retained: are teams retaining the information given to them or not?

Finally, vulnerabilities by priority. This is different from vulnerabilities by criticality: criticality is based on impact and damage factors, while priority depends on the organization and the industry domain it is in. For example, the health industry prioritizes data protection, while financial institutions prioritize data integrity over other concerns. Based on these factors, some vulnerabilities get priority over others even when both are of the same criticality.
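A sketch of that priority-aware ordering; the category weights are hypothetical (for example, a healthcare organization putting privacy violations first), and a real program would derive them from its own domain.

```python
# Lower rank sorts first. Criticality comes from the tools; the category
# priority is an organizational choice layered on top of it.
CRITICALITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}
CATEGORY_PRIORITY = {"privacy_violation": 0, "sql_injection": 1, "csrf": 2}

findings = [
    {"title": "CSRF in checkout", "severity": "critical", "category": "csrf"},
    {"title": "PHI in logs",      "severity": "critical", "category": "privacy_violation"},
    {"title": "SQLi in search",   "severity": "critical", "category": "sql_injection"},
]

# Sort by criticality first, then by the organization's category priority.
ordered = sorted(
    findings,
    key=lambda f: (CRITICALITY_RANK[f["severity"]],
                   CATEGORY_PRIORITY.get(f["category"], 99)),
)
print([f["title"] for f in ordered])
```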
For example, you have different categories of critical vulnerabilities, and because the volume is high, which one do you give more priority: say, privacy violation over SQL injection or CSRF? Though these are all criticals, you want to drive remediation effort on the baseline issues, so how do you do that? This kind of classification helps you decide when the volume is high within a single severity. And again, it's the same story: what you see in a demo and what you actually get in reality using the same tools is different. If anyone here has worked with Pareto charts or fishbone diagrams, or has heard of Six Sigma, you're probably aware of the 80/20 rule. This very much comes into effect in vulnerability root cause analysis as well. It specifically comes into the picture when, say, teams use a common framework across different functionalities, or an application uses a common function to validate inputs, which is generally the case for a large application consuming and processing multiple sources of input. Teams generally write one validation function and keep calling it wherever they have to consume and process inputs. So while there may be 500 different instances of SQL injection with different sources and sinks, if all sources go through the same validation function, they can all be fixed by fixing that one function. Yes, I do know that SQL injections can be fixed with parameterized queries, but when an application builds dynamic queries, parameterization, even if implemented, can still be flagged, because the query itself is dynamic rather than constant, and the general tools we use are not always able to identify that.
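For reference, the parameterized-query pattern mentioned above looks like this minimal sqlite3 sketch; the table and the payload are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload. Bound as a parameter, it stays a literal
# string value and never becomes part of the query structure.
payload = "' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()
print(rows)      # [] : the payload matched nothing

# A legitimate lookup through the same parameterized statement still works.
rows_ok = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)
).fetchall()
print(rows_ok)   # [('alice',)]
```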
When it comes to component-level issues, it effectively boils down to how the organization handles infrastructure changes. Unless the organization applies restrictions on what components, software, and operating systems can be introduced into the environment, it's basically a free-for-all. And unless there are policy-level restrictions on what can be used and where it can be procured from, the number of infra-level vulnerabilities will be huge and varied; there's no consistency in the vulnerabilities, no common ground to be found, and it becomes difficult to consolidate and work on them. Policies are very effective in driving remediation efforts. Another good idea is to hire a data analyst to look into this volume of information and come up with common patterns and trends that may not be immediately identifiable from charts and graphs. A friend of mine, Nabil Hannan, discussed the value of this on his podcast, Agents of Influence, with a CISO; you can check it out on any of your podcasting tools. They discussed the value of having a data analyst, as they are very useful in presenting critical metrics to CISOs and executives. Note that not all applications will contribute the same set of vulnerabilities: some may have a lot of mediums but no highs or criticals, and vice versa.
Digging into the major contributors is a good idea so you can target the worst offenders first. Another good idea is to check which types of apps contribute more: is it Java, .NET, PHP? Or which kind of services have more vulnerabilities in your environment: AWS, Azure, GCP, Kubernetes? It helps you focus and strategize around what's working in terms of security for your organization and what is not. Digging into technology-specific vulnerabilities helps drive questions about which technology seems to be better implemented in the organization versus others. It can also give you an idea of where the organization is most vulnerable. Say a new vulnerability comes out in the wild on the infrastructure side, for example a Drupal vulnerability, and 80% of your apps use Drupal: your organization is heavily affected, so you want to make sure you have a task force or someone who handles that immediately. Digging through this volume of data is important so you can focus and prioritize your efforts when a new vulnerability appears in the wild, or when an external researcher comes in and reports that a particular application is affected. So digging into this info is important to strategize your next steps
as well.

All right, a lesser-prioritized part of metrics is the visualization itself. Numbers are better absorbed with pictures, charts, and graphs; you cannot just throw numbers out there and expect your audience to immediately absorb them and understand what you're trying to say. Volumes are better explained using bar charts, while volume contributions are explained with pie charts. If you want to show whether a certain metric is going down or up, use trend lines, but don't overlay trend lines on bar charts, as that reduces the impact of the message you're sending, namely that the numbers are going up or down. If you include a trend line with the bars, or use a thickened trend line over something like a histogram, it sends the message: hey, we may be going down and improving on this metric, but we have a long way to go and a lot to work on. As with anything to do with security, metrics need to deliver as much positive news as possible before the bad news; diluting the impact of your positive trends will not help drive your remediation efforts. The next thing that is very important, and can cause metrics to fall flat, is color. The colors that you use to present the same graphs drive
another layer of messaging on top of them. The question probably on everyone's mind is: why do colors matter here? Here's a question for the audience: how many of y'all have ever seen a traffic light? I'm just going to look at the chat quickly and see if anyone answers. All right. So, what does the color red mean to you? Think about it. And now, what does the color green mean to you, in terms of a traffic light? Now, what if I told you they were to be exchanged: green is for stop and red is for go? Can your mind adjust to that? Probably not, and you're probably cursing me right now for putting you in that spot. But historically, different colors signify different meanings or messages. Red refers to stop or danger; orange refers to a warning and introduces anxiety; while green and blue reduce anxiety and give a positive feeling, because they are the colors of the earth, trees, and sky, and are basically associated with lows or informational findings, things that don't affect you that much. So green and blue would technically signify your improving metrics.
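A sketch of encoding that convention in a report generator; the hex values are arbitrary picks, worth checking against a colorblind-safe palette before relying on them.

```python
# One possible severity-to-color convention: red signals danger, orange a
# warning, green and blue reassurance. These exact hues are assumptions.
SEVERITY_COLORS = {
    "critical": "#d62728",  # red: stop, demands immediate attention
    "high":     "#ff7f0e",  # orange: warning
    "medium":   "#ffdd57",  # yellow: caution
    "low":      "#2ca02c",  # green: low anxiety
    "info":     "#1f77b4",  # blue: informational
}

def color_for(severity: str) -> str:
    """Return a chart color, falling back to neutral grey when unknown."""
    return SEVERITY_COLORS.get(severity.lower(), "#7f7f7f")

print(color_for("Critical"))  # #d62728
print(color_for("unrated"))   # #7f7f7f
```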
There's a whole science that deals with colors, and if you want to make sure your metrics are effectively received, I'd suggest reading a little about it. Metrics should deliver their message as-is and should not need extra context behind them. Now, this is going to be controversial; I see a comment out there already. When you send a report, people generally take it at face value: this is what it is. And ideally that should be the case; the metrics should essentially sound out the message you're trying to send, that this is where we are right now, this is how it looks right now. It shouldn't require a lot of detail on how these numbers were generated, or what this line means. I mean, you have the legends and all of that, but it shouldn't require a lot of talking to explain what your current risk is. A small session where the developers, or a sample of your target audience, look at the metrics you generated without you providing much information is a good way to discover possible perceptions and how people's minds process them, so you can then fine-tune accordingly. This is what I was talking about: send the file out and see how people
perceive it, what they're looking at, and what they think when they look at your metrics. Also, providing a huge volume of information in your reports doesn't help, because it's pretty much useless information until someone does some analysis on it. An overview of all of your vulnerabilities is much more helpful; a good view, I would say, is at the category level, or some other common ground across your vulnerabilities. You cannot mix your infra vulnerabilities with your application-level vulnerabilities, because development teams have no real say there. They can go and ask, hey, can you please try to upgrade this particular server, but they don't have the authority to make sure it's upgraded. It only works when a collective group of teams goes to the server team, or when the server team is actually given direction to upgrade to a certain version or fix these issues. So in my opinion, mixing both server-level and app-level vulnerabilities makes it difficult; I don't see the value there. Yes, you would probably see the risk level for a particular application or the organization if you got that collective info, but it depends on how the organization is structured and what kinds of decisions go into infrastructure creation.

So, that was today's session. I know it's a little bit short, but I felt it's better if we have a discussion about what you think about metrics and how you should dig into them. These are starting points; there are various ways you can dig in. I'll open up the floor for questions, if any.
I think that was a really great discussion around colors, and it opened up a great conversation in the chat. I wanted to bring it back to the very first question asked, if you don't mind. Yeah, sure. Erlos asked: what's the difference between criticality and priority? That seems to be a subtle one.

Yep, it is actually pretty subtle. Criticality is something that is defined by your tools. The tools see the location of the vulnerability, they see what your source is and what the sink is, and based on that they have predetermined factors in terms of impact and possible remediation effort; that's all determined by the tool. But, for example, with my current client (I'm a consultant, so I work with clients), the tools sometimes flag log forging as a high vulnerability. For my client, we had a whole discussion with the team, we went through all of this, and we found that the majority of the logs right now go into Splunk. Okay, you may say that's not entirely correct, but since the majority of the logs are going into Splunk, you still have the historical data; it's not just flat files that could be wiped out along with the data. So for now, for that client, log forging is not such a critical issue, and that's why it's not a priority: we reduced its criticality and prioritized other issues over log forging. And that's where priority comes in.

Well, that was very insightful, thank you. There are some statements in the chat; I'm not sure if you want to leave those conversations there. Yeah, I see Gurpreet mentioned that context around metrics is critical: while implementing and deploying tools and dashboards is good to have and much
needed given the amount of data to be processed within tight deadlines, context is critical. Yes, a little bit of context; that's why you have the legends and the captions on the charts. But if your dashboards require a lot of talking as well, where you have to step in and explain that this is a high, or that this number is just for applications, or just for the pentesting results, it's not going to help. Your metrics should relay that information without you providing a lot of text behind it. A little bit of context, yes, because like you said, there are tight deadlines, and CISOs and executives don't have much time to spare to listen to you and absorb the numbers. They will pretty much glance at your presentation, go through it, and figure out the first impact of your metrics, and that's why it's important to put it across with the right numbers in the right way. And I can see one more in the window: okay, colors always matter. Yes, they do; red definitely has different indications, with variations, and it basically sends a very strong message.
So if you want to direct someone's focus, you can do it with just the font color. Let's say right now the priority is to close out all privacy violations, or to close out all SQL injection. You might want to say SQL injection is the biggest contributor, or this app is the biggest contributor; flag it in red. Use color to mark where you want their focus to go, because among all the colors, red is the first one people's eyes land on. "Orange is boring and important," okay, yes, I would agree with that. And there's one more: know your audience before presenting to execs. True. There is also one thing I actually came to know through Twitter, I'm not sure if any of you saw it: there was a discussion where somebody created a live graph of all the broken ice cream machines at McDonald's locations, and somebody suggested to the developer that maybe they should use a different set of colors, because the audience is global now and may include a lot of people who are color blind. So like you said, your target audience is important; you need to know your target audience before setting up the metrics and what works for them. It's also a matter of globalization: some colors are not good indicators everywhere. Red is considered a significant color in Asia, for example, and some colors are not well accepted in certain nations or cultures, so you have to think about all of that. And that's why I said I am not an expert in the science of colors; I just learned from whatever articles I picked up. But this UI aspect is very important, because you need to deliver your message in a pretty short time. Thank you, Vervinder. Any other questions apart from this? I do,
I have a question — part of it you already answered, but it does stem from those with a disability, being color blind. This goes back to when you were mentioning frameworks: how do you incorporate frameworks into your analytics, your presentations, your message — but also, how do you account for, or incorporate, the user experience, the human experience, human-centered design, into all of this?

At the beginning of my career, I would say we were trained in UI design — we were told to focus on UI — so that's why I know a little bit about it. There's a lot of importance now being placed on having UI analysts — they call them UX, or user experience, analysts — coming into all aspects of your work life, not just, say, your mobile devices or your screens. You and I are pretty much used to, say, a developer's environment with a black background, looking into a screen with tiny fonts, but there are people out there who would prefer bigger fonts, where the sizing and the resolution must be higher. And it's not just the aging community — you also have a lot of people who have anxiety disorders, and having too much information — for example, this event screen itself has a lot of information — can induce that. Even for regular people, having too many charts, too many dashboards, or too many metrics on one screen is not going to drive your point about what's happening within your organization. Try to keep as limited information as possible; don't worry that your message is lost if you don't put everything on one slide. It's much easier to send your message across if you go from high level to lower level.
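The high-level-to-detail drill-down described here can be sketched as a small script. All of the data, app names, and field names below are hypothetical — just to illustrate summarizing first, then drilling in:

```python
from collections import Counter

# Hypothetical findings; in practice this would come from your
# vulnerability management tool's export or API.
findings = [
    {"app": "payments", "severity": "critical"},
    {"app": "payments", "severity": "medium"},
    {"app": "storefront", "severity": "high"},
    {"app": "storefront", "severity": "low"},
    {"app": "reports", "severity": "medium"},
    {"app": "reports", "severity": "medium"},
    {"app": "reports", "severity": "low"},
    {"app": "payments", "severity": "low"},
    {"app": "storefront", "severity": "medium"},
    {"app": "payments", "severity": "low"},
]

def overview(findings):
    """Top level: what share of findings are high or critical?"""
    urgent = sum(1 for f in findings if f["severity"] in ("critical", "high"))
    return round(100 * urgent / len(findings))

def drill_down(findings):
    """Next level: which apps contribute the high/critical findings?"""
    return Counter(
        f["app"] for f in findings if f["severity"] in ("critical", "high")
    )

print(f"{overview(findings)}% of findings are high or critical")
print(drill_down(findings).most_common())
```

The overview number is the one-line message for the executive glance; the drill-down is what you show when someone asks what that percentage is made of.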
Start from an overview, simply saying, okay, ten percent of our vulnerabilities are highs and criticals while ninety percent are mediums and lows. That gives you a good stance to say, hey, look, we aren't so bad in terms of vulnerabilities, but we have to do a little bit to get to a better level, to mature. Then you start digging into what that ten percent is all about. There's this concept in the internet world, in the media world — the click rate, they call it: you cannot go beyond three seconds, I believe, or five seconds, otherwise you've lost your customer, you've lost your client. That same idea applies to your metrics, to getting your message across. If you just send an Excel sheet with twenty thousand vulnerabilities, no one is going to look at it. But if you reformat it into better numbers and reduce the volume, like I said, you have someone looking at it — someone who will probably reply to your email saying, hey, okay, I understand there's a serious situation with my application, I have to look into it; there's a serious situation with my group, I need to look into it. So you need to look at these factors when you start presenting your vulnerabilities.

Wow, that was very insightful, thank you. Were there any additional questions from the audience in the chat? Oh — it looks like there's one in the Q&A: do you think that current tools provide the right mix of metrics, with context, represented?

I would say we don't have the right tool yet. I come from the DevSecOps world — I am currently working with some of these tools — and that's why I was interested in listening to Andy Pierce's talk about hacking your vendors, about finding the right tool. He mentioned that you have to make sure your PoC goes well so you know you're picking the right tool, and he talked about red flags — and the red flag here is when the tool doesn't give you the things you want, which I feel should be obvious results. To give you an example, I was asking for simple reports that any organization would ask for: I want numbers for, say, all internet-facing applications — highs, mediums, lows, and informational findings. But I have to apply a custom script — I have to create a custom report that is not provided by the tool. I have to write a script which calls their API and then build a UI over it to create the report. If all of this has to be done by me, I don't think there's any value in that tool if it's not providing the reports I want, which I think are common templates. So to answer your question: right now I don't think there's a right mix of metrics being provided by any of the tools in the market; there's something lacking with each and every tool. Would I say
there's one that's better than the others? Some of them have features that work, some don't, but I don't think we have reached the level of maturity that we have with, say, SAST tools or software composition analysis. We haven't reached that because this is a new field that has only come up in the last two to two and a half years — it's not been that long. People are now starting to look at it because a lot of organizations have matured in terms of application security and are moving on to infrastructure security — people are moving to Kubernetes, AWS environments, and so on, and starting to scan all of that as well. So reporting is the next step.
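The kind of glue code described above — calling a tool's API yourself to build a common report — might look something like this. The endpoint path, authentication scheme, field names, and severity labels are all hypothetical, since every tool's API differs:

```python
import json
import urllib.request
from collections import Counter

def fetch_findings(base_url, token):
    """Pull raw findings from a (hypothetical) vulnerability tool API."""
    req = urllib.request.Request(
        f"{base_url}/api/findings",  # endpoint name is made up
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def severity_report(findings, internet_facing_only=True):
    """Count findings per severity, optionally only for internet-facing apps."""
    if internet_facing_only:
        findings = [f for f in findings if f.get("internet_facing")]
    counts = Counter(f["severity"] for f in findings)
    # Fixed order so the report always reads from most to least severe.
    return {s: counts.get(s, 0) for s in ("critical", "high", "medium", "low", "info")}
```

In practice you would chain them — `severity_report(fetch_findings(url, token))` — and then render the result. The point of the answer above is that this should be a built-in report template, not something each customer scripts themselves.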
Okay, it looks like the questions may have slowed down. These were really great conversations and talking points, so please join us in the BSides NoVA Slack workspace to continue any of these discussions — we're there and available for the remainder of the conference. And if you're local to the area, there is the happy hour in Arlington, Virginia at Punch Bowl Social; please bring your BSides NoVA multi-pass. With that, I will close the room. Enjoy the rest of the conference, everybody. Thank you, Evan, thanks for joining. Thank you.