
Vulnerability Management Systems Flawed - Leaving your Enterprise at High Risk

BSides DC · 2016 · 41:36 · 279 views · Published 2016-10 · Watch on YouTube ↗
Category: Technical
Style: Talk
About this talk
Vulnerability management (VM) solutions and products that are central to every information security program contain a serious "hidden" flaw. This software flaw is interleaved within pattern-matching-like algorithms located deep within the foundational core of the most widely used automated VM solutions on the market. As a direct consequence of this flaw, even though these products report a certain level of network security risk, the metric upon which their calculations are based is skewed, resulting in an unintentional gap between the products' intended information risk measurement and the erroneous measurement actually reported. This session covers the technical details of the aforementioned hidden flaw, its consequences, and what you can do to limit your exposure.

Gordon MacKay (CTO at Digital Defense Inc. (DDI)): Gordon MacKay, CISSP, serves as CTO for Digital Defense, Inc. He applies mathematical modeling and engineering principles in investigating solutions to many of the challenges within the information security space. His solution to matching network-discovered hosts within independent vulnerability assessments across time resulted in patent-pending status for the company's scanning technology. MacKay has presented at numerous security-related conferences, including RSA 2013, BSides Austin 2016, BSides SATX 2016, BSides Dallas 2015, ISC2 Alamo Chapter, ISSA Houston, ISACA San Antonio, and many others, and has been featured by top media outlets such as CIO Review, FOX Business, Softpedia, IT World Canada, and others. He holds a Bachelor's in Computer Engineering from McGill University. He is a Distinguished Ponemon Institute Fellow.

Thanks to our video sponsors: Antietam Technologies http://antietamtechnologies.com · ClearedJobs.Net http://www.clearedjobs.net · CyberSecJobs.Com http://www.cybersecjobs.com
Transcript [en]

The BSides DC 2016 videos are brought to you by ClearedJobs.Net and CyberSecJobs.Com, tools for your next career move, and Antietam Technologies, focusing on advanced cyber detection, analysis and mitigation. Hi everyone, my name is Gordon MacKay, and today we're going to go over my talk, "Vulnerability Management Systems Flawed: Leaving Your Enterprise at High Risk." But first let me share a little bit about myself. I work in San Antonio for Digital Defense. Just to break the ice: I was born in Montreal, Canada (there's a lot of ice there), grew up there, and graduated from McGill University with a computer engineering degree. I started working for a telecom

company called Northern Telecom, Bell Northern Research, which later transformed into Nortel and now no longer exists. I spent five years there, and because I was in telecom I thought, where's all the action? All of the telecom action just happened to be in Dallas, Texas, so I moved my small family to Dallas in late 1995, just after Christmas, and started working for a company called Digital Switching Corporation, which was bought by Alcatel. Anyway, I did a lot of software over the years, a lot of telecom signaling type software. Telecom was going downhill around the year 2000 or so, and by 2002 I was just about to be laid off. I was starting to look around, and I knew

very little about internet security, but I knew a lot about software. So I met these folks at Digital Defense in San Antonio. They brought me down and I thought, hey, I'm just going to spend a year there and maybe go back to Dallas, because I really liked it. But I'm still there; it's been 14 years, and I've really loved it. They brought me on board to help them re-architect their, at the time, many many moons ago, immature vulnerability management system. I took a tiger team of four people, including myself, and we re-architected it. It's a SaaS-based model; at the time it wasn't called SaaS, but terms change. So now we're in

our sixth iteration. We do vulnerability management, I've learned a lot about it across the years, and that's what this talk is based on. I've done a lot of studies on what I'm about to share, and there is a limitation or a flaw that I've seen in the industry, or rather, it's a challenge that all vendors have to overcome, and that's what this talk is about. Hopefully you'll learn something. I've given this talk before, many times, and most of the times when I've given it, people raise their eyebrows: wow, why didn't I think of that? That makes sense; I didn't realize that was actually a challenge. Pretty cool, good stuff.

The way that I give this talk nowadays is I don't actually reveal the flaw up front. About three quarters of the way through I'll say, aha, here's what the problem is, and then we'll talk about consequences, etc. So I invite you: if you can guess the problem before I get to the end, just raise your hand, I'll call upon you, and you can shout it out. If you get the answer correct I don't have any prizes, but you get recognition, and that's pretty cool, right? Recognition at BSides DC, a pretty big conference. It's my first time at BSides DC, though not at various other BSides. But anyway, I digress.

Let's kick it off by sharing a use case. There are many different use cases that highlight this challenge, but this specific risk use case is a hypothetical situation, so don't leave this room and say, well, Gordon just revealed a zero-day vulnerability, because that's not the case. Let's assume that a recently announced zero-day just came out, say today or yesterday, for Apache, where the earlier versions, 2.4.0 through 2.4.22, are vulnerable to some really serious flaw, but it's not impactful to the most recent release. Okay, what would you do as a security

professional to understand what your risk is across your organization? Certainly what you'd want to do is find out: where am I vulnerable in my enterprise? What machines do I have that are running this vulnerable version? That's step one. So you'd probably run a more recent assessment. Even though the vendors out there may not have come out with a detection that detects this yet (they may have, but they may not), you can probably still look at where Apache is and where the vulnerable versions are, because many vulnerability scanners and systems give a lot of insight into what

application versions are present, and so on. In this case this is something that you could do, but you'd still probably want to run a more recent assessment across your enterprise to find out where the vulnerabilities live. That's actually not enough, though. Or, said differently, we've evolved across time to handle more complex use cases, and this is where I share step two: what you'd like to do, in addition to step one, is look into the past and say, well, it's possible that I was actually running Apache on certain machines and I de-installed it, or maybe I did have a vulnerable version but I upgraded for whatever reason. I mean, the most recent
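Step one above boils down to comparing the versions your scans report against the affected range. Here is a minimal sketch; the version range is the talk's hypothetical, and the host data and function names are my own illustration:

```python
# Sketch: deciding whether a reported Apache version falls inside a
# hypothetical vulnerable range (2.4.0 through 2.4.22, per the talk's example).
def parse_version(v):
    """Turn a version string like '2.4.18' into a comparable tuple (2, 4, 18)."""
    return tuple(int(part) for part in v.split("."))

VULN_LOW, VULN_HIGH = parse_version("2.4.0"), parse_version("2.4.22")

def is_vulnerable(banner_version):
    """True if the reported version sits inside the vulnerable range."""
    return VULN_LOW <= parse_version(banner_version) <= VULN_HIGH

# Filter a scan result (host -> reported Apache version) down to at-risk hosts.
scan_results = {"10.0.0.5": "2.4.18", "10.0.0.9": "2.4.41", "10.0.0.12": "2.4.2"}
at_risk = [host for host, ver in scan_results.items() if is_vulnerable(ver)]
print(at_risk)
```

Tuple comparison handles the numeric ordering (so "2.4.9" correctly sorts below "2.4.22", unlike a string compare).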

version of Apache has been out for a while. So you'd do that. I'm showing a diagram here which puts this into perspective, and I like these timing diagrams because a lot of this talk brings in the dimension of time; we'll talk a little more about that in a second. The right side of the diagram shows instances of where we're vulnerable now. It's sort of a little tacky network diagram, but the red dots illustrate where Apache is installed at that vulnerable version. So at least, wow, we know we have three instances; we can prioritize these, we can figure out

whether we want to remediate them right away or not. But if you look at the left side of the diagram, there are a lot more red dots. What I'm showing here is the possibility that at some point in the past you may have had Apache installed at the vulnerable version, whereas today you don't, because, for example, you might have de-installed Apache when you didn't need it anymore, or maybe you've upgraded. So from an attacker's perspective, or from a security professional's perspective, you may say, well, although we know about this vulnerability today, it's very possible that some hackers have

known about this problem in the past, and because of that they may have actually compromised some of our systems. So you could take this intelligence, this vulnerability intelligence, as I say, which lives only in vulnerability land, and feed it into your incident response program to investigate, to look for other indicators that maybe you've already been compromised, because your incident response program might have missed these instances in the past. So this is a pretty cool use case. I'm showing a diagram here which is essentially the same, and all I'm doing is relating the

endpoints to give you a better understanding of the past and the present. It's very important to be able to relate what's been seen in the past to what's been seen in the present in order to solve this use case. Here's another timing diagram that I like, and I'm going to be using this kind of diagram throughout my talk, so we'll just talk a little bit about it. The bottom part of the diagram shows three assets, three computers. These are the computers that you can touch; they're right there. As a human being I can stand here all day and I can feel this

computer; assume that's asset A. Whereas vulnerability management systems that are assessing these computers across time are not just doing it in one shot. Maybe in the very olden days that's what a lot of organizations did: they'd run a vulnerability assessment once a year, and they may or may not have fixed the problems. But nowadays we're running scans daily, weekly, sometimes even continuously. So I'm showing in the top part of the diagram two different points in time, and this is how a vulnerability scanner or vulnerability management system viewed these assets across time. Does that make sense? So we've got a little break here: if someone already knows what this problem is, you can raise your hand.

Great, over here. Okay.

Pretty good, pretty good. I'm not sure if it's exactly it, but it might be, so we'll just hold on to that one. We'll hold on to it and continue. Time has changed; change is time. I couldn't really find something more Zen-ish than this; I've gotten into Buddhism over the last couple of years. Anyway, this is sort of our little break point. Let's get into different scanning technologies to give us a few more clues, a little more detail. Vulnerability management vendors scan in different ways; they use different technologies, and I'm not showing all of the technologies here. There's one that one of the

sponsors here has which is pretty cool, called passive scanning; that's not up here. Nevertheless, of all these technologies that I'm getting into, some have this issue and some do not. So, agent-based: let's talk about agent-based. In agent-based scanning the vendor has come up with agents, which are like programs that run on the endpoints. There may be a centralized scanning solution that interacts with these agents, signaling them to start a scan, but the actual scan itself is happening right on the endpoint, right on the computer, so it's pretty accurate. That's agent-based. Credential-based is a technology where the scanning engine is remote to the

device it's scanning, but you set up a set of credentials on the endpoints in some form or fashion, and also in the vulnerability management system, and then the engine will authenticate to the endpoint. Once authenticated, it can get registry keys, files, things like that. If it has write privileges, it could even drop a program right on the endpoint. So in that regard it's very similar to agent-based, but nevertheless it's a little different, because you have to set up credentials, etc. It's also quite accurate, but it does have some IT overhead; we'll talk about that in a second. And then

thirdly, we have remote unauthenticated, or remote network unauthenticated, scanning. This is where, once again, the scanning engine is remote to the endpoints it's scanning; it's not on those hosts, and in this case it doesn't use any credentials. You go into the vulnerability management system, you set up a set of ranges or maybe domains, and it will use internet messages. It'll message these endpoints, it'll send ping sweeps, etc.: is there an endpoint there or not? If there isn't, it moves on; if there is, it continues on with open ports, tries to find what OS it is, looks at what applications are running,
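The remote unauthenticated flow just described (no credentials, just network messages, then open-port enumeration) can be sketched with plain TCP connect attempts. This is a toy illustration, not a real scanner; real products layer ping sweeps, OS fingerprinting, and service detection on top, and the target and port list here are placeholders:

```python
# Minimal sketch of remote unauthenticated discovery: probe a few TCP ports
# with plain connect() attempts from a host that is remote to the endpoint.
import socket

def probe_open_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable: move on to the next port
    return open_ports

# Example: check a few common service ports on an illustrative target.
print(probe_open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Because it only sees what the network exposes, this view is exactly the attacker's view, which is both its strength and, as the talk argues, the root of the matching problem.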

and obviously looks at what vulnerabilities are present. So that's remote unauthenticated. There are advantages and disadvantages to each of these scanning technologies. I'm going to start in reverse order this time and talk about the last one, remote unauthenticated. Just to take a step back: all of the vendors, including Digital Defense, the company I work for, use different techniques and technologies. I don't think there's anyone that uses just agent-based; most of us use at least two of these technologies. So we'll use remote unauthenticated, or as a client you're going to use remote unauthenticated, to cast the wide net, because there's low overhead to do so. You just plop a bunch of

scanners down, and in the case of external vulnerability assessments, if you're using a cloud-based service, you don't even need to do anything, because the scanners are just scanning you from outside. Anyway, the key message is very low overhead, and it's pretty good; it detects quite a few things. This is also how the hacker would actually see you. I mean, if the hacker is already on your endpoint, a compromise likely already happened; there's an escalation of privilege required. But I digress. So that's remote unauthenticated. As a client, as an organization, you'll be using that technology by and large to cast a wide net across time at regular intervals, weekly, daily, whatever. Then the others, agent-based and credential-

based: first of all, if you're able to do these across the board, good for you, but agent-based and credential-based have high IT overhead, and they're sporadic in that you don't have agents for every type of device. You don't have agents for routers, it's hard to credential these things, and you probably don't want to keep your credentials set up forever, etc. So there are some disadvantages, but it's certainly better to use them where you can, because they're deeper: because you're on the endpoints, you can get a lot more information on the vulnerabilities. Adobe vulnerabilities, for example, you can't see remotely, because they're

not internet-facing, so you'll have to use credential-based or agent-based scanning for those types of things. So there's a mixture of these different technologies that one would use, and the key message here is that, by and large, remote unauthenticated (or remote network unauthenticated) scanning is used; you can't get away from it, and you're going to be using it to some extent. So, takeaways: most vulnerability management vendors use, or at least provide you, the client, the enterprise, with the technique of remote unauthenticated scanning, but of course they could also give you credential-based or agent-based scanning, and you can use these depending on your use case. If you know where your high-value assets are, you would use

credential-based; you'd spend a little bit more money to use credential-based or agent-based. Organizations employ remote scanning on a recurring basis across time to cast a wide net, because it's easy, low IT overhead, and once again they'll use credential-based or agent-based for high-risk assets. So we're getting closer. One question you might ask, especially when you're looking at that timing diagram I showed earlier, where you saw the different endpoints being scanned at different points in time and the real-world assets at the bottom that you can touch: how does the system know that the endpoints at one point in time

are related to the correct counterparts at a different point in time? That would be a question you'd wonder about, and that's what this talk is about, essentially. So how do the vendors do this? They use various, what I call, network-detectable characteristics as their match key. Certainly IP address is used quite heavily, but vendors also use various hostnames; MAC address is a good one, although, as evolution has occurred, not necessarily for virtualized environments; host types; and lots of other characteristics. Okay, so this is just to give you a little idea. In my research I looked out there and I tried to understand, well,

what do specific vendors do, and how accurate is it? I found this algorithm, which is actually pretty cool because they describe it right on their website, and they call it host tracking, or rather they call the problem host tracking. I call this algorithm "single host tracking key," where the administrator is able to go in and specify which characteristic to use. If you think about this algorithm, you can think about your fingerprint: in this example, you as the administrator of the vulnerability management system can come in and indicate, I want to use characteristic A, or characteristic B, or

characteristic C. It's not all three at the same time; you choose one, and it depends on what type of host you have. If you have laptops or desktops that are in DHCP ranges, you're not going to choose IP address. But incidentally, if you don't even know about this problem, and you just want to install your scanners quickly and start scanning your environment, by default it's going to use IP address, and that's the only thing it's going to use to track. So after a while, when you start discovering some issues, which we'll talk about later, you'll go into the system and you'll say, oh, okay, this is in a DHCP
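The single-key tracking just described can be sketched in a few lines. The field names, host names, and addresses below are illustrative, not any vendor's actual schema; the point is that correlating on IP alone mislabels a DHCP-moved host as new, while a more stable key tracks it correctly:

```python
# Sketch of "single host tracking key" matching: two scans are correlated on
# exactly one administrator-chosen characteristic.
def correlate(previous_scan, current_scan, key):
    """Match hosts between two scans on a single chosen characteristic."""
    prev_by_key = {h[key]: h for h in previous_scan}
    matched, new_hosts = [], []
    for host in current_scan:
        if host[key] in prev_by_key:
            matched.append((prev_by_key[host[key]]["name"], host["name"]))
        else:
            new_hosts.append(host["name"])  # no key match -> treated as new
    return matched, new_hosts

week1 = [{"name": "A", "ip": "10.0.0.5", "netbios": "DB01"}]
week2 = [{"name": "A", "ip": "10.0.0.7", "netbios": "DB01"}]  # DHCP moved it
print(correlate(week1, week2, key="ip"))       # host A wrongly looks "new"
print(correlate(week1, week2, key="netbios"))  # host A correctly matched
```

Swapping the `key` argument is the manual knob the talk describes: the administrator has to know which characteristic is stable for each range of hosts.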

range, I think I'll use NetBIOS hostname, as an example. So it's one of these three that you could use, and this is an example from one of the largest vendors out there; incidentally, they're not a sponsor here, I'm just sharing it. So I was curious, because when I started at Digital Defense, in the first iteration, it was a very young system, put together, as Spock would say (I'm a Star Trek fan), with stone knives and bearskins. I remember I was in the lab because we had some clients who had these bugs, as we call them, and I couldn't actually solve

it, because our database relations were the way they were, so I was trying to move data around to get things to work. The key message is, I hit this issue when I was very, very young at Digital Defense. Not to say that I'm young, but young meaning I had just started there. And so I wondered, how could I solve this issue, how could I make it better? I came up with an algorithm that is still being used and has been evolved, but of course, with anything complex in the world of software, if there are some software developers here, you understand you're always going to

have some bugs; it doesn't matter how good you are, and it's not just you, it's your whole team. So there's an investment. As you get more mature in your career you understand: yeah, there's software, and then there's economics. The more complex you make a feature, the more you have to think about the economics involved. And so I wondered, why do we waste our time with this? Because we're having to babysit this algorithm, evolve it, etc. So I said, you know, let me do a study, and I went out there and I tried to understand: how often do these characteristics that vendors use to track the end-

points, or the hosts, across time, how often do these characteristics change? Of course, if you're talking about laptops, you're going to realize, or at least intuitively know, that your IP addresses are not going to stay static forever, because you're in a DHCP range; that makes sense. So what I did is I looked, and because we're a cloud-based SaaS, we have a lot of data (we can talk about how I did this analysis if you're interested), and I categorized different devices: server-type devices, client-type devices, laptops, desktops, printers, routers, etc., different types of devices. And I looked across

time, and my goal was to understand how often these characteristics change across time. I took a time slice of three months (the study actually covered over a year, but I took a time slice of three months) and I said, okay, if a characteristic changes at least once across that three-month time period, that's a flag, that's a count. I'm listing two categories of devices here: servers and client-type devices. Servers are database servers, web servers, those types of servers that actually serve, and client-type devices are things like laptops, where you have your Google Chrome, you're doing work, writing Word docs, etc. So for server-type

characteristics: you would think that if you have a server, its characteristics are not going to change much. You plop it down, you give it an IP address, a DNS name, maybe a domain name, etc., and you think it's not going to change. But actually they change more frequently than we would have thought. My analysis, mine and my team's, shows that, for example, even just IP address changes at four percent across a three-month time period. So if you think of a large organization that has, for example, 100,000 servers, and you're doing vulnerability management, vulnerability scanning, as time goes on,

or even if you're not doing vulnerability scanning, the point is that those characteristics being used as match keys change; they're moving. Four percent of 100,000 is 4,000 devices, which may not sound like much, but it compounds over time. It's like the compounding value of money, like interest: after another three months you get another four percent, and even the original four percent changes again, at a four percent rate. So that's IP address, and the other characteristics are listed here. I'm only listing three characteristics, but the study actually covers many more, and if you're interested, there are links at the bottom. I know it's probably hard to see, but
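The compounding works like compound interest: if a fixed fraction changes each quarter, the share of assets that has changed at least once grows geometrically, not linearly. A quick sketch using the talk's 4% quarterly rate and 100,000-server example:

```python
# Back-of-the-envelope sketch of how a 4% quarterly change rate compounds,
# per the talk's "interest" analogy. Numbers come from the talk's example.
def assets_ever_drifted(total, rate_per_quarter, quarters):
    """Assets expected to have changed the characteristic at least once."""
    unchanged_fraction = (1 - rate_per_quarter) ** quarters
    return round(total * (1 - unchanged_fraction))

servers = 100_000
for quarters in (1, 4, 8):  # 3 months, 1 year, 2 years
    print(quarters, assets_ever_drifted(servers, 0.04, quarters))
```

After a single quarter that's 4,000 servers, but after a year roughly 15% of the fleet has drifted at least once, and after two years more than a quarter of it, which is why a one-time snapshot comparison between vendors never surfaces this problem.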

hopefully the slides are available afterwards and you can get to it; it's on my company's website. So, things change. The takeaway here is: if you look at a given asset, a given endpoint, a given host (I'm using these terms interchangeably), as time progresses, those things we see from a remote perspective, when we're back here as a scanner sending messages and getting responses back, we're saying, okay, this is what we're seeing for this device right now. But if I scan it next week, will I see the same? That's the question. The answer is: to some extent yes, but to a large extent no.

So this is the point in time where (and I've kind of been scanning to see if there are any hands up) I reveal this limitation, this flaw. Let's bring it together, and then we'll talk about consequences. First, the most widely used scanning technology is remote unauthenticated; we talked about that. Second, most vendors will track hosts, track point-in-time scanned endpoints, using a limited set of the characteristics I mentioned, for example IP address, DNS hostname, NetBIOS hostname. In the example I gave you of the large vendor, you can actually go in and specify which one, so you have some control, but

even then it's still not enough, and with other vendors you don't have control, which is good because it's hands-off, but if you have a problem, how do you fix it? Third point: all remotely discoverable characteristics are subject to change, and the study we've done shows they change quite frequently, more than we would have thought. So what is this flaw, what is this problem? It's that vulnerability management systems track point-in-time scanned endpoints and they make mistakes. We'll go through those mistakes along with the consequences. There are two different types of consequences, asset duplication and asset mismatch, and I have my opinion as to which one is more severe, but let's

look at these, okay?

[Audience member offers an answer; mostly inaudible.]

Very true, yes, very true. Okay, great. So, consequences of this flaw: asset mismatch. Here's that diagram again, where I'm showing assets A, B and C. Imagine, here I am in week one, I do my scan, I see three endpoints: the red, the yellow and the black endpoint. I don't know if you can see this, but essentially their different characteristics are up there: IP address, DNS hostname, NetBIOS hostname, MAC address, etc., for the different endpoints or hosts. At some point between scan week one and scan week two, there's been some IT change; someone in IT did something. And keep in mind,

when you're in a large enterprise, often what happens is that there's a centralized security team that manages vulnerability management and other security tools, and then you have maybe even multiple IT teams spread throughout the organization that you assign remediation out to, etc. And then, thirdly, you have these IT people who are making sure things stay up, moving things around, moving printers around, etc. They're doing IT work, and they don't necessarily even know there's a vulnerability management program in place. Their goal in life is to keep things running, and these teams don't necessarily communicate, so it's not like someone's going to say, oh, I'd

better not change this. They're thinking about users, certainly, but they're not necessarily thinking about the vulnerability management team who's trying to measure risk and drive out risk. So anyway, this is a situation where, in week two, we see that the yellow node and the red node experienced an IP address change, and as a result the vulnerability management system (if I had a pointer I'd show this) actually mismatches asset A. What happens is the red asset was originally the red asset, but now the vulnerability management system thinks it's actually the yellow one. And what happens in this case, if you imagine a

situation, to make it simple, imagine the red asset and the yellow asset both have vulnerabilities, but their sets are completely disjoint; they have no vulnerabilities in common. Let's just make that assumption. The vulnerability management system, after this mismatch has occurred, will declare all of the vulnerabilities that were present in the week-one scan as having been fixed, because it doesn't see them anymore, due to the mismatch. And so you're like, yay, I solved all those vulnerabilities, but wait a second, no, I didn't really. And of course it'll also declare new ones, those of the yellow asset. So that's
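The false "fixed" report can be reproduced in a few lines. In this sketch (hosts and vulnerability IDs are invented for illustration) two hosts swap IPs between scans and the system diffs findings keyed by IP; nothing was remediated, yet everything on the red host's old IP is reported fixed:

```python
# Sketch of the mismatch consequence: scans are keyed by IP, and two hosts
# with disjoint vulnerability sets swap addresses between scans.
def diff_vulns(prev, curr):
    """Per IP, report what looks fixed and what looks new between two scans."""
    report = {}
    for ip in prev:
        before, after = prev[ip], curr.get(ip, set())
        report[ip] = {"fixed": before - after, "new": after - before}
    return report

# Week 1: red host at .5, yellow host at .9, completely disjoint findings.
week1 = {"10.0.0.5": {"CVE-R1", "CVE-R2"}, "10.0.0.9": {"CVE-Y1"}}
# Week 2: an IT change swapped the addresses. Nothing was remediated.
week2 = {"10.0.0.5": {"CVE-Y1"}, "10.0.0.9": {"CVE-R1", "CVE-R2"}}
print(diff_vulns(week1, week2))
# Every red finding looks "fixed" and every yellow finding looks "new":
# that's the mismatch, not remediation.
```

The remediation metrics look great, which is exactly the false sense of security the talk warns about.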

a mismatch, and it's pretty serious. A lot of the time it's hard to tell, because you have so many hosts in your organization and you're running this program on a continuous basis, so it's difficult to see. Often you'll see something and it's like, I don't know what's wrong, there's something wrong here, and this is what's happening. Asset duplication, other than the one the gentleman over there mentioned, is the second type of flaw. In this scenario, imagine time has gone by: now we're at week 23 and we've solved the issue of the red host, because we said, hey, let's use a different match key, let's use DNS hostname, because that thing doesn't

seem to change as much, and we have a naming server, so let's do that for this host. That's great. We see that in week 24, when the scan ran, the red endpoint is correctly matched, and that's good. But the problem is the yellow endpoint, which just so happened to take the IP the red one originally had, and which therefore has a different IP than it had in week 23, was still using IP address as its match key. As a result the system says, well, I've never seen this endpoint before, it must be new; let me add it to my asset list. So now, instead of having three assets, I have

four, which is kind of cool from the vendor's perspective if they're charging by IP address, because now you're actually spending more money, but in fact you still only have three endpoints; you really don't have four. So this is an issue; this could happen. In the olden days, Digital Defense actually had these issues, and we evolved past them, but we had a very large client with a certain set of assets, and the asset list was growing and growing and growing, and we were monitoring it, like, what's going on, something's wrong. That's been fixed many, many years ago, over ten years ago,
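The duplication scenario can be sketched too: red is now tracked by DNS name, yellow is still tracked by IP, and when yellow's IP changes (taking red's old address), no record matches it, so the system appends a "new" asset. All hosts and addresses below are invented for illustration:

```python
# Sketch of asset duplication under per-host match keys: an unmatched host is
# appended to the inventory, so 3 real machines become 4 billed entries.
def update_inventory(inventory, scan, key_for):
    """Match each scanned host on its configured key; append unmatched hosts."""
    for host in scan:
        key = key_for.get(host["dns"], "ip")  # default match key is IP address
        match = next((a for a in inventory if a[key] == host[key]), None)
        if match:
            match.update(host)            # refresh the record with the new view
        else:
            inventory.append(dict(host))  # "never seen before" -> duplicate risk
    return inventory

inventory = [
    {"dns": "red.corp", "ip": "10.0.0.5"},
    {"dns": "yellow.corp", "ip": "10.0.0.9"},
    {"dns": "black.corp", "ip": "10.0.0.12"},
]
# Week 24: red moved to a new IP but is now tracked by DNS name, so it matches
# and its record updates; yellow took red's old IP but is still tracked by IP.
scan = [
    {"dns": "red.corp", "ip": "10.0.0.7"},
    {"dns": "yellow.corp", "ip": "10.0.0.5"},
    {"dns": "black.corp", "ip": "10.0.0.12"},
]
result = update_inventory(inventory, scan, key_for={"red.corp": "dns"})
print(len(result))  # 4 entries for only 3 real machines
```

Note the ordering subtlety: once red's record is refreshed to its new IP, nothing in the inventory holds yellow's scanned IP anymore, so the IP lookup fails and a duplicate yellow entry is created.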

but the key message is: all vendors are susceptible to this challenge. "These are not the hosts you're looking for." I'm a Star Trek fan, but I couldn't find anything on Star Trek; I like Star Wars too. Anyway, how are we doing on time? Pretty okay. Impacts: you can imagine some of the impacts now, based on what we talked about. I call these DevOps and SecOps impacts, and I have a lot of stories on this. There's something I call chasing ghosts. We've had multiple prospects come to us and tell us stories, which is good because we're like, oh, we know what the

problem is, and we share with them how our technology solves it. Anyway, the story is, as I kind of alluded to before, in large organizations you have these humongous teams: a centralized security team and then IT teams out there. This prospect, who's now a client, came to us and told us this story where they were doing a scan, and because of the mismatches, they were taking the vulnerabilities and assigning them out to the wrong IT team, thinking that team owned those assets, when in fact it was just due to a mismatch. And so

what happened was, the people receiving these problems, these vulnerabilities to solve, were like, okay, that's great, and they would spend time investigating, learning about the vulnerability, which is a good thing, so I guess it's not entirely wasted if they ever had that vulnerability to really fix. But then, when it came time to actually find the device, it took them a long time to figure out: this isn't even our device, you misassigned it. That's what I call chasing ghosts. There's a lot of money spent chasing ghosts and wasting time. Mismatched scanned endpoints, as I

mentioned before, where it looks like all my vulnerabilities have been solved, actually give you a false sense of security. So this is pretty serious. I mean, when you look at vulnerability management systems nowadays, we're all pretty good at scanning. A lot of clients or prospects, when they're vetting out different vendors, will run a scan on a specific test bed, and they'll do that for all the different vendors, but they're really just running one scan. They're looking at, well, how accurate is it? Okay, great, and that's good, and maybe in the olden days

you would have seen some vendors have a lot more coverage than others, but one of the things that I know these prospects don't do is actually take into account this time drift, right? Run scans across time. What happens if your endpoints actually change these characteristics? So that's sort of my spiel on that: vulnerabilities are believed cleared and fixed when in fact they're not. That's a problem. And one of the things that I point out in some other talks I give is that as a security general, like a CISO or a CIO, etc., you're looking at this vulnerability management system as a gauge for your

risk, right? And you're using this information to make important decisions. What if this gauge is off? That's what I say, and I've seen it. So obviously the ideal scan-endpoint correlation solution is to get everything right. It doesn't always happen, but there are some solutions out there, including Digital Defense's, where we use everything we can see. It's kind of like, instead of just using one feature of your fingerprint, why not look at everything? And so that's what we do. We look at all of those characteristics I mentioned: we look at what ports are open, we look at what applications are running, we look all the way down to the

vulnerabilities. And granted, vulnerabilities are going to change a lot; they're going to be solved. But we still use them, in a weighted fashion, if that makes sense. We don't look at every characteristic and say, oh, that's so important; different characteristics have different weights. And when I say weights, they're weights as it relates to change, right? And as time evolves, these weights may change too, because just because it is the way it is today doesn't mean that in the future something's not going to change. So for example, MAC addresses in virtualized environments: it's my understanding that when machines go up and down, they actually sometimes get

allocated different MAC addresses, so you can't necessarily use MAC address as a very hardcore characteristic for tracking. What can you do? Well, one thing is be aware of it, and hopefully now you are. I'm not sure if this is new to some people; is it something that makes sense, is it something you were aware of? If not, hopefully you're aware of it now. And then secondly, and there's a third thing, but secondly: when you're benchmarking vendors, when you're looking at solutions, be aware of this problem, and ask them, how do you solve this issue? If they say something like, oh, don't worry, you'll just use credential-based

scanning across the board, well, that's not easy to do all the time, right? The other thing you can do is do your own correlation. It's my understanding that there are some other security products that pull in vulnerability scans from different vendors and do correlation, so they somehow solve this problem, and my assumption is that they're doing it in a very similar fashion to what we're doing. So, to wrap up, and then we can get to the questions: historical security information is key, not just the now. As I was mentioning before, when you look at time, you look at impermanence. If there's anything that's absolute, change is

absolute, and with change comes another dimension of complexity. Network endpoints, or hosts, change characteristics, but why do they change? We actually interviewed people when we were doing this study, some of our clients, to validate and make sure, hey, what's going on here? And they would tell us things like, well, we're doing name changes, we have a new naming convention, so we're changing all of our domain names, etc. So that's one reason, and there are a lot of different reasons, but the key message is they're not doing these things to cause problems; in fact, they're unaware that they're causing problems. Most vulnerability management solutions are good with one-point-in-time

assessments. As I mentioned before, you can benchmark them with just one scan, and you'll probably see some differences and some strengths, etc., but by and large, nowadays, I'd say that's a pretty mature market. And then finally, when you're selecting a vendor, keep this in mind: ideally, move stuff around and see how the solution responds. So I think we're really good on time, a little early, which is okay. Questions?
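As a rough illustration of the weighted-characteristic matching described earlier, here is a minimal sketch. The characteristic names, weights, and scoring function are hypothetical assumptions for illustration only, not DDI's patented algorithm:

```python
# Hypothetical sketch of weighted host matching across two scans.
# Characteristic names and weights are illustrative assumptions.
# Weights reflect how stable each characteristic is over time:
# MAC addresses can be reassigned in virtualized environments, and
# vulnerabilities are expected to change as they get remediated,
# so neither is treated as a hard identifier on its own.
WEIGHTS = {
    "mac": 0.25,
    "hostname": 0.20,
    "open_ports": 0.20,
    "applications": 0.20,
    "vulnerabilities": 0.15,
}

def similarity(old_host: dict, new_host: dict) -> float:
    """Return a 0..1 score that two scan results describe the same endpoint."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        a, b = old_host.get(attr), new_host.get(attr)
        if a is None or b is None:
            continue  # characteristic not observed in one of the scans
        if isinstance(a, (set, frozenset)):
            # Jaccard overlap for multi-valued characteristics
            union = a | b
            score += weight * (len(a & b) / len(union) if union else 0.0)
        elif a == b:
            score += weight
    return score

old = {"mac": "aa:bb:cc:dd:ee:01", "hostname": "web01",
       "open_ports": {22, 80, 443}, "vulnerabilities": {"CVE-2016-0800"}}
new = {"mac": "aa:bb:cc:dd:ee:99",  # new MAC after a VM restart
       "hostname": "web01",
       "open_ports": {22, 80, 443}, "vulnerabilities": set()}

print(round(similarity(old, new), 2))  # → 0.4 (hostname and ports still match)
```

The point of the weighting is exactly what the talk argues: a changed MAC or a cleared vulnerability lowers the score a little instead of breaking the match outright, so the endpoint can still be tracked across scans.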

Absolutely, and in fact that's why I shared the difference between remote unauthenticated versus the other technologies, like credentialed or agent-based. You can do other techniques there: for example, with agent-based, you're on the computer, so you can give it a specific ID that's unique. And credential-based, same thing: you could actually deposit something like an ID there that's unique, and the next time you come back and do another credentialed scan, it's like, well, did I deposit an ID there before? Oh, I did. Okay, so now I can correlate it. So those technologies are not susceptible to this problem, if that

makes sense.
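The "deposit an ID" idea in that answer can be sketched as follows. The marker path and helper names are hypothetical; a real credentialed or agent-based scanner would store its ID through its authenticated session rather than a local temp file:

```python
# Hypothetical sketch of depositing a unique ID on a host so later
# credentialed/agent scans can correlate it regardless of IP, name,
# or MAC changes. Paths and names are illustrative assumptions.
import os
import tempfile
import uuid

def get_or_create_host_id(marker_path: str) -> str:
    """Return the ID deposited by a previous scan, or mint and deposit one."""
    if os.path.exists(marker_path):
        with open(marker_path) as f:
            return f.read().strip()      # seen before: correlate to prior scans
    host_id = str(uuid.uuid4())          # first visit: create a unique ID
    with open(marker_path, "w") as f:
        f.write(host_id)
    return host_id

# Simulate two scans of the same host
marker = os.path.join(tempfile.mkdtemp(), "scanner_host_id")
first_scan = get_or_create_host_id(marker)
second_scan = get_or_create_host_id(marker)
print(first_scan == second_scan)  # prints True: scans correlate via the ID
```

Because the ID lives on the host itself, correlation no longer depends on any of the drifting characteristics the talk describes.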

Well, actually, I wonder who you're working for, because there is a vendor that does that. And that's cool, but again, that's assuming the reason the IP changed was because it just got a new IP due to DHCP, and that works great for that case. But for the case of servers that I mentioned, where the IP address is changing but it's not in a DHCP range, you're not going to see that in your DHCP logs, if that makes sense. So, very good point, yeah.
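The caveat in that answer boils down to a simple check: an IP change can only be explained by DHCP lease logs if the new address actually falls inside a DHCP pool. A minimal sketch, with made-up pool ranges:

```python
# Sketch of the DHCP-log caveat: statically readdressed servers won't
# appear in lease logs. The pool range below is an illustrative assumption.
from ipaddress import ip_address, ip_network

DHCP_POOLS = [ip_network("10.0.100.0/24")]  # assumed workstation lease range

def explainable_by_dhcp(new_ip: str) -> bool:
    """True if the readdressed host could show up in DHCP lease logs."""
    return any(ip_address(new_ip) in pool for pool in DHCP_POOLS)

print(explainable_by_dhcp("10.0.100.42"))  # True: workstation re-lease
print(explainable_by_dhcp("10.0.5.7"))     # False: statically readdressed server
```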

You know, it's been a while. It's large; I want to say it's definitely over a million endpoints, over a million hosts, and we actually subdivided it into large enterprise, medium, and small. This one I'm showing is actually a medium enterprise; it's kind of like taking the average of the two, because the changes are actually different for the different sizes, which was interesting to us. But yeah, any others? Thank you so much, appreciate it. [Applause]