
Joe Gray - NetflOSINT: taking an often-overlooked data source and operationalizing it

BSides Knoxville · 48:05 · 141 views · Published 2022-05 · Watch on YouTube ↗
About this talk
When we think Network Forensics, we often immediately gravitate toward packet captures (PCAPs) and logs from routing devices. There is no disputing the importance and value of either, but this leaves another source frequently overlooked - enter NetFlow. Many devices natively generate NetFlow or IPFIX, but do we really analyze the data? Many may be aware, but what if you were told that there are tools to extract NetFlow data FROM PCAPs? This provides a means of more efficient statistical and in-depth analysis using a variety of methods, with smaller files, to help gain context on what to query or follow in PCAP streams. This presentation includes demonstrations in Microsoft Excel, ELK, and Jupyter notebooks to provide a simple jumping-off point for integration into other aspects of an investigation using OSINT vectors.
Transcript

It is time for our first talk. Our first speaker is Joe Gray, presenting NetflOSINT (I think I'm supposed to pronounce it like that): taking an often-overlooked data source and operationalizing it.

…gripping your pillow tight. So, thanks for coming to the talk. We've already had the mic drop, so my job here is done. All jokes aside, thanks for coming out, and thanks for coming to the first BSides back in person. It's so good to actually have people nearby and not be communicating via Discord and GoToWebinar and Zoom and all of those fun things. Please have fun and be safe, but safety third, of course. That being said, welcome to NetflOSINT: taking an often-overlooked data source and operationalizing it. Anyone who knows me knows I like clickbait titles, so that's kind of what this is, and we'll get down to the meat and

potatoes of it in a moment. About me: I'm the founder and an instructor for The OSINTion; by day, I'm a threat hunting and intelligence engineer at Mercari. I wrote Practical Social Engineering, which, if you're interested, I'll be selling downstairs once this is done. I also like to compete quite a bit: I'm the 2017 DerbyCon Social Engineering Capture the Flag winner and a member of the OSINT Search Party winning teams at DEF CON 28 and 29. Moving into the actual context of the presentation: why this presentation? When we think network forensics, what immediately comes to mind? PCAPs, right? "PCAPs or it didn't happen" is a very common phrase that we hear,

and PCAPs are treated as universal truths. That's what we normally gravitate to, because of the additional rich context you get from a packet capture, and I'm not going to stand up here and dispute the validity of a PCAP in terms of importance or value. But that leaves another source overlooked, and that other source is NetFlow. Honestly, a lot of devices natively generate NetFlow, but do we really even look at it? Does it become just another data source, just another log source for us to import into the SIEM so that we can make SIEM companies that charge by data in and data out exponentially rich?

Is that the case? So, the agenda for today: we're going to talk about the basis of network forensics; we'll talk about what NetFlow is; we'll do a little comparison between packet captures and NetFlow; then comes extracting (I struggled with whether to use the term extracting, deducing, or inferring) NetFlow from PCAPs; and then we'll talk about analysis and next steps, and the next steps are where the word OSINT comes into play. On the basis of network forensics: we typically have three main sources, and those are your logs, your packet captures, and NetFlow. When we talk about logs, by definition they are evidence generated chronicling a

sequence of events from something like, say, a firewall, a switch, or a load balancer. Either way, a log memorializes that an event occurred. And yes, there are problems with logs: they can be overwritten, and they can be altered, depending on who has access and what capabilities they have, which would be fairly sinister; you also have problems like timestomping that can affect the validity of a log. Packet captures are the collection of network data, in bytes and bits, that allows further analysis, reconstruction, and extraction, even going as far as carving files out, if that's what was occurring. And then you've got NetFlow, which is metadata: data

about the data, data to explain the data. To say that BSides Knoxville is on May 13th of 2022, that would be a log memorializing the event of today. Saying that at 9:06 a.m. the temperature was 66 degrees Fahrenheit (that's what I was looking at on my watch) is also an example of memorializing, but it's also data about the data: data about today. You could consider that metadata of sorts; it's still an irrefutable data point, but it's data about the data. And please do not play a drinking game consuming alcohol every time I mention the word "data." I will say that Fort Sanders and Children's Hospital are

relatively close, and we're not far from the University of Tennessee, but I do not want to be the cause of a congested emergency room, so please do not play a drinking game with that. That said, you can ascertain a lot of valuable insight from NetFlow, from two perspectives: east-west, which would be your lateral-movement communications between two internal hosts, and north-south, which would be internal to external or vice versa. So here are some of the pros and cons, and I've already touched on some of this. Logs are great, and they are considered universal truths, until they aren't: until they are stomped, until they are altered, until they are

corrupted, until they are deleted. Anyone in here who works in consulting: how many times have you gone into an assessment or a consulting engagement only to find out that logging is not properly configured? If you do incident response, I'd say you probably see that more often than if you do pen testing, right? Because, and this is something I've learned doing this full-time for about the last five years, if there's not an explicit directive to say "thou shalt do this, this particular way," it typically is not even, I wouldn't even say, minimum viable product,

not even minimum viable security, right? If you want an example of that, look at the DMARC records for any given organization. Unless they're operating in the federal government, which is covered by Department of Homeland Security Binding Operational Directive 18-01, which basically says you must have DMARC and it must be set to reject, go look at DMARC records: you're going to see so many that have a policy set to none. I can almost guarantee you that, more than likely, the reason it is set that way is that someone said you have to be compliant with this framework, and this framework says you have to have a DMARC record. It doesn't say you have to be in

quarantine or reject; it just says you have to have a record. So you're checking the box, but it accomplishes nothing, right? Logging is really no different, because the thing about logging is that you have a coefficient of storage, time, and security that you also have to maintain. When I'm giving OSINT training and talking to people who are just coming up through the ranks in OSINT, I have this conversation quite frequently: if you collect it, you are obligated to secure it; if you collect it, you've got to store it. It doesn't matter where you're storing it; you're going to have to pay for that storage in some way, shape, or form.

Eventually that five-terabyte external hard drive you have is going to fill up; eventually you're going to have to pay your monthly cloud-storage bill. It's going to cost money, and then, on top of that, you've got to secure it. So that's a blessing and a curse, a pro and a con if you will, as it relates to logging. You also have things like deletion and timestomping, and honestly, log files vary in size: depending on the size of your architecture, you may have exceptionally small log files or you may have tremendous ones, so keep that in mind. Packet captures are excellent, and they are also

considered universal truths, but they are somewhat short-sighted at times due to things like encrypted protocols. If you don't have the key to be able to decrypt encrypted communications, it's useless, right? You can see that the traffic occurred; you can see, "oh well, this client was connected to a VPN," but you can't see what was going on inside the VPN, and is that really going to help you? The big con with packet captures, I would say, is the file size, because if you're transferring a four-terabyte file, guess how big your packet capture is going to be: at least that size. So it's cumbersome to store and secure something of that size.

And honestly, using Wireshark, it's kind of difficult to natively do statistical analysis. You can do a lot of analysis; I am not going to tell you that Wireshark is not worth its weight in gold, and in fact I'll say it's probably worth my weight in gold, since software really doesn't weigh anything, right? But that being said, you can't see the complete picture, and that leads us to NetFlow. It's often not used, because the philosophy is "we've got logs, we've got PCAPs, why do we need this?" But it's often comparable in size to log files, sometimes a little bigger, sometimes a little smaller, and definitely not the size of a packet capture. You do

have less visibility, I'll put that out there for you right now, but it is easy to analyze and to perform that statistical analysis. In talking about NetFlow, we'll hit a brief history and then talk about NetFlow versus IPFIX. NetFlow was introduced in 1996 by Cisco to collect traffic information as it traverses an interface. As with many other things, it was designed for troubleshooting, and it's still used for troubleshooting, but look at how many other things we use in a security context that were made for troubleshooting. For all you pen testers here, factor in that PsExec, if you're living off the land,

was created for troubleshooting and remote system administration; it's the use versus the abuse case. NetFlow requires three components: the exporter, the collector, and storage. Those are the three elements you need. It's not going to affect your files, but if you're going to set up "proper" NetFlow within your architecture, that's what you're going to need: a device to export it, a device to collect it (possibly your SIEM), and of course some place to store it. Comparing NetFlow to IPFIX: we have several versions of NetFlow. Version 5 is probably the most common; through my research, I found that version 9 is probably the second most common.

Basically, what varies between them is the names of the fields, and some versions have more fields than others, so you have other potentials for analysis. Technically, IPFIX has nothing to do with NetFlow, but at the same time it is also technically called NetFlow version 10: one of those weird legacy things, right? So here are your standard fields for NetFlow, pretty much universal across the board. If you've played with packet captures, these probably look familiar: time, source and destination IP and port, the IP protocol, the type of service, and bytes (or packets, or octets) sent and received.

As we look at this, going back to what NetFlow is, here's something to help us understand it a little more. Think of it this way: NetFlow is like Kleenex, or Q-tips, or Google. We have facial tissues, we have cotton swabs, and we have search engines, but I was on a sales call about ten years ago with someone trying to sell a search appliance, a competitor to Google, and they used "google" as a verb when discussing their own product. It's kind of the same thing with NetFlow: you have NetFlow, you have IPFIX. At the end of the day, kind of think of it like

a hammer: if you're a hammer, everything's a nail, so if you're doing this, everything's "NetFlow." You can get into the weeds; if you're talking to network administrators and network engineers, they're probably more apt than many to get bent out of shape about whether it's NetFlow or IPFIX, but it's a semantics thing. There are some RFCs that discuss it, but it's early in the morning (normally I would not be awake this early; my workday normally starts at 11), and I'm not going to bore you with RFCs and put you to sleep this early. That would be more appropriate for closing time.

So, comparing PCAPs and NetFlow: NetFlow abbreviates the story. It is the TL;DR, the macro perspective of the story. You're going to have smaller file sizes, meaning you're going to be able to store more, it's going to be easier to move around, and it's easier to give others access to it. Honestly, it's fairly easy to import into pretty much any SIEM, and it also comes in CSV format. The beautiful thing about CSV is this: "Oh well, I'm just doing independent research. I don't have a SIEM; I don't have the ability to stand up something like ELK or AlienVault OSSIM or Graylog. How do I

do this?" It's a CSV: do you have Excel? Do you have Python and pandas? Do you have OpenOffice? There are a lot of ways you can analyze it, which is good. But it doesn't include the packet contents. Even if we have a connection stream over HTTP and you transfer a file, just a text file that says "hello world," with a packet capture you could carve that out; with NetFlow, you're not going to be able to. But at the end of the day, it's still valuable. Moving to packet captures: they tell the whole story. It is the Tolkien version of the story: you're going to have 30 pages describing what the trees smelled like,

and you're going to have 19 hours of movies of walking around. I'm not a big Lord of the Rings fan, but either way, it's going to tell the whole story. It's not going to be the CliffsNotes that you read three days before your AP English 4 exam about Lord of the Rings. The CliffsNotes, or NetFlow, will tell you that there were trees, and it may tell you that the trees had a scent, but if you want to know the whole story, you've got to go the other route, and that's basically packet captures. As I said earlier, there's a philosophy prevalent across the industry: "PCAPs or it didn't happen." I can't disagree with that.

Packet captures also allow things like stream filtering, so you can hone in on a particular line of conversation between two hosts, and they allow carving; but again, you have that larger file size. They're also valuable, which is good, because that's what we're trying to do: bring value. Continuing on, we're going to talk about how we would go about extracting, or, the term I like, inferring (I stuck with "extracting" in the presentation, but it's not truly extracting so much as it is inferring) the NetFlow from PCAPs. Say you come across a packet capture and you don't have access to a NetFlow interface, or

you're doing some Google-fu, you run a Google dork for something with filetype:pcap, you come across a capture, and you think, "Hey, I might want to analyze this as if it's NetFlow." You wouldn't have access to the interface that captured it, so how would you go about it? Well, I've got some tools and scripts that I'm going to hook you up with. What we'll need for this: basically, you need your packet capture. If you want packet captures to use for this, the one I'm going to use in the admittedly pre-recorded demonstration came from Active Countermeasures; they have a thing on their website

called Malware of the Day, and basically it's there to help you get used to doing various types of analysis or hunt-type activities. You can download those PCAPs and run them through the tool to test it that way. If you're doing any CTFs, like in the Intelligence Village: there are no packet captures today, so sorry, you can't test it there, but anywhere you have a packet capture, you can do this. You'll need a Linux host with SiLK. If you use the SANS SOF-ELK system, I believe it's native there; it may also be native on regular SIFT, I'm not 100% certain, but you can just install SiLK wherever

as well. Optionally, for analysis, you could use Excel, OpenOffice, any of the above. You could also use something like ELK; in the pre-recorded demo, I actually have ELK as part of that. And then you've got Jupyter Notebook with pandas; unfortunately, that was the one thing that was going to be a live demo today, and wouldn't you know, I ran an update and it fell over and went boom. But we can still talk through it. To do this, basically, here's the script for it. If anyone wants it, I'll share my contact info at the very end, and you can either stop by the Intel Village or shoot me a

message on Twitter or LinkedIn or whatever, and I'll give you these commands, no big deal. Basically, within SiLK, you use rwptoflow with the path to the actual PCAP, and then you have to give it an output path; that's going to create a file of type .rw. From there, you run the next command, rwcut: set the delimiter to a comma so you have a comma-separated-value CSV, choose your timestamp format (in this case, I'm going with ISO), then give your output path, which is the name of the CSV file you're going to create, and then your .rw file. Once you're done with that, you have a CSV that you can analyze using any of the above.
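
As a concrete version of those two commands, here is a minimal sketch that wraps them in Python. The option spellings (--flow-output, --delimited, --timestamp-format, --output-path) follow SiLK's documented rwptoflow and rwcut flags, and the pcap file name is a hypothetical stand-in for the Active Countermeasures capture, so verify both against your own installation:

```python
#!/usr/bin/env python3
"""Sketch of the pcap -> SiLK flow -> CSV pipeline described in the talk.
Assumes SiLK's rwptoflow and rwcut are on PATH."""
import subprocess
from pathlib import Path

def pcap_to_csv(pcap: Path, workdir: Path) -> Path:
    workdir.mkdir(parents=True, exist_ok=True)
    rw_file = workdir / (pcap.stem + ".rw")
    csv_file = workdir / (pcap.stem + ".csv")

    # Step 1: infer flow records from the packet capture.
    subprocess.run(["rwptoflow", str(pcap), f"--flow-output={rw_file}"], check=True)

    # Step 2: dump the flows as comma-delimited text with ISO timestamps.
    # rwcut's default columns are sIP, dIP, sPort, dPort, protocol,
    # packets, bytes, flags, sTime, duration, eTime, sensor.
    subprocess.run(
        ["rwcut", "--delimited=,", "--timestamp-format=iso",
         f"--output-path={csv_file}", str(rw_file)],
        check=True,
    )
    return csv_file

if __name__ == "__main__":
    # Hypothetical file name standing in for the Malware of the Day pcap.
    print(pcap_to_csv(Path("evilosx_1hour.pcap"), Path("netflow-out")))
```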

And I accidentally duplicated that slide, so now we're up to the demonstration piece; let me shift my displays momentarily and we'll start here. This is basically an instance of SiLK: I stood up a custom host, this DFIR-techniques host, during my period of unemployment, while tinkering with various forensics and incident response tools, so I installed SiLK there, and ultimately this is just running through it. Right here is that rwptoflow: for PCAPs, we're using the EvilOSX one-hour pcap, and basically we're just putting

it in a different directory, keeping the file name intact with .rw. Do that; cool. Oh, I messed up, wouldn't you know; still messing up; there we go. At this point we've executed the rwcut, delimited with the comma, timestamp format ISO, output path set to the EvilOSX CSV, and then there's the .rw file. At that point we have it and we can verify it. Here are all of them; as you can see, I put all the .rw files into one particular location, which is what caused the hiccup with the command earlier, but basically, we have the CSVs and the .rw files as well,

and that's really all there is to doing the actual conversion. Moving on from there, here we have ELK. I will tell you, I appreciate ELK, but standing up an ELK stack (Elasticsearch, Logstash, and Kibana) from the ground up can be a major pain in the derriere. Major. So if you're going to test this out: what I did was sign up on Elastic's website for a 14-day free trial. Be careful if you go past that 14-day free trial, because I also had all my honeypots connected to a free trial, and once I got out of the free-trial period, it was supposed to cost me $50 a

month; 13 days into the month, my bill was $375. Be careful what you connect to it. If you're only doing this kind of stuff, you're probably safe; if you connect it to a bunch of honeypots hanging out on DigitalOcean, probably not the best of ideas. My wallet learned and overcame; fortunately, that was before I was unemployed. Anyway, to do this, it's actually fairly simple: you literally just say "upload a file," and, wouldn't you know, there's a NetFlow file. This is what the CSV looks like: start time, source IP address, source CC, destination IP address, and so on. I don't need to read all of them, but basically, we are able to see what this looks like.

Each entry right here has your date and your time, and from there you have your IP address. CC is country code; in this case, that's an additional field that this particular file has, so they've used something like GeoJSON or some other geolocation mechanism to say this IP address is assigned to, say, the United States, an organization in the U.S., or, as we see over here, Brazil. Then we have the actual protocol, the source port, and so forth, and we can go all the way across. Some of these you're going to see the same; some you're not going to see the same.

It's going to vary depending on what inputs you have. If it's coming from native NetFlow, you're more likely to have something like this; if it's coming from SiLK, it's going to be like sIP: lowercase s, then IP in uppercase. It's going to vary, but ultimately, as with anything, once you do the analysis, once you run that initial script, you need to validate what you have anyway, so that you know your field names and your headers; just go through and check to see what you have. From there, we can scroll through and look at all of this; this gives us everything as it's put into ELK. We give

it an index name. If you're using SOF-ELK, for example, it already has ELK built in (it's SOF-ELK, and it was built for network forensics), but the caveat is that importing stuff directly into it is a little trickier than with a generic ELK instance, because you have to match your field names up and make sure that everything is compliant: they've already predetermined the indexes (or should I say indices; either way), and they've already got the dashboards built. So if you're doing it on your own, you'll have to build your own dashboards,

but it's pretty much point-and-click; it's not terribly challenging. From there, I try to start everything with "netflow-" so that I can use the same dashboards repetitively, because I try to work smarter, not harder. Try. Then, let me skip ahead a little bit: I ended up naming it demo one, and it goes through. It's going to process the file, create the index, and set up your ingest pipeline, because at the end of the day, ELK was built to receive files on the fly, not just whenever you upload them; but in this case, we're only using it for the upload portion.
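
If you would rather script that step than click through Kibana's file-upload screen, a minimal sketch with the official elasticsearch-py client might look like the following. The cluster URL, CSV file name, and index name are assumptions; the index name just mirrors the talk's "netflow-" naming convention:

```python
"""Minimal sketch: bulk-index the rwcut CSV into Elasticsearch without
the Kibana upload UI. Assumes a reachable cluster at localhost:9200."""
import csv
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def actions(csv_path: str, index: str):
    # One document per flow record, with the CSV header as field names.
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield {"_index": index, "_source": row}

helpers.bulk(es, actions("evilosx_1hour.csv", "netflow-demo1"))
```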

So it's going to do that; sometimes it takes a little bit, and now we have everything there. It's just going to search across here, and this is the same data; this is just exploring it, using the Discover tab to look at it. You can do this exact same thing in Excel; you can do the exact same thing from the Linux command line. It's not going to be as pretty; it'll be just as delimited, but it won't be as aesthetically pleasing. But you could still do it. And from there, you can literally run through, have a look around, and verify;

verify the fidelity of your data. You want to make sure that you've got good-quality data and something's not flubbed up. Like whenever I pull certain things from an API, such as one of the data-breach APIs that I connect to: it never fails that somewhere in the usernames field is a file hash, and somewhere under "address" is probably a password. So you've got to go through, take a look at it, clean it up, and make sure everything's correct. Anyway, we go through creating some visualizations here; like I said, it's literally point-and-click. Right here, we've got client IP addresses. With this particular field, you've got client IP address, but then

you've also got source and destination; client IP address takes both source and destination into account. But say, for example, you're working in a SOC, you've got NetFlow data, and they say, "Hey, we just got smacked with a denial of service. Who are the top five chattiest hosts?" You can find that out with NetFlow, or, I'm sorry, you can find that out with a PCAP, and honestly, your SIEM should be able to tell you this. But maybe you don't have a SIEM, or you don't have access to the SIEM, or you might just want a challenge; you might be chasing one of those promotions or something to that effect, or you might

want to demonstrate your ability to automate or think outside the box. Here's a way for you to do it: you can accomplish the same thing here, and there you go, there are your top five talkers. Boom, we have this. Now, the next time I import NetFlow, I can change it right over here, from "netflow-demo" to "netflow-" whatever I named the next one, and it's populated. So now we see clients and destinations. What are we going to do next? Oh, ports. What happens when you see a bunch of port 500? Any ideas? It's a VPN. So it's going to tell you just as much as a

PCAP will, except it's a smaller file, and it can give you a little more information, because if you're filtering by a particular port, say, for example, port 500, you can actually look and see who patient zero is. So you can do your basic analysis to find out the root cause of things. I will say, with ports, because it's a numeric value, at times ELK will try to add all those numbers up for you. It's like: I know we don't have a port 73814, unless there's a new RFC that I've not heard about. Oh, it's IPv7.

I'm totally saying that in jest. But right here, number of octets: that's one way to look at it. Cool. Oh look, right here we have a median. Think about this from a statistical-analysis perspective; let's go back to high school or college stats. We're not going to go into anything really disgusting like confidence intervals. You've got three measurements that make the most sense for you: mean, median, and standard deviation. The mean is going to be your average: of all the data points, it's the one value that most generally describes the tendency of all the other data points. Then you've got the median, which means that, across the board, this is the most

central data point. In some cases, if we're talking about a perfect bell curve, the mean and the median should be the same, but this is real life; we don't have perfect bell curves (well, we do occasionally, but not perfect ones), so as a byproduct, there's a difference. And finally, you have your standard deviation, which speaks to the general tendency to deviate from the mean, so it shows you how much variance you have: a very high standard deviation indicates there's a lot of variance; a very low one indicates that everything's clustered together pretty nicely. So you have those three measurements that you could use here. In this case, we have the

median number of octets, which indicates that, in this PCAP, the median is about 114,000 octets, or bytes. All right, we have that; cool. There's the median again, just shown a different way. The social engineer in me says everybody interprets and learns differently: some people learn better visually, some learn better by reading, some audibly, some by seeing, so show it as many different ways as you want. Now look, I'm creating a new one: that's the average, so now we're getting the mean. That's pretty different, wouldn't you think? Quite a bit larger than 114,000. Without going deep into the actual PCAP and what was really happening:

that lower number for the median was indicative of beaconing, basically DNS requests for TXT records, which had a very small packet size; that drove the number down. In terms of the higher average, there were several file transfers that were very large, which offset it. So the beaconing drove the median down, and the large file transfers drove the average up. Oh, and look, you can drag and drop and move things around, do all the hokey-pokey you want. So we can edit this; in this case I said "client," so let's change that to "source." It's not final, right?
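
A toy illustration of that skew, with invented byte counts rather than anything taken from the actual pcap: many tiny beacon flows hold the median down, while a handful of big transfers drag the mean up.

```python
"""Why beaconing and bulk transfers split the median and the mean."""
import statistics

# Lots of small, DNS-TXT-sized beacon flows plus a few big file transfers.
octets = [120] * 95 + [50_000_000] * 5

print(statistics.median(octets))  # 120    -> dominated by the beacons
print(statistics.mean(octets))    # ~2.5M  -> dragged up by the transfers
print(statistics.stdev(octets))   # large  -> lots of variance
```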

Then, with this, you also have "Other." I always like to turn "Other" off, because with a large data set, that "Other" bucket is going to skew it pretty hard. So now we have source, and we can look and say: oh, you know what, our number-one top-talking source is also the top-talking destination. What's going on with this? Hmm. That's also an internet-facing IP address. Enter the OSINT of NetflOSINT: you now have an IP address to go snooping on, a target of interest for, say, Shodan or urlscan or any of the tools du jour, or, honestly, Google. So you have all these mechanisms that you

can poke around with and find stuff. And then, oh, you want to filter it? Cool. Oh, actually, you know what, I lied: here's some Jupyter, but it's recorded. If I were a fan of the band Train, I would make a reference to "Drops of Jupiter," but I really lost interest when "Hey, Soul Sister" came out; I would make a joke about my sisters not having souls, but anyway. This is what it looks like in Jupyter. We're importing the CSV file into Jupyter, which is basically a web-based Python interface that allows you immediate gratification; that's why I like Jupyter. I've actually started writing all of my OSINT tools in Jupyter now, because you can test things out, and when

you're debugging, you don't have to put "print: works to here number one, works to here number two, works to here number three"; you see the exact line, and you're able to edit the data on the fly without having to re-execute the entire script. You can copy and paste it and move on, so it works out really well. In this case, this is that same CSV, input into a Jupyter notebook. With Jupyter, you can also analyze it using the R language or the Julia language; I know a little bit of R, not so much Julia, but that's where the word "Jupyter" comes from. So with this, I imported it into a pandas

DataFrame. A pandas DataFrame is basically an Excel spreadsheet in memory, existing in Python; it's very popular in the data-science world, and I'm a huge fan of it. Anyway, what I've done here is iterate across the rows, and I want to know, in terms of the destination IP address, the top four talkers, hence the ":4" slice, and right here we have them. I apologize that it's not showing well on the projector, but when you're doing this from a browser, it's fairly easy to work with. Same thing with source: we get that same data. For the mean, it's literally just .mean(), or

.median(), or .std() for standard deviation. From that, we can do value counts: right here, doing the value counts tells you, on a per-IP basis, how many entries there are for each individual IP address. This is giving you context. Can you get this from a PCAP? Yes. Is it easy? Not always, because you have to filter it on strings and do this and do that. This, right here, natively: poof. You could have all the queries you want built into a Jupyter notebook, import that CSV, literally go up here, click Run across all the cells, and your analysis is done.
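
A sketch of that notebook, assuming the CSV came out of the rwcut step earlier; the column names (sIP, dIP, bytes) follow SiLK's defaults, so adjust them to whatever your header row actually says:

```python
"""Sketch of the pandas analysis shown in the demo."""
import pandas as pd

df = pd.read_csv("evilosx_1hour.csv")

# Top four destination and source talkers, per the ':4' slice in the demo.
print(df["dIP"].value_counts()[:4])
print(df["sIP"].value_counts()[:4])

# The three summary statistics from the slides.
print(df["bytes"].mean(), df["bytes"].median(), df["bytes"].std())
```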

With that done, you can go back to digging through the PCAP, going through the logs, silencing the alerts, contacting management, triggering incident response: all the things you want to do, all the things you should be doing, and I'm not telling you not to do them. I'm just saying this is another step you can use to gain additional context. As a byproduct, we've got all of this; cool, that's the same IP address we saw before. Oh, and now we're back in ELK with it, so we get that exact same thing. We're able to filter, and when you apply that filter, you can run that same analysis and say: okay, with this host, what are the top five hosts that talked to this host, and what

are the top five hosts that this host talked to? If you know, for example, that within a particular event this host was compromised at such-and-such time, you could set a filter for around that time and then filter based on those hosts. If you're trying to triage to determine which host you should either take off the network or investigate next, you now have context. You can do it with logs as well, but really, with a log, do you care that the compromised host literally just pinged the other host, or do you care that it connected to the host via RDP or SSH, right?
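
A hedged sketch of that triage query in the same pandas notebook; the victim IP and time window are hypothetical, and the column names again assume rwcut defaults:

```python
"""Scope flows to a window around the suspected compromise, then list
the top peers of the host in each direction."""
import pandas as pd

df = pd.read_csv("evilosx_1hour.csv", parse_dates=["sTime"])

victim = "10.0.0.23"  # hypothetical compromised host
window = df[df["sTime"].between("2022-05-13 09:00", "2022-05-13 10:00")]

# Top five hosts this host talked to, and top five that talked to it.
print(window.loc[window["sIP"] == victim, "dIP"].value_counts()[:5])
print(window.loc[window["dIP"] == victim, "sIP"].value_counts()[:5])
```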

Pings can be important, but they don't provide you the full context, and this is going to give you more of that. And again, you can literally just tinker around with this however you see fit; there's no right or wrong answer, as with many things here, quite honestly. The beautiful thing about having the CSV is that you could do it natively in Excel, you could put it in ELK, or you could run the Jupyter notebook. But most importantly, I don't have to tell you which way works best; I don't have to make that decision. You do, because I don't know your organization, just as much as you don't know mine,

unless you maybe sell some clothes on Mercari. At the end of the day, what it boils down to is that you can make it your own: you can build your own queries, you can have your own dashboards, and honestly, you can make yourself look like a rock star doing some of this. So with that, let me get back to PowerPoint.

So with that, we've looked through it. Just to reiterate: you've got Excel, which you already have, right? We've already talked about this. With ELK (and I'm not saying that Excel cannot provide visualization, or that Jupyter cannot provide visualization; both can, it's just a little bit heavier of a lift), you've got the native visualization capability, and if you're using ELK as a SIEM, you can import log data and other data points to integrate there. If you go with Jupyter, if you've got the Python chops, it's a little bit more complicated, but

you're going to get better statistical analysis, it's easier to repeat, and you basically reuse the same notebook for multiple PCAPs. That leads to the next steps. The next steps, as I stated before: you could OSINT the planet, and start dropping things into VirusTotal or OTX or just your favorite threat-intelligence IOC/TTP feed of choice, whatever you want there; a sketch of that step follows.
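
As one concrete version of the "drop it into VirusTotal" step, a minimal sketch against VirusTotal's v3 IP-address endpoint; the IP here is hypothetical, and the API key is read from an environment variable you would set yourself:

```python
"""Look up a flow's suspicious top talker in VirusTotal (API v3)."""
import os
import requests

ip = "203.0.113.7"  # hypothetical internet-facing top talker from the flows
resp = requests.get(
    f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
    headers={"x-apikey": os.environ["VT_API_KEY"]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["attributes"]["last_analysis_stats"])
```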

But also, importantly, it can make for some really darn good things to put into your reports, and that's where you're going to look like a rock star, because I said this in conversation yesterday, I meant it, and I mean it even now: all the stuff we just talked about, it's all for show, right? What do we get paid for? The reports. Doing OSINT investigations, I've done some pretty wild stuff; I've found some stuff that quite frankly terrified me. But you know what? The client didn't care about how I found it. They cared that I had a picture of it and some sort of irrefutable proof saying that this was what the case was, so that they could do whatever they needed to do. The same thing exists here: having all the cool tools is great, but at the end of the day, we do the tools for the show, reports for the dough. And I didn't make

that up; I really wish I did. That actually came from John Strand, so I really wish I'd thought it up, but here we are. Basically, that's what you can do with it: you can take the screenshots from ELK, you can take those data points from Jupyter, and you can even create the same visualizations in Jupyter, write them to a file, and import them as well. It's going to help; going back to what I said, some people learn by hearing, some by seeing, some by doing, some by reading, and this allows you to cover that across the board as much as possible, so that people can get the

maximum comprehension from it, and that's ultimately what we want, because I've arrived at the conclusion that everybody working in infosec basically works in some form of intelligence, right? And the purpose of intelligence, at the end of the day, give or take whatever you want with it, is to assist in decision making. So our deliverables, our products, our reports should inform, in the most unbiased way possible, to assist in decision making. Having maximum comprehension is key. With that being said, I'll open it up for questions, and while it's open for questions, here's my contact information. Any questions, concerns, complaints, grievances, thoughts, opinions, or otherwise? Not that I've ever rehearsed that phrase.

Okay, I'll leave this up for a moment. With that being said, we've got the book signings in the Intelligence Village downstairs; Corey and I are both selling our books and signing them, and the Intelligence Village CTF will officially kick off at 10. The only thing you need is an internet connection; it could be on a laptop, a phone, or a tablet. We're in Knox County, so I don't think you have too much to worry about with IP over avian carriers; if we were a little bit closer to, like, Monroe County, it might be a little bit of a problem. I joke because my mom lives down in that area and has DSL, and it's awful. I was

like, you've got to get your neighbors to stop hunting your IP-over-avian carriers; your neighbors keep shooting your carriers out of the sky. But anyway, that's that. If there are no questions: thanks, everyone, for coming out, thanks for your patience, and have a blast. If I can be of any assistance, I will be in the Intelligence Village pretty much all day. Thanks. [Applause]