
Next-Gen Detection: Harnessing LLMs for Sigma Rule Automation

BSidesSF · 2024 · 37:59 · 992 views · Published 2024-07 · Watch on YouTube ↗
Speakers: Dave Johnson
Category: Technical
Style: Talk
About this talk
Next-Gen Detection: Harnessing LLMs for Sigma Rule Automation. Speaker: Dave Johnson. Explore the frontier of detection engineering in this talk, which delves into using LLMs for automating Sigma rule generation. We'll examine approaches like RAG, fine-tuning, and prompt chaining, comparing their effectiveness in streamlining threat detection. https://bsidessf2024.sched.com/event/94140a2c82e965b3c8d704f2e3f833df
Transcript [en]

Today I'm really excited to introduce Dave Johnson. He's a threat intelligence advisor with Feedly, and as you can tell by his title, "Next-Gen Detection: Harnessing LLMs for Sigma Rule Automation," and based on the number of folks that have turned out here today, I think we're in for a real treat.

All right, thank you very much. Hello. This is my first time presenting at a conference like this, pretty exciting stuff. I actually came here to watch a movie... no, I was going to be presenting on LLMs. Yeah, I don't do standup. But anyway, you came to the right place if you're learning about detection rules

and how to incorporate LLMs into those pipelines. The whole point of my talk really started from an experiment: a while ago I was experimenting with ChatGPT, trying to create Sigma rules (I'll explain what Sigma rules are, too), and it got me thinking that there has to be a better way to automate this and end up with better-quality detection rules. So that's the whole purpose; it's exploratory. I do have a GitHub project that you can reference to play around with the stuff I talk about in this presentation. But yeah, so if you want

to watch a movie, you'll be sorely disappointed. So, who am I? My name is Dave Johnson. I'm currently at Feedly, which is basically a research tool for threat intelligence and market intelligence. I'm a former FBI analyst; half my career was with the government, where I worked cybercrime and APT groups. It was really fun, and I was actually based in Wisconsin, which is kind of a strange place to work in the FBI, but it was pretty remarkable, and we had some really cool things going on. Altogether I have about 15 years of CTI experience. I do have a website, which is pretty poor, but if you want to check it out it's daveinthemiddle.com, and my

email is dave@feedly.com. My whole objective in this talk is to keep you awake, but also to educate you about how to use LLMs and where they fit into detection rule generation. We're going to talk about lots of different ways to use LLMs to create Sigma rules, and there's a common theme throughout this presentation: the quality of the input data. A lot of AI work focuses on the end result, the later stages of the pipeline, but really it's about data management; once you get the data right, you'll be able to make much more

effective use of LLMs. I want you to weigh the pros and cons of each LLM strategy. There are basically three that I'm going to talk about: retrieval-augmented generation, fine-tuning, and prompt chaining. I'll explain what each of those strategies is, in case you're completely new to this, and how they fit into detection rule creation. An important part of this is that with detection rules, you don't just ship them out and call it done: how do you validate the results? So I'm trying to use LLMs but also guarantee that there's some kind of quality check, too. And I think by the end of the

talk you'll identify some opportunities where you can expand this research. It's exploratory, and you can check out my GitHub project; the reference is below. And this is a bit of a teaser, in case you just ran out of the theater right now: check out this repo, it's kind of fun. I just published it today, so just search for "siggen"; that's how I roll. You can put in website URLs for threat intel reports, and it will scrape the content, extract attack procedure data, and actually generate Sigma rules using an LLM. I'll talk a little more about how this all fits together in a bit, but that's the teaser if you had to walk away right now.
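
For context while reading on: the flow he just described (URL in, scraped text, extracted procedures, generated rules) compresses to something like the sketch below. This is an illustration, not the repo's actual code; the model name and prompt wording are placeholders.

```python
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def rules_from_report(url: str) -> str:
    # 1. Scrape the threat intel report down to readable text.
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator="\n")
    # 2. Extract attack procedures from the text with an LLM.
    procedures = llm("List each attack procedure described in this report, "
                     "one per line:\n\n" + text)
    # 3. Generate Sigma rules for the extracted procedures.
    return llm("Write a Sigma rule (YAML) for each of these attack "
               "procedures:\n\n" + procedures)
```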

So: Sigma rules for the win. How many people have not heard of Sigma rules before? Okay, I'll go through that. Sigma, in case you don't know, was created by Florian Roth and Thomas Patzke circa 2016 or 2017, so it's relatively new, and I think it grew from the frustration of learning a whole bunch of different ways to search logs across different vendors. Sigma rules are intended to be a human-readable format for describing attack behavior in logs, and that's the real power: it's vendor-neutral.

There's a whole repository, the SigmaHQ repo, maintained I think by Florian (never met him, but I think he's probably pretty cool), and the whole thing is a community where you share knowledge; that's the intent of that repo. And I think with LLMs, it'd be a shame not to leverage them to make detection rules, and that sharing, a little bit better. So I hope at least this lets you know that this is something you can contribute to.
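
To make that concrete, here's a minimal, illustrative Sigma rule (hypothetical values written for this page, not a rule taken from SigmaHQ) showing the human-readable, vendor-neutral YAML format:

```yaml
title: Suspicious File Download via Certutil
id: 00000000-0000-0000-0000-000000000000   # hypothetical rule UUID
status: experimental
description: Detects certutil.exe downloading remote files, a common living-off-the-land technique.
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\certutil.exe'
        CommandLine|contains:
            - 'urlcache'
            - 'http'
    condition: selection
falsepositives:
    - Legitimate administrative use of certutil
level: medium
```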

So really the point is: how do we stay ahead of attackers? That's the whole objective. How do we stay more proactive, and how do we use LLMs? We've heard in the news that LLMs are creating malware and are able to orchestrate attack campaigns, and we're probably going to see that heat up as AI becomes democratized. Models may be centrally controlled by OpenAI and Anthropic, but we're seeing more open-source variants, and people fine-tuning LLMs to do bad things. So I think we're going to see a lot more misuse, maybe not with the really large models but with smaller, more specialized ones, and I think this is an important thing to focus on. So, my whole project: I started this a few months ago, well, actually not a few months ago, half a year ago, and I just started off

by just asking ChatGPT to make Sigma rules for me, and it was really hit or miss. It was terrible; it was trash. Do not recommend, unless you like trash, like that guy. So can you make Sigma rules from trash data? Of course you can, but why would you? It's a waste of time, a waste of effort, and it's not usable. The big thing is that in any type of Sigma rule generator, you need to have some kind of validation of the input and validation of the output; these are the guardrails you put on this whole automation process. So how did I do that? I ran quality checks

automatically with an LLM, and I used scoring criteria I developed to have the LLM assign a score to each attack procedure going into the pipeline. So here's an example, maybe a humorous one. Sometimes I like my LLMs to talk dirty to me; I don't know if that's a weird thing, but for this one I pre-prompted it with "talk to me like Dennis Reynolds from It's Always Sunny in Philadelphia," and it was mocking me. So: you can make Sigma rules very poorly if you don't have good inputs; that's the whole point of this slide. And it was making

fun of me, and I kind of liked that ribbing. But the whole point of this slide is to show you that if you have bad inputs, you're going to get bad outputs. Oh no, I really shouldn't do standup. So, gathering a dataset: that was the first step. How do I make Sigma rules, how do I get started? It all started from combing through over 200 different threat intel reports, and I used an LLM to extract the attack procedures from the text. I scraped a whole bunch of websites, and from 200 threat intel reports I got about 624 attack procedures. And you might think, hey, we're

off to the races, right? We've got tons of procedures, and they're all great. I mean, most of them were trash, because a report can describe a procedure, and it might be mapped to MITRE ATT&CK, but it still might not be specific enough; it might not have the technical details you need for it to be actionable. That's what I was looking for. In the gray text below, you can see the prompt I used to make sure the extracted attack procedures were up to snuff: basically, on a scale of 1 to 10, how does the procedure do against all these different criteria? Is there enough context for a detection engineer

to write a Sigma rule? Does the procedure address how the threat actor did the thing? If it doesn't describe how they obtained access to a server, it's not going to be overly useful. So you need some type of data-input quality check in order to create an effective Sigma rule. I'm getting ahead of myself here, but I got this dataset, and now I have a really good set of procedures. About a third of them were usable, so I had 183, roughly 200, different procedures I was ready to make Sigma rules against.
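
As a sketch of that input gate (not the project's actual code; the criteria wording, model name, and pass threshold are assumptions), the scoring step could look like this:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

SCORING_PROMPT = (
    "On a scale of 1 to 10, score the attack procedure below on each criterion:\n"
    "- context: is there enough context for a detection engineer to write a Sigma rule?\n"
    "- specificity: does it describe HOW the actor did the thing (commands, artifacts)?\n"
    'Respond with JSON only, e.g. {"context": 7, "specificity": 4}.\n\n'
    "Procedure:\n"
)

def score_procedure(procedure: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": SCORING_PROMPT + procedure}],
    )
    return json.loads(resp.choices[0].message.content)

def is_usable(procedure: str, threshold: int = 7) -> bool:
    scores = score_procedure(procedure)
    return min(scores.values()) >= threshold  # gate on the weakest criterion
```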

So how do I make the Sigma rules? That's the next part of the puzzle: I have all this data, and now how do I do the thing? There are different ways to use LLMs to create a desired output, and you've probably heard of these in the news if you follow LLMs: RAG, retrieval-augmented generation (I'll talk about what that is); another technique I used called prompt chaining; and at the end of the presentation I'll talk about fine-tuning. Some people think fine-tuning is the panacea of using LLMs: you fine-tune it against your data, and then you're off to the races again, right? It can be kind of difficult, and that's actually where I failed in this whole giant

experiment; I'll talk about why in a little bit, too. So the first method is few-shot prompting and RAG. RAG goes with this idea of few-shot prompting, and all that is, if you've heard about this with LLMs, is that the LLM is like a generally knowledgeable person across tons of different data, but it has no prior training on the specific task at hand. So what you do is preempt it with some examples, and we call those "shots." For example: 3 + 3 = 6, 5 + 5 = 10. And then, to build on that, you ask it what 2 + 2 is, and it

knows that you're asking it to add. This is a very simple example, but in the context of Sigma rules, you're basically giving it really good examples to draw from in order to make a very good Sigma rule. In order to do this, you're selecting examples, but you also have to tell it what to do and how to create the Sigma rule in the first place. There are a lot of good resources: the official rule creation guide by Florian provides a really good template and set of instructions on how to create a Sigma rule, and I think the template guidance

says the best way to do it is actually to use a prior example and build on that, which is kind of what this does: it uses this retrieval, this few-shot prompting technique, to select good examples of Sigma rules to feed into the pipeline. So what we're trying to do here is, with a good set of instructions, you do some quality assurance and you force the LLM to think. That's a common theme in all these different approaches: you want to have the LLM think through, logically, what to do, like a human being would. The more descriptive you are, the more logical you are, the better the results

generally are. So I do quality assurance checks and provide two to three known-good examples of Sigma rules. The other requirement is that you have to have an LLM with a large context window; that's kind of like its memory, how much it can retain from the prompt you're providing. If your LLM can only hold ten words at a time, it's not going to understand even one single Sigma rule. So I've talked about few-shot prompting and creating the instructions, but then there's this whole thing called RAG that you hear about all the time, retrieval-augmented generation, and this is how I'm selecting the examples:

what I did was select the very best examples from the SigmaHQ repo. I downloaded all the Sigma rules from the repo, used that as a database, and did an embedding-based similarity search. What that means is that if you ask a question about how to make a Sigma rule based on some input, it takes all those words and looks them up in an embedding space to figure out which rules would go along with that. So RAG selects the best rules to pull into the prompt. I think this will make a little bit more sense as I go along, too.
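
Here's a minimal sketch of that retrieval step using ChromaDB (which Dave mentions in the Q&A as the document database he actually used) with its default embedding function; the collection name and repo path are assumptions:

```python
import glob
import chromadb

client = chromadb.Client()  # in-memory; a persistent client also works
collection = client.create_collection("sigma_rules")  # hypothetical name

# Index every rule from a local clone of the SigmaHQ repo as its own document.
for i, path in enumerate(glob.glob("sigma/rules/**/*.yml", recursive=True)):
    with open(path, encoding="utf-8") as f:
        collection.add(documents=[f.read()], ids=[str(i)])

def best_examples(procedure: str, k: int = 3) -> list[str]:
    """Embed the procedure text and return the k most similar existing rules."""
    results = collection.query(query_texts=[procedure], n_results=k)
    return results["documents"][0]
```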

So, all together: I do have movie references, because it is a movie theater, and I thought that was quite fitting. You don't see enough Matrix references, and I'm very disappointed by that, because it was a very good movie and I love Keanu; I think everyone does. I think RAG really aligns with The Matrix, where Neo is learning kung fu: he gets all the knowledge basically injected into his brain so that he masters kung fu. It's kind of the same thing, kind of a bad analogy I guess, but that's what you're doing with RAG: you're selecting the best examples and you're sticking them in your

brain in order to make Sigma rules. Not quite as good as kung fu, but it might help with adversaries. And that goes hand in hand with the few-shot prompting thing I talked about, too: peanut butter and jelly. So how did I do this? Remember, I had to give it some context: this is what you're going to do. This is a shortened version of the instructions I gave it. You basically tell it who the LLM is supposed to be; you give it a role, which is a good practice. And then you tell it what to expect in the rest

of the prompt, and that's what I have in red: you will be shown a set of good Sigma rules as examples. Instead of trying to come up with a rule on its own, the model is shown a good set of Sigma rules that align with the attack procedure you're feeding into it, and I think that's a super key point. The thing at the bottom, too: you'll notice I have "evaluation criteria" in brackets, and that's to make sure the Sigma rules meet a certain quality threshold. I love this QA test; I build it into basically every step in my pipeline, because the results are way more consistent. And so the evaluation

criteria you saw in brackets at the bottom break up into a few questions. Is the rule specific: does it address something that's a known threat or vulnerability? Can it be applied across different environments without modification? Have you made a good rule that minimizes false positives and false negatives? And is the rule compatible with the log source? That's another key aspect of this, too, and if I were to do it over again, I would have specified and paid more attention to the original log sources: there's a difference between endpoint logs, security logs, and what you find on the network,

so I think if you're building any kind of pipeline with this, giving it a little bit of context about your logging environment, your current gaps, and what you can and can't log can help a lot, too. You'll see at the bottom, after the evaluation criteria, a placeholder I use for the RAG examples; this is what gets injected, like into Keanu Reeves's brain. And I have this notation where I wrap things like the evaluation criteria in HTML-like tags; that's a practice I noticed helps a lot, and I was inspired by Anthropic's prompt-generation guidelines online, which are really good reference material.
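
Assembled, the prompt structure he's describing looks roughly like this; the exact wording is a paraphrase of the slide, and the tag names are illustrative:

```python
PROMPT_TEMPLATE = """You are an expert detection engineer who writes Sigma rules.

You will be shown a set of good Sigma rules as examples. Write one new Sigma
rule that detects the attack procedure below, following the official rule
creation guide.

<evaluation_criteria>
- Is the rule specific to a known threat or behavior?
- Can it be applied across environments without modification?
- Does it minimize false positives and false negatives?
- Is it compatible with the stated log source?
</evaluation_criteria>

<examples>
{examples}
</examples>

<procedure>
{procedure}
</procedure>"""

examples = ["title: Example Rule One\n...", "title: Example Rule Two\n..."]  # RAG hits
prompt = PROMPT_TEMPLATE.format(
    examples="\n---\n".join(examples),
    procedure="certutil used to download a remote payload",  # hypothetical input
)
```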

Okay. People are awake, I think; that's good. So the second method is prompt chaining. The first one, I think, was the beast: RAG and all that stuff can be a little bit complicated, while prompt chaining is a little more straightforward. So what is prompt chaining? Think of it as breaking a complex action, like creating a Sigma rule, up into very logical steps. Once you do that, you create a prompt for every single step, and you

feed the output from the previous steps into the next one, and then you basically build the entire Sigma rule at the end from the outputs of this whole chain of prompts and responses. Here's an example. You have a procedure. First: here's an attack procedure; what are the key indicators you would look for as a detection engineer? Once you have that, then: based on the procedure and the key indicators, what log sources, and what category of log source, would that go with? You're breaking the problem down so that you're forcing the LLM to think through every single

step, and so you're going to get something that's way more consistent at the end.
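
As a rough sketch of what that chain might look like in code (the step wording, model choice, and helper names are my own assumptions, not the project's actual implementation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def chain_sigma_rule(procedure: str) -> str:
    # Step 1: what would a detection engineer look for?
    indicators = ask(
        f"Here is an attack procedure:\n{procedure}\n"
        "As a detection engineer, list the key observable indicators."
    )
    # Step 2: which log source and category cover those indicators?
    logsource = ask(
        f"Procedure:\n{procedure}\nIndicators:\n{indicators}\n"
        "Which Sigma logsource category and product best fit these indicators?"
    )
    # Step 3: assemble the rule from the accumulated outputs.
    return ask(
        f"Procedure:\n{procedure}\nIndicators:\n{indicators}\n"
        f"Log source:\n{logsource}\n"
        "Write a complete, valid Sigma rule in YAML using all of the above."
    )
```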

So, a Doctor Who reference now. It's not a... well, I guess he has movies. This is David Tennant, the other character, and I'm aligning him with prompt chaining because the Doctor thinks through things very logically, and he's able to come out at the end, every time, very consistent in surviving, or regenerating. The pros of prompt chaining: it's great, you're getting consistent results for the most part; you're thinking through each step; you require less data up front and less compute than fine-tuning; and it's potentially more flexible and adaptable than the technique I talked about before,

few-shot prompting, because you're thinking way more logically and you're filtering the results as you move through the pipeline. The drawbacks are pretty big, in my opinion. You might get consistent results, but the inference time, the time it takes to get a result back once you ask the question, was about 90 seconds per Sigma rule, which is quite a bit of time. So if you're trying to scale this, it could be very difficult to do, and it could also be fairly expensive, because remember, you have all these different steps, and those are all additional tokens you're sharing with the LLM when you're

trying to invoke it. Token use is usually how you get charged by these providers, so if you're using a large number of tokens, it can potentially cost a lot of money.

All right, so the last one I'll talk about is fine-tuning. This was my biggest failure in this whole thing, the most expensive failure. People think of fine-tuning as the panacea of large language models: you should fine-tune on your own data, your own internal data. Maybe not, because it is an expensive experiment to try. I did this with my own money; it cost me like 30 bucks the first time to fine-tune on the dataset I created. And the dataset I made: I took the SigmaHQ repo and used the rules as the

output. So you have question-answer pairs: you build that list, and that's your dataset for fine-tuning. For the question, I would take the rule's title and rephrase it in the form of a question, then pair that up with the respective Sigma rule. So I thought, hey, I'm super clever, I've got the whole repo in here, everything is great, but you'll see in a little second that it's a bit flawed. So why would you consider fine-tuning? Because you really have something very specialized, and maybe you want the output to be very consistent every single time, with less prompting and fewer tokens. That's one really good reason to do it.
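
A sketch of that pair-building step, assuming the OpenAI chat-format fine-tuning JSONL; the question template is a paraphrase of what he describes:

```python
import glob
import json
import yaml  # PyYAML

records = []
for path in glob.glob("sigma/rules/**/*.yml", recursive=True):
    with open(path, encoding="utf-8") as f:
        raw = f.read()
    try:
        rule = yaml.safe_load(raw) or {}
    except yaml.YAMLError:
        continue  # skip multi-document or otherwise unparseable files
    title = rule.get("title")
    if not title:
        continue
    records.append({"messages": [
        # Rephrase the rule title as a question...
        {"role": "user", "content": f"Write a Sigma rule to detect: {title}"},
        # ...paired with the full rule text as the answer.
        {"role": "assistant", "content": raw},
    ]})

with open("sigma_finetune.jsonl", "w", encoding="utf-8") as out:
    for rec in records:
        out.write(json.dumps(rec) + "\n")
```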

Another really good reason is if you have an abundance of really good data. You don't have to have a ton of data like you would to pre-train a full LLM; for fine-tuning, you can get away with a much smaller sample. The problem is that data quality is still an issue: if you have bad data, say some of the Sigma rules aren't quite as good, they're more experimental, then you're training it on potentially bad data. So there's some QA to do, and there are other things you can do to make fine-tuning better; I'll talk about that later. Oh yeah, this is Q,

from James Bond. I use the original, because no other Qs exist in my mind. So, fine-tuning. The process was collecting the dataset (I talked about that), pre-processing, creating those input-output pairs to fine-tune the model, and then actually running the fine-tuned model against unseen data, which you call a validation dataset. So here's another way of condensing all this boring text. You have this process: I collected publicly available data (great project on GitHub, SigmaHQ, check it out), you format the dataset, and you split it into a train-versus-test dataset; typically you do 80/20 splits, 80% training and 20% test.
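
And a sketch of the split and job launch, assuming the JSONL file from the previous snippet and the OpenAI fine-tuning API; the epoch count is an arbitrary illustration:

```python
import json
import random
from openai import OpenAI

records = [json.loads(line) for line in open("sigma_finetune.jsonl", encoding="utf-8")]
random.seed(42)
random.shuffle(records)

cut = int(len(records) * 0.8)  # 80/20 train/test split
for name, rows in (("train.jsonl", records[:cut]), ("test.jsonl", records[cut:])):
    with open(name, "w", encoding="utf-8") as f:
        f.writelines(json.dumps(r) + "\n" for r in rows)

client = OpenAI()
train_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-3.5-turbo",            # the model he says he fine-tuned
    hyperparameters={"n_epochs": 3},  # illustrative value
)
print(job.id)
```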

Then you set some parameters for how to do the fine-tuning, namely the learning rate and batch size; these are hyperparameters you use in deep learning. And then you actually run the fine-tuning job. So here are the results. Typically a training loss below one is good, but if it's too low you might start to question things, especially when the validation loss is higher than the training loss. This could potentially indicate overfitting, which means the fine-tuning process caused the LLM to basically memorize my data and not learn how to apply it to external data. But the

validation loss was still fairly low (below one is good, still), and this is not unusual. Things I could change, potentially, are the learning rate: when you do fine-tuning, that's how much the process corrects for errors, and if you have too big a learning rate, you might step over something and not get optimal results. Batch size is also something you can play with. I'm not going to make this about deep learning; I'm sorry for going so in depth here. So, you've got all those methods. I tried three, and I also tried a fourth, zero-shot prompting, where you just ask it to make a single rule:

you just give it the instructions, and sometimes that works, but it's not super consistent. It's definitely one of the easiest things to do, but I wanted to go complicated, because I'm talking here, right? So I had all these Sigma rules I created through these three different techniques. To recap: I created these procedures, about 200 of them, and for each of the 200 I used each of the three different techniques and created its own Sigma rule. So how do I know which one was better? I created this meta-prompt, basically, to evaluate the Sigma rules on precision, real-world applicability, and alignment with the input procedure. Precision is:

you just ask how precise and specific the Sigma rule is in targeting the intended malicious behavior, considering false positives; I think that's pretty self-explanatory. And a thing I think a lot of people don't realize is that you really have to underscore real-world applicability in these applications: if a rule is super theoretical, where no one has the logs for it but it is possible if you magically have the logs, it's not going to be super useful. So, is it something you can apply in real-world environments? Then alignment with the attack procedure: you create the Sigma rule, and at the end, is it looking for something else? Did it get so lost

in translation that it's looking for something completely different? I wanted to account for that, too: is it aligned with the input? So what I did was score each rule from 1 to 10 on each of these criteria (precision, real-world applicability, and alignment with the procedure) and average them together to create a numeric score from 1 to 10. Pretty simple. I used to do statistics and then forgot it all, so it's probably not the most robust thing ever.
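
That evaluation step, as a sketch: the three criteria come from the talk, while the prompt wording and judge model are assumptions:

```python
import json
from statistics import mean
from openai import OpenAI

client = OpenAI()
CRITERIA = ["precision", "real_world_applicability", "alignment_with_procedure"]

def score_rule(procedure: str, sigma_rule: str) -> float:
    prompt = (
        "Score this Sigma rule from 1 to 10 on each criterion: "
        + ", ".join(CRITERIA)
        + '. Respond with JSON only, e.g. {"precision": 8, ...}.\n\n'
        + f"Attack procedure:\n{procedure}\n\nSigma rule:\n{sigma_rule}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    scores = json.loads(resp.choices[0].message.content)
    # Average the per-criterion scores into a single 1-10 number, as in the talk.
    return mean(scores[c] for c in CRITERIA)
```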

So these are the mean scores, the average scores for all of the Sigma rules I created. For method one I got an 8.09, method two, prompt chaining, was 7.84, and fine-tuning was at the bottom. So you might think, hey, RAG with few-shot prompting, that's the way to go, but there's a little bit more to the story when you look at the consistency. Accounting for the highest of the highs and the lowest of the lows, prompt chaining was most consistent, because it was forcing the LLM to think very methodically about each step and how an analyst would actually create a Sigma rule. But RAG was generally more consistent for high-quality rules, which is why I put it in green. Fine-tuning was an epic failure: it cost me probably $60 in running fine-tuning jobs. I ran one by accident, and

it cost me 30 bucks, and, oh, that was a little painful. The next time, I thought I knew what I was doing, and it turned out I didn't. And the third time was, okay, here we go, here's the old college try. But there are definitely things to improve there, too. So why would you choose RAG? RAG, I think, is kind of medium complexity. It's not terrible to implement, but you have to think about where you store the data for the Sigma rules to reference, to pull into the RAG system; that's retrieval, that's a retrieval problem. You can

store the entire vector embedding in memory, potentially, or you can store it in a database, and you might need a way to update it consistently. The really cool reason to use RAG is that it can account for newer activity a little more easily: you might have new contributions to the Sigma rule project, and you can pull those in as examples for novel attacks, which makes it pretty cool, as long as you keep that database updated. Why would you use prompt chaining? Well, it's easier than RAG. If you want to get started, just think about breaking the problem up into different steps, and you can get some pretty

consistently good results, within reason; you have to make sure the inputs are always valid and good. Cost might be a consideration, and scalability might be an issue, just because it takes a long time to get the results back: you're sending data through multiple steps in the pipeline, and it uses a lot of tokens, which also hurts the wallet a little bit. I'm not going to tell you how much money I put into this; it was a passion project. Fine-tuning: and I'm not discounting fine-tuning at all, I think there is a lot of potential here, but the biggest drawback I had was that I did the fine-tuning on

GPT-3.5 Turbo, and the problem is that I used more capable, more mature models than that for my other tests; to fine-tune cost-effectively, I only had access to 3.5. So that's something I'd change, upgrading that as well. But fine-tuning in the long run, I think, can save a lot of money. It takes an initial investment: you're spending time curating data, making sure it's good, doing good data engineering. But once you're up and running with fine-tuning, it can be incredibly efficient and much faster. So, conclusion and future directions. This is kind of my last slide; well, I've got a couple more, but

this one talks about the differences between these techniques. In gray, you'll find that every method will improve based on having a more powerful model to use. We're seeing open-source Llama 3; Meta's coming out with great stuff; everyone's coming out with really good models, hopefully more of them open source so that more people can leverage them and keep their data private. And then you could also use an LLM specifically trained on security data, so it will know more about the ins and outs and not be so much of a generalist; that can make it more powerful on each of these methods, too. But in green is how each of these

methods is a little bit different and how you'd improve each one. You can evaluate different prompts for few-shot prompting, and you can try different retrieval strategies; Meta also released FAISS, which is a more efficient vector search capability, and that's something that can be very effective and has some really cool bells and whistles to try. For prompt chaining, it really just comes down to using better prompts, and maybe combining prompt chaining with RAG: what if you had a sequence of different instructions and then injected really good Sigma rule examples in there, too? You could do a hybrid approach, and that could help things as well.
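
For reference, swapping the retrieval layer to FAISS might look like this; a sketch that assumes a separate embedding model (the sentence-transformers model name is just an example):

```python
import faiss  # pip install faiss-cpu
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

rules = ["title: Rule One\n...", "title: Rule Two\n..."]  # Sigma rule texts
vectors = embedder.encode(rules).astype("float32")

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over embeddings
index.add(vectors)

query = embedder.encode(["certutil downloading a remote payload"]).astype("float32")
distances, ids = index.search(query, 2)  # top-2 nearest rules
top_rules = [rules[i] for i in ids[0]]
```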

For fine-tuning, I think if I were to do it again and had a little bit more money, I would explore different hyperparameters, making sure I try the different things you can change. For example, you can reduce the learning rate over time so that it's not so big on every single iteration. And I would also include a more diverse security dataset, including threat intel reports, for example; I think that would be a good way to improve the dataset. So by now, most people are awake, okay. By now you should have learned all these things: hopefully you know a little bit more about the different ways to use LLMs to create Sigma rules, the pros and cons of

the different strategies, what a Sigma rule is (hopefully; some people... for the win), and opportunities for future research. And that's really what this whole thing is about: getting more people interested in open-source projects. I just released this this morning; I stayed up late because I tried to Dockerize it and make it easy for people to install. But siggen, check it out, and if you want to contribute to the project, I'm more than willing to collaborate with anybody, because I think when we put our minds together, we come up with some really cool things. And here's another screenshot of the tool, if you want to play with it.

That's it; that's all I've got. Thank you, Dave. We've got some great questions coming in here, folks, and don't forget you can add them at bsidessf.org, Q-N-A (N as in Nancy). Dave, do you think LLMs are effective for writing other kinds of detection rules aside from Sigma, like writing Python rules in a tool like Panther, or SQL queries? I think so. It depends on the strategy you're going to use. If you're going to use RAG, then you need to have that document database with good examples, but assuming you have good examples, you can use RAG. You can also use prompt chaining and just think

through the different sequences of how you would do it normally, break it down into a series of different prompts, and you can get some pretty good results. I think you're on... not your laptop, but there were several folks wondering: do you have any entirely LLM/AI-generated Sigma rules that you could show us? Not yet, no. Okay, no worries. Let's see, we've got a couple more here, folks. Muhammad had a good question: is there a practical need to automate Sigma rule creation? Like, how often are organizations creating new rules? Yeah, sure, that's a great question. I think it's about responding to threats and

staying as proactive as possible. I did incident response before, and sometimes you're asked to look for something immediately, like yesterday. So this will help people get closer to what they need to look for, maybe explore things they haven't considered, and you can take that and get your detections out faster; I think that's what it's all about. Here's another one: what are the advantages of utilizing a vector database over directly embedding examples into the prompt for retrieval-augmented generation models, like RAG? Can you repeat that? What are the advantages of utilizing a vector database over directly embedding examples into the prompt for retrieval-

augmented generation models, like RAG? Yeah, so the advantage is that you have this embedding layer, and when you're looking at a vector database of embeddings, you take the input (it could be words, it could be sentences) and you create a mathematical approximation of it in that embedding layer. So if you have something that's a little bit different from the keyword, you're still pulling in the relevant Sigma rule, and it's better because it's more adaptive, I think. Okay, perfect. A question from Matthew. Matthew, you still here? Perfect, okay, now you've got a name to the face. Have you researched LLM performance at correctly

generating valid STIX 2.1 bundles? Relatedly, how do the major LLM models perform at modeling with ATT&CK, from the MITRE ATT&CK framework? So, they align with ATT&CK very well, in my opinion. For STIX bundles, I haven't tried that, but I have tried Atomic Red Team, which is kind of the other end of this: how do you create the logs? Before, I was going to explore that using Atomic Red Team, and I got into it and realized it was going to take too much time, but that's another thing to try, too. Okay, thank you, and thank you, Matthew, for the question. In your prompt chaining

flowchart, why isn't the flow strictly sequential? Under what conditions do you skip steps in the flowchart? Yeah, so for certain questions you might want to have the outputs of two different prompts, or maybe just one prompt, because you're retrieving a lot of valuable data and not all of it needs to be used at every single link in the chain. I had a very complex set of conditional logic in the app, so I would take inputs from a couple of different things, but maybe not from one. It's about efficiency; that's really what it comes down to. Okay, home stretch here. Anybody want to

ask questions, throw them into Slido. What document database did you try, which one did you end up choosing, and why? Yeah, so I used ChromaDB; it's open source, and I knew the most about it compared to any other one. I saw example code, thought about which one I should use, and that was the one I picked. Easy peasy. Yeah, when you do it that way. And last question, unless anybody has any others to throw out: did you only use LLMs to generate the final scores, or did you or other humans also score them? Like, how did you know when you can trust LLMs for scoring results? Yeah, that's a great

question. This is part of exploratory research, and I only had time to spot-check a few of the Sigma rules that were created this way; then I used that meta-prompt to make the broad, sweeping, quick quality check. So that's definitely something we can work on. Okay, well, thank you for your presentation. We've got a gift here from Socket Security; our friends over there wanted to make sure that all presenters go away with our gratitude, because it's a lot to come out here. You know, more than 300 folks were vying for just 100 slots to speak this weekend. Ladies and gentlemen, Dave Johnson with Feedly.

Thank you so much.