
how's everyone let's do a a thumbs up if you're doing well I know it's warm but we're doing well okay awesome well thank you so much for attending this talk and I just want to say a big thank you to bides Las Vegas for accepting this talk it is always an honor to be here back in 2018 I actually volunteered here and I didn't know anyone in this industry it was pretty daunting and scary as you could possibly imagine but I made a bunch of friends and they opened the doors for me I got a job in bug crowd and I end up getting into the hacker space all because of this conference so I'm always so grateful and thank you to
the vaults and you know next year if you want to think of something to do volunteer it's amazing experience so we're going to go into in a little adventure and I'm going to try to make a security for AI talk that is a 101 a little bit more exciting I know content can be kind of dry so we're going to have a little bit fun to start us off you could win a prize at the very beginning of this talk if someone can tell me what do they see in the tree and with that I'll give you a beautiful threat report that has basically it's like a miniature textbook about everything that you need to know about
security for AI does anyone see what's in the tree this buha was taken in Oakland California in a little par called Lake Meritt does anyone see it all right hummingbird no not a hummingbird any shout out if you know it it's okay you're correct congratulations you want a report let's give him a hand of Applause yes good job all right so yes this is actually squirrel and they've evolved they got a jar of GIF peanut butter I looked in the chair like what is that is that squirrel what is it holding it's not peanuts what it's a gif peanut butter it's learned and it knows it can't get the top off so it went from the bottom which is easier
ah so intelligent anyway I bring this up because you're going to hear some really really bad jokes and by bad jokes I am not a comedian but I will try today um so here is a corny joke you guys ready for this why did the squirrel bring peanut butter jar to the AI lab because I heard the algorithms were nuts about data oh yeah okay um so a little bit about myself my name is chlo m and I'm the head of threat intelligence over at Hidden layer I'm also one of the founding members of disclos doio and a board member for Diana initiative who went to Diana initiative two days ago yeah I'm also The Advisory Board member
for the election security research form those are my links if you want to do me a favor uh marketing from hidden layer would love if you could take a photo of me while I'm giving a presentation and send it to me because I don't have someone here to do that for me so if you can DM me I will I will figure out what to give you all right so we're going to do some true and false questions throughout the talk and I want to make sure that no matter what your background is that I think you would feel comfortable enough to be able to answer if it's true or false using what already knowledge you know you
should be able to try to figure out what could be the right response so first one white box testing methods are unnecessary for assessing the security vulnerabilities of AI models because blackbox testing suffices if you believe this is false raise your hand congrats you got one right so far there's 10 of them by the way not in order though one more implementing differential privacy can m gate risk associated with data leakage in AI systems without significantly compromising model performance if you believe this is false raise your hand if you believe this is true raise your hand it is true you got a point keep track of your points just for the heck of it for
yourself all right so when we think about AI or how the world sees AI it tends to be a little bit of this which is basically like Terminator world or end of the world situation and yeah I can understand the doom and the Gloom here but I also believe there's a lot of misinformation and disinformation about Ai and especially when it comes to security for AI so one of the things that we did come out with was the threat report which our wonderful person in the front won the physical copy but it is also you can find it online right but it really helps to paint the picture so after this talk if you need to refresh feel free to go
and get the report it has a lot of this in there so we're going to talk a little bit about AI so the thing is AI is here it's not going anywhere it's like no one can use chat chpt we've all used chat GPT now we're not going to stop using it the thing is it's already out there and you know you know it's going to get even more so ingrained in our everyday life is when our smart fridge starts judging us on our midnight snacks like another slice of anchovy pizza Bob but to be honest let's be fair with one another it's not really the pizza slice for me it's the anchovies on the pizza that
just doesn't seem right anyway when we think about how do we secure things are we going to be okay we've been here before we've been here before when it came to the cloud when it came from PC so we are we're we are all learning together and we're maybe playing a little bit of of a game of catchup we did do a survey to find out you know what do people think when it comes to some of the serious issues about 77% of companies report that they did identify an incident with our AI within the past year and that 1.7 th000 models are in use at organizations and one of the trends that we did see in the past two years was
when you ask aiso how many AI models do you have they didn't have an answer or their answer would be we don't use AI we have no Ai and then they come back like okay yeah we actually do I found out I was never told because let's be honest it's always that we're in silos with one each other then you have the 98% of it leaders believe that this is so incredibly important when it comes to their organization success I don't understand that 2% though like doesn't make any sense so I'm going to just say that it's probably robots in Disguise I'm just putting it out there because this doesn't make sense um anyway so let's go talk about
the risk related to the use of AI but true or false let's do it so AI models can be protected against model stealing attacks by limiting the number of curies allowed to the model and by using techniques like query opusc I should learn how to pronounce that for pretty well raise your hand if if you think this is true raise your hand if you think this is false raise your hand again if you think this is true wow you guys the true ones are correct don't worry we're going to cover it later on we'll all be on the same page all right next one water marking AI models can help and identifying improving ownership which can prevent
unauthorized usage and intellectual property theft if you think this is false raise your hand if you think this is true raise your hand you are correct it is true so when we think about AI of course one of the concerns is harmful content creation we're going to be afraid of what could be out there because we also don't know what is it trained on as well so try to think about it as AI like giving a todler a paintbrush and expecting it to do the mon Lisa for us it's just not going to happen it's going to be messy and sometimes it will feel like at times this is why we can't have nice things we also have to worry about deep
fakes because I mean has anyone do you guys remember the Joe Biden Robo call that happened earlier this year yeah that that to me is kind of election interference in my opinion and then you have this case of Hong Kong raise your hand if you read about this case in Hong Kong all right so this is a fun one to read about so there was this guy at a company it was late at night and he thought his Chief Financial Officer needed to jump on a call with him on Zoom to try to get 25 million paid out so he jumps on this call and it's a deep fake it is so like his boss
that he does not think that this could be a deep fake and to be honest I don't blame him in the new times a few months ago they actually came out with a quiz to test your skills out if you could tell the difference between something that is AIA generated or not and it was very common that people did not get 100% correct I mean dpix are starting to looks so good that I'm actually starting to be in doubt of really if those are my Sheba Enos in my Instagram photos because I mean look at them they're so cute that's Sherlock by the way and that's Luna anyway okay they're amazing and they don't usually do that by the
way they do this thing called the side eyed where they judge you on every little point now data privacy this is a good case with Samsung so one of the things says that they found that there was meting notes and source code that was actually leaked in chat GPT I know about a year ago or two years ago let's just say some companies were Banning employees from using chat GPT during this time when employ employees would do is like well it makes my life easier and I don't want to do that well I guess I'll just do all my personal devices so what do they do they take whatever that they need to create reports or anything they send it to them
via email to their personal email address to use their personal computer to do chat GPT on it and then get the results and then plug it back at work and what this happens to do is that it leaks sensitive information out and this is why you shouldn't ban chat to PT because then you have these things so the one way how I always think about like data privacy with AI just think about that there is always going to be a possibility that a secret will come out just like on reality TV shows someone's going to spill something of course and then you have copyright violations where artists are like hey how did you train your model I'm out of
curiosity because I believe you're using my stuff so this has been an Ono an issue because we don't know what models are trained on it's a very much like as a consumer you don't know and so that is still a rising problem that still needs to be addressed to this day then you have biases I always say and even before AI let's be honest with one another if you don't have diverse folks working on a product it's not going to work for everyone that also means that you're not going to capture all the bias out there this is why it matters on all these fronts but also where did you get your data from that's so important because
that can really cause so much harm to populations if we don't check these things and just think about AI biases like an AI deciding that anchovies belong on Pizza which is so fundamentally Wrong by the way I'm so sorry if you like anchovies on pizza I just no it I could say pinea belongs on pizza but I don't want to create a debate or situation in this room by saying that so yeah anyway and then you also have to worry about ethical Ai and this was one of those cases of an AI chat bot that encourage someone to plot to kill the queen and the way how I always think about ethical AI it's like trying to
train your dog not to bark at the UPS delivery person it seems impossible but it is possible to do so with sheas I don't know it's a little bit hard they're kind of a hard ones to figure so let's go down now the risk faced by AI based systems now I just want to say that this is where we go down a rabbit hole of some stuff so bear with me I'm going to try and make this as entertaining as possible but before I do hear some true or false questions so the first one we got here is securing the communication channels between distributed AI components is less important than securing the AI model itself if you
believe this is false raise your hand congratulations you are correct next one the use of AI generated synthetic data and training models can eliminate the need for data security and privacy measures if you believe this is true raise your hand ah if you believe this is false raise your hand nice I was trying to trick you there for a quick sec okay so I think one of the things is that we tend to get a lot confused between what is an AI model from from an AI system so let's kind of talk about that really briefly so an AI model is a mathematical or a computational construct designed to learn from data and make predictions or
decisions based on that learning it is the core component that processes input data extracts patterns and generates outputs now ai systems are different so an AI system is a complete setup that encompasses the AI model along with the necessary infrastructure and processes required to deploy operate and maintain the AI functionality it includes the AI model and additional components such as data pipelines user interfaces and deployment mechanisms monitoring tools and security measures so just for you to know the difference between these two groups functionality the AI model handles data processing and decision-making whereas the AI system ensures the model can be effectively used in the real world and that means like the data management the user
interaction and the system maintenance now when you think about components between the two an AI model consists of algorithms and learn parameters while an AI system includes the model plus additional components like data pipelines deployment environments and user interfaces so when we think about well who are these people that are trying to go after AI well it's the usual three right it's the nation states the competitors and the Cyber criminals and on the side you can see all the reasons why they all are incredibly valid so think about it as like your annoying neighbors are trying to steal your Wi-Fi you know they're out there they want what you got especially your cat memes you
know anyway so we're going to go into some of the risks faced by AI systems and that includes prompt injections of course we're going to talk about prompt injections um but data poisoning and more and it can really do feel like a bad episode Black Mirror but in real life but like I said I'm try to make this entertaining as possible but I'm also going to try to do it like as if you're in a college session now so let's go first down to data poisoning attacks so model training is a critical phase in developing AI Solutions where the model learns from the training data set so malicious interference such as data poisoning can severely impact the
model's reliability data poisoning attacks aim to manipulate the model's Behavior by altering existing data or injecting doctor data now ai systems using continuous learning are particularly vulnerable as they do continually retrain on new user Supply data which may not be properly validated let's be honest so consequently adversaries can actually introduce specific inputs to the bias the model even a small amount of poison data can lead to bias or incorrect predictions if Amplified by public manipulation or botn Nets so think about it as your chef and you add too much salt to your dish it makes it terrible but one of my favorite examples is Tay which launched on Twitter well now known as X um in March
2016 Tay was this really innocent being on on X and it was it was so excited to meet the world but you know within 16 hours users manipulated it to produce the most rude racist harmful content imaginable although this was not a corate attack it still impacted Microsoft and also had threats of legal action as well and there are definitely more data poison in attacks my favorite one is the artist trying to get back at gen by using a tool which would allow them to do some data poisoning themselves up on the top right right there all right now we get to go model evasion attacks so evasion attacks is like trying to trick your teacher into
thinking that you did your homework and your dog ate it okay they knew you were lying but you're trying whatever can to try to get them to be convinced so inference attack targets AI models after deployment and this is either on endpoints or in the cloud and these attacks extract sensitive information about the model or its trained data by curing the model and analyzing its outputs attackers only need to access to the models predictors usually through the UI or the API by repetitively sending slightly varied curies attackers can understand and potentially reconstruct the model leading into a b model Bypass or even theft itself evasion attacks is a type of model bypass that uses adversarial examples
which is basically like input Sly altered to classify something with a misclassification these modifications are often not noticeable to humans to be honest um such techniques um are like adding invisible noise to an image for example and these techniques have been used to bypass spam filters as we know of malware detection biometric authentication as well and it has been used for a good number of years now by the way now I want you to imagine you're in a self-driving car all right you get to the stop sign and your car instead of just going to a complete halt what it does it steps one foot over taking back three steps y'all one hop this time if
you know what I'm doing is the Cha Cha Slide and heels which is a little bit harder to do on this carpet but imagine if your car could do the chaa slide by because they saw a stop sign that looked like this which has little stickers on it true story there was an instant where this did happen granted the car didn't do a Cha Cha Slide and I don't even know what a car would look like when it did CH slide but we can use all our imaginations right but what happened in the self-driving car is that when it saw this sign which had these stickers on it it decided to bypass the rules and went
straight through the stop sign another good model evasion attack situation is the one that you see up here so basically these type of attacks can also Target facial recognition with modified sunglasses but also military systems through deceptive imagery as that you see on this slide right here and this composed really significant risks such as just painting fake bomber Jets on the ground apparently so model theft attacks so adversaries May Target AI models not just to mislead them but also to steal the models themselves IP theft in AI involves replicating or extracting sensitive data from Advanced AI Solutions developed by companies even without public access to model details adversary can use inference attacks through user interfaces or or apis to replicate the
model or extract valuable information Oracle attacks involve understand the model's architecture and the parameters to create surrogate models or steel sensitive data now according to nist raise your hand if you know about nist good I'm I'm glad you all know about nist um so NY defines Oracle track attacks in three different categories one you have extraction attacks this is when you're trying to extract the model structure you have your inversion attacks and this is when you're reconstructing the training data and then you have your membership inference attacks which is trying to determine a specific sample as part of a training data one of my favorite ones to study is the model theft attack when it came to
Tik Tock and chat chpt raise your hand if you have Tik Tok on your phone good what no why okay all right you have it on a separate device without your personal stuff hopefully we'll just say you did okay okay it's all right it's okay you're you'll you'll get so nervous and paranoid after this week don't worry it's going to happen but yeah so you all are aware of Tik Tok it's owned by bite dance well bite dance got caught last year trying to make a replica of chat gbt's model and they actually called it the code name was Project seed believe it or not so when we think about model theft attacks think about it as like you're stealing
someone's Netflix account you like you get the service but without the guilt of pain all right let's put in some true and false in here so AI models that use transfer learning are less susceptible to security risk because they leverage pre-trained models if you believe this is true raise your hand if you believe this is false raise your hand congrats you are correct next one it is unnecessary to applied the principle of least privilege to an AI system component since they're all part of the same application and need full access if you believe this is false raise your hand congratulations you are correct once again so yeah we're going to have to talk about prompted directions I know
it's like the one thing that everyone talks about when it comes to security for AI but we'll have to talk about it but uh one way to understand it is imagine you have a roommate and they have a parot and you think it would be funny to teach that parent to swear without your roommate knowing now let's also pretend that your roommate is hosting a Thanksgiving dinner and he's bringing all of his friends over his family as well the grandma the grandpa everyone and they might be going around the circle saying grace or maybe about what they're grateful for for the year but then out of nowhere this par screeches like God damn it really really
loud you know how embarrassing that would be in the middle of that really nice dinner that's prompt injection you're trying to teach it to do something that it wasn't meant to do in the first place but I'll give you a more descriptive thing around it so to prevent misuse gen providers Implement security restrictions to filter such harmful content to block access to Illegal information and to prevent aiding in illegal activities these are also known as as anyone know guard rails so these filters also ensure compliance of policies and laws however these restrictions can be bypassed using a technique called prompt injection where specially crafted prompts trick an AI bot into performing a restricted actions and this can lead to chat Bots
executing actions originally blocked by their developers with methods varying by model type version and tuning multimodal prompt injection is a technique used to manipulate not just text but also images audio and video and by crafting specific prompts across these different modalities attackers can actually alter the AI model's Behavior leading to unattended or harmful outputs and this method exploits the AI ability to integrate diverse information making it sophisticate form of attack that poses significant challenges for ensuring the security and the Integrity of AI systems and then you have indirect prompt injections now these work to manipulate an AI systems Behavior by subtly altering the context or the environment rather than directly feeding it manipulated props this can actually
involve changing surround text modifying the data sources or influencing the systems inputs indirectly and this technique is so so subtle and incredibly hard to detect which is making things incredibly challenging when it comes to securing AI and then you got codee injection which is basically like you know trying to teach sir to prank call your boss just because you can doesn't mean you should um but basically code injection think about it as this way so gen models are typically limited to generating text images or sound and cannot execute actions like running shell commands or scanning networks however they might generate fake outputs suggesting such actions were performed hidden layer themselves discovered that some of the
AML can execute user provided code for an instance streamlit math GPT application actually converts the prompts into python code this is then what happens is the model executes the answer those math questions of course but the approach does allow for arbitary code execution through prompt injection which is one of those things that is incredibly dangerous when we're running on user supplied code and of course we have to talk about supply chain attacks because this wouldn't be a security talk without mentioning supply chain attacks so think of it as the Trojan Horse um but instead soldiers it's full of corrupted data so supply chain attacks occur when a trusted vendor is compromised leading to products with malicious components
notable examples include solar winds which cause a widespread security breaches in ransomware anyone where was in that room during that time the solar winds yeah well it's okay if you don't want to raise your hand that was scary um but anyway these attacks exploit trust and have extensive reach making them particularly dangerous as you're all probably aware of now the ml supply chain does involve various tools and services that do increase these type of risk and 75% of it leaders viewing thirdparty AI intergration as especially risky organizations must adapt to security controls to address these new vulnerabilities so think about like a supply chain attack situation is like you're you're expecting a very high-end Music Festival maybe a Taylor Swift
concert or something but instead you end up at the fire Festival you know FY re e festival and you want see that documentary either on Hulu or Netflix raise your hand yeah no that that was empty promises and Chaos all over so thing to know is that there are these specialized repos like hugging face which hosts over 500,000 models that are pre-trained which make it incredibly easy for developers to integrate these models into their applications however if an attacker reaches this repo they can actually replace the models with hijack or backdoor models versions and this does lead to significant Downstream consequences so think about a m back door models like a pizza with hidden anchovies you take a
bite and then you realize oh my God I have so much regrets with this pizza so a skilled adversary can tamper with an AI model algorithm to Alters predictions by injecting a specially crafted neural payload into a pre-train model this introduces a secret unwanted Behavior known as a model backd door which can then be triggered by specific inputs defined by the attacker to produce a desired output skillfully backd door model appears accurate with regular data but misbehaves with inputs manipulate in a specific way known only to the adversary and this knowledge sold or used to ensure favorable outcomes for customers such as loan approvals or insurance policies so the thing to know about machine learning model is that when it's
stored it has to be serialized into a binary form because it's so large um but many widely used serialization formats such as torch script used by pytorch hdf5 which is used by curas and then save M which is used by tensorflow are all incredibly vulnerable to arbitrary code execution and these vulner abilties to allow adversaries to create malicious models or to hijack legitimate ones to execute malicious payloads hijack models can then serve as an initial access points for attackers or spread Mal to downst customers and supply chain attacks now the thing to know about machine learning development it does rely on so many tools and Frameworks many of which lack adequate security controls which include basic
authentication and this poses a real risk for data breaches and supply chain attacks security has of been an afterthought unfortunately let's be honest it is very much true we are always reactive not proactive which has led to vulnerabilities in so many popular ml Frameworks and recently there was a malicious pie torch nightly build that was compromised via the torch Triton package which did allow data exfiltration from affected hosts so when you're thinking about ml tooling security is like locking your doors by keeping the windows open where do you think the burglar is going to come come in from anyway so very very cool cool cool cool cool cool cool cool cool if anyone has seen any Brooklyn 99 raise your
hand yeah I love that show that's how I feel almost on a daily basis when I look at the stuff that's happening and I know that this might be scary too as all of us in the security space only having to learn this skills like really fast it is scary because there's not a lot out there right now when it comes to training f folks so let's go talk a little bit about the advancements of where we are and where we need to head but I have two true or false questions for you so the first one is aersale robustness is the only security concern that needs to be address for deploying Ai and critical applications
if you think this is false raise your hand congratulations you are correct next one Federated learning improves the security of AI systems by allowing models to be trained on decentralized data without exposing the raw data to Central servers if you think this is false raise your hand if you think this is true raise your hand you are correct it is true now offensive tilling is tools designed to be good but are often used for Mischief it's like you know giving a teenager keys for your Ferrari it's exciting um but a little potential for chaos so there aren a lot of offensive security tools that are originally for red teamers and Pen testers out there but are also being used by malicious
actors as you see on here these are some of the ones that um really stand out most of the time but there's a lot more that are missing on this slide but these basically help to test and improve AI security but also could be misused to facilitate attacks making AI vulnerabilities more accessible to adversaries and of course if you have your offensive tooling and your and Frameworks and all that now you need your defensive Frameworks so think about it as like putting your AI in a protective bubble it's safe but don't forget that bubble can pop so as new AI attack tools and techniques emerge now you have to have a defensive approach to crucial to protect
the technology in the past two years actually a lot of cyber secur organizations and companies have developed comprehensive Frameworks with security practices strategies and recommendations for AI which is a great initial steps for safeguarding AI systems but there still we're playing a game of catchup and what I mean by that there's still gaps unfortunately in these but don't worry everyone's kind of working together and trying to keep it ongoing and updating it as best as possible now policies and regulations they're kind of like the rules of Monopoly like no one reads them but everyone are like agrees about them or maybe argues about them I could say more so AI does have a potential for harm so
it is critical for us to work together and and create policies that's good and as we know around the world that's what is happening right now with the EU act which has been a little bit controversial in some ways because it is very restricted and what has happened is that it has stting to create some limitations on Innovation but also in a sense of not creating a healthy competition so if you are a startup or small company it is incredibly hard to follow along and try to innovate something new so the US has taken a different take on this which is like you know it's all about free markets right so it's like we want a healthy
competition we want people to innovate so that's why you haven't seen anything like the EU AI act happening in the US instead you have the executive order which is about hey you know what would be good red teaming however there's still work to be made such as what is red team me in AI Define it what should be the scope of it and these things are kind of missing still but they are in conversations at this time on trying to do so now let's go down some predictions and recommendations and unfortunately I only have these are the last two true or false questions so give it all you got now right so securing the trained data
of an AI system is not as critical as securing the AI model itself if you believe this is false raise your hand excellent good job all right the last one model inversion attacks can extract sensitive information from the outputs of AI models by exploring the relationships learned during training if you believe this is true raise your hand congratulations you either knew this knowledge or you paid attention in this heat and by the way congrats it is hot in here I feel for you so thank you for staying in this room with me so questions to always ask at the end of the day is when it comes to AI security what are you doing to manage your AI
cyber risks is it enough and you're like yeah yeah I think I think we're good how do you know that now becomes a conversation where you actually have to talk to other departments and teams and hope that they will talk back not at you but with you and that's where we are right now the only way that we're going to fix this situation to actually know how bad is our situation how bad our silos at our organization at this time and know you know trying to turn it off and on again is not going to count so don't even try it crab fans representing them here hopefully yeah I love IT Crowd I go watch that every single day okay so here
are some predictions for 2024 I think you all can read this it's a beautiful nice slide but basically in general more AI attacks more supply chain issues and I'm really really really hoping that Us in this room in the hacker space that we start learning these skill sets and I know it's also kind of a fault on organizations for not helping you either because right now we're all playing a game of catchup you know there aren't really any shts out there that could help bring you to where you need to be at this time so I know that they're working on it so do keep your eyes open I know by beginning of next year you'll
have a lot more cont content to learn um but if you really want to learn more about this space um hidden layer does have a research portal so you can actually learn the latest case studies that we've done and the research we've done as well and you can actually learn the techniques of how we went about it but I do have one more prediction which is possibly robot cats taking over Instagram I really wish that that would happen because that would make Instagram a little bit more exciting for me so so these are the last recommendations here and it's by each group of the process we have when it comes to AI so in the
design phase is simply to do your data source evaluation where did it come from right that's what we need to know we need to know where did our data come from but also to know the model robustness evaluation also where did we get our model from has there been any cases with this model and in predeployment is to do model Integrity checks of course and also doing security scanning um to look for any vulnerability is already in existence but also one of the things that's not in here is we should also be doing AI red teaming and I know that's still being developed of what that scope should look like and that and hopefully we'll have
more of a definate thing for everyone standard so we can all agree on but last but not least in the post- deployment is to you know monitor your input and outputs but also to ensure you have a good overall security hygiene which shouldn't be too hard if we become more proactive and a lot of this talk is about us becoming more proactive now because being reactive is not going to help the situation as we've all learned so regular update and monor AI systems is one of the top ones the other one is use adversarial testing and then last but not least educate your users and your employees of all the AI rates that exist out
there and these are the last key takeaways which is ai's powerful bonable don't forget I think we all kind of knew that though um security requires constant vigilance and stay informed and be proactive please now I know I work in security and you work in security but we trust each other right there's a beautiful QR scan code right here if you want to do it um but this is where you could get a copy of the report it is free so feel free to uh to get the report it is great if you ever need a summary of what this conversation was about it goes into everything thing that I talked about last but not least I know almost at time
and this is not relevant whatsoever but Stanford is doing a study right now about figuring out what is going on in the workforce and are you safe this is an incredibly important study and I really hope that you all can take a picture of this and share it with friends or colleagues because the more information the more we speak up about our cases and our situations we actually have data to do something about it if we don't speak up who will your data it will be completely private like no one will ever know your name or anything like that will ever get out there so I really do recommend please pass this forward we need this information more
than ever before I know I don't really have a lot of time for Q&A but I just wanted to put that out there um but thank you so much for existing thank you for being here thank you for dealing with this heat thank you for being in Vegas for this Wonder ful hacker summer week and thank you for Beast nights once again for having me and thank you to the volunteers for being in this room and making this such an amazing event so thank you all for being [Applause] here all right we're going to open it up for questions now I also can take questions privately if that's more comfortable I'm more than happy to do it outside the room or right
up here whatever works best for y'all all right uh thank you very much Chloe thank you