
[Applause] Okay, can everyone hear me okay with the mic there? Okay. If I hold it I'm liable to turn into a stand-up comic, and no one wants that, so it's for the best that we don't go there. Thank you very much for sticking around until 6:00 p.m. for the last session of the day; I appreciate it. I'm a bit jet-lagged at the moment: my flight was meant to arrive last night, I'm from Australia, and it arrived today. So I'm feeling a bit woozy, but I think that means I'll be very candid and honest, and hopefully I'll remember my slide order. Anyway, I'm delighted to be here with you. So, on your Ocean's Eleven team, I'm the AI girl, but "guy" has a better ring to it, so that's the title I'm going with. Our objectives in the talk today are to hack the casino, specifically the artificial intelligence of the casino in question. We're going to start with a bit of open source intelligence and recon, then focus on the facial recognition AI, and I'm going to put it in the perspective of AI security. I'm an AI security person; I run an AI security company in Australia. I've been working at the intersection of data science and cyber security for about ten years now: I started in consulting, then moved to a startup, I worked in the Australian government, and my PhD is in machine learning security. A year ago I started Mileva. I'm also online in various places as Harriet Hacks; I'm not really very good at this social media content, but you can find me on X, Instagram, TikTok. You don't need to check out all of them, but you can find me there. We have a podcast; it's a whole thing. Not to be totally cringey, but why the casino? Well, it's Las Vegas, why not? The casino is
a really good case study, or analogy, for every large organization at the moment that's grappling with trying to adopt artificial intelligence very quickly, but maybe not necessarily feeling like they understand all of the potential risks involved. The casino is obviously one of them, and they have a lot of money, so they need to care about risk, as do many other organizations. Some disclaimers: we had this particular casino's permission, and the attacks are real, but I'm not going to show you exactly how to implement them on Casino Canberra's AI itself. Can I get a show of hands in the room of who would consider themselves maybe an AI person? Okay. And a show of hands for the cyber people? Okay, that's what I thought. So AI people, please don't come at me; this is more geared at cyber people who might not know that much about AI security. It's meant to go from a gentle zero to maybe an eighty. I'll be giving this talk at DEF CON as well, where it'll go into a bit more of the technical detail; this BSides talk is a little more about the theory side, but with some cool attacks. While I'm here: we have Casino Canberra's permission. As you can maybe hear from my accent, I'm from
Australia; I'm from Canberra. Is anyone familiar with Canberra, Australia? Yes, I hear some yeses. For those of you who aren't, it's our capital city. A lot of people don't know that; a lot of people think it's Sydney. But Canberra is our capital, our political capital, so it's very politically focused, a bit like DC but much smaller. We have one casino, and fortunately Casino Canberra were really accommodating and willing to work with me on this. Casino Canberra is our city's best casino, and our only casino, but don't let that deter you; they're pretty good. If you're listening: you're great, Casino Canberra. Okay. For the non-gamblers among us: I went in with a certain set of expectations about how casinos would use artificial intelligence, but I found they weren't necessarily fulfilled. So let's dive very quickly into casinos in general. They make a lot of money, so they need to care about AI risk. They have a controversial history, in particular in Australia: some of our big casinos have been under quite a bit of heat lately for money laundering and not complying with their regulations. I'm not saying this is true of Casino Canberra, but this is the landscape in Australia at the moment; there have been some Royal Commissions.
Around the world, casinos have been known to carry money laundering as a risk; being able to move money around is sort of the nature of the business. So using technology appropriately is something they really care about, given the landscape they're in at the moment. They do use artificial intelligence. I thought they might use it quite a bit for, say, detecting card counting, but I realized very early on that facial recognition and person identification was actually the most important use of AI. And there aren't that many providers of this kind of AI technology to casinos; quite a lot of them are the casino chip and card providers themselves. It's a small landscape. Does that matter? We'll get to that later. But facial recognition and person recognition are definitely the most important forms of AI, and that's because card counting isn't illegal: it's considered advantage play, which maybe sounds obvious, but I didn't know that. So you can card count, but you shouldn't do it conspicuously, or you can get thrown out, because casinos are able to throw out anyone they like. If you're ostentatiously card counting and winning a lot of money, they really don't like
that, because the algorithmic bias is meant to be in the casino's favor. What I also found interesting is that, in terms of hacking an artificial intelligence system, that can mean a lot of different things depending on how you actually define AI. So here's the first bad joke of the day. When I first told my mum that I work in AI, she said, "Oh darling, why are you working in artificial insemination?" See, that doesn't get a laugh in a lot of crowds. But she works in the medical profession, so for her, that's what AI is. I think most of us would know that AI is artificial intelligence, but even so, we all have a slightly different idea of what that actually means. So, for some of the cyber people, a bit of level setting: when I say "AI security", people often interpret that in different ways; often people assume it means AI for cyber security. Among the AI security folk I work with, and in this context, AI security means the security of the AI systems themselves, though the term does tend to get muddied a little. When we think about AI security as a field, it's more akin to cyber security in the sense of the actual security of the
system itself. The potential attacks and vulnerabilities in AI systems have historically come from the academic field called adversarial machine learning, which has been around for about ten years. The slide that should be showing is an example of an adversarial machine learning attack in the wild: being able to add, for example, specially crafted pixels or specially crafted material to an image that prevents a model from recognizing it accurately. If you're able to add these perturbations to a stop sign, an AI system that's meant to be doing stop sign detection, or object detection generally, can be essentially hacked: disrupted or deceived so that it can't actually recognize the stop sign. This attack came to the fore with a classic example, which many of you may have seen before, from a pivotal paper about ten years ago by Ian Goodfellow and colleagues. Not to get too theoretical or academic, but it basically shows that you can add specially crafted pixels to an image, in this case of a panda, to prevent the model from recognizing the panda and instead make it recognize a gibbon, even though a human observer sees no difference. These are specially crafted pixels; they're not random, and there are lots of different methods for creating them. In a computer vision example like this, you can see that the noise we're adding is basically that pattern, but all of these methods can be transferred to other domains that are a bit less obvious for humans to detect, like audio signals or RF. This field of adversarial machine learning is the origin of a lot of these offensive attacks on AI systems themselves.
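As a toy illustration of the idea (a sketch only: the real panda example takes the gradient through a deep network, and every number here is made up), consider a linear scorer, where the gradient with respect to the input is just the weight vector, so the attack simply steps each pixel against the sign of the corresponding weight:

```python
# Toy FGSM-style attack on a hypothetical linear "panda vs gibbon" scorer.
# For a linear model, the gradient of the score w.r.t. the input IS the
# weight vector, so the attack nudges each pixel by -eps * sign(w).
import random

random.seed(0)
dim = 100
w = [random.uniform(-1, 1) for _ in range(dim)]        # classifier weights

def score(v):                                          # > 0 means "panda"
    return sum(wi * vi for wi, vi in zip(w, v))

def sign(z):
    return 1.0 if z > 0 else -1.0

x = [0.01 * sign(wi) for wi in w]                      # scored as "panda"
eps = 0.05                                             # small per-pixel budget
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]  # step against gradient

print(score(x) > 0, score(x_adv) > 0)                  # True False
```

Each input value moves by only 0.05, yet the classification flips, which is the whole point: the perturbation is small per pixel but perfectly aligned with the model's sensitivities.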
But when we think about actually hacking AI, or whatever has been referred to as AI in the past, we can go way back. In the context of casinos, for as long as organizations have been using algorithms, people have been finding ways to hack them. Think of an example we would all know, like email spam filters: people have always been trying to defeat those kinds of algorithms so they can get spam through. In the casino context, a really cool example comes from random number generators. As we all know, truly random numbers do not exist in a computing sense; you need an algorithm that gives you the next number, and there have been some pretty high-profile hacks in the casino world as a result, particularly in the '90s. A big one is from 1995: Ron Harris, who was employed by the Nevada Gaming Control Board, was able to predict the next numbers in a game of keno, and he won $100,000. He was caught; they were able to go up to his hotel room and find him. But there have been quite a few examples of these kinds of hacks.
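The underlying weakness is easy to sketch. If the game's pseudo-random generator were, say, a plain linear congruential generator (a hypothetical stand-in; I'm not claiming the 1995 keno machines used these exact parameters), then anyone who knows the algorithm and observes one output can clone the state and predict every future draw:

```python
# A plain linear congruential generator (LCG). The constants are the
# classic glibc-style parameters, used purely for illustration.
class LCG:
    def __init__(self, seed, a=1103515245, c=12345, m=2**31):
        self.state, self.a, self.c, self.m = seed, a, c, m

    def next(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

machine = LCG(seed=4242)                       # the "keno machine"
observed = [machine.next() for _ in range(3)]  # draws seen on the floor

# The attacker seeds a clone from the last observed output; from here on,
# both generators produce identical numbers.
clone = LCG(seed=observed[-1])
assert clone.next() == machine.next()          # next draw, known in advance
```

Real gaming systems use much stronger generators, but the principle stands: if the internal state can be recovered, "random" stops being random.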
In the casino context, another thing you could really consider a hack is card counting itself, because the statistical nature of every gambling game means the casino always wins, right? And the advantage the casino has is different for every game: on the slot machines it can be up to a 25% advantage to the casino, and the only reason it's limited at all is regulation, so that it isn't even higher; but in a game of blackjack it can go as low as 0.4%, depending on the rules of the game. If you're able to play blackjack according to perfect strategy and then add card counting, that's a way of hacking the algorithm too: you know exactly the right plays to minimize the casino's edge. Now, thinking in terms of the adversarial machine learning landscape so far: today there are over a hundred attacks you could implement on an AI system. You don't need to worry too much about all the parts of this diagram, except that every kind of attack surface in an AI sense has its own attacks, whether you're talking about a machine learning model like a convolutional neural network, which is a type of AI
model that's really good at computer vision, so recognizing images, or something like a Transformer down the bottom, which is very good for natural language processing, think things like ChatGPT. Different kinds of model architectures mean different kinds of attack surfaces, and therefore different kinds of attacks are more likely to work at different stages of the attack cycle. These are just a few of them; in MITRE's ATLAS repository, the MITRE ATT&CK equivalent for adversarial machine learning, there are over a hundred different kinds of attacks listed. So that's the landscape of adversarial machine learning, for the cyber people. Now, I assume everyone here has seen Ocean's Eleven, or is at least aware of the premise, or Ocean's Twelve, or Thirteen, or Eight. I don't think I need to belabor it: basically, it's a heist. So I thought, what better narrative structure for a talk in Las Vegas than to try and commit a heist? Unfortunately I don't have any George Clooney or Brad Pitt lookalikes on my team, but the idea of this talk is to hack the casino AI that is most relevant to them, which I found was the facial recognition AI, to get to my goal. And what that goal is, we'll also
discuss a bit later on. So this is the process I went through. First, we interview the casino staff; we pick our target models; we implement the specific attack I'm looking at, which is called a distributed adversarial region; we disguise those regions in different ways; and then we reflect on it and take some lessons learned. So, the first step: we want to do a bit of recon and understand our environment. What I do is tell people I'm a PhD student writing a paper on AI security and that I want to interview experts. This is very successful. I am actually a PhD student; I've been doing it part-time for a very long time, way too long, it's very sad. People are more than happy to chat to me about some of their most intimate AI security problems; people are extremely candid. I'm not saying that's necessarily true of Casino Canberra, they were appropriately candid, but I've been doing these kinds of interviews with lots of different organizations, and Casino Canberra was one of them, which is perfect for this talk. Over the course of the interview process, which is for the PhD and also for the company, a nice dual use there, I've
already done over 50 interviews across 43 different organizations, all on the topic of AI security: the potential vulnerabilities in their AI systems, and whether they know much about their AI systems or the AI security risks. Some initial insights: 94% of those people and organizations could articulate how they use AI; basically every organization is using AI at the moment. But only 8% could articulate how they secure their AI, meaning securing it against those adversarial machine learning attacks I was discussing earlier, things that would disrupt, deceive, or disclose information from their systems. There's a really big gap there. Now, that's not necessarily the case for Casino Canberra; they're at an interesting inflection point. Big casinos, the kinds of casinos that are in Las Vegas, already implement facial recognition AI, and many other kinds of AI, as par for the course. The building has been designed to be inherently AI-enabled: the architecture means you have to walk through certain corridors at some point so that the facial recognition cameras can find you, along with where you're able to park your car, the different thoroughfares, and all of the psychological tricks designed to keep you in the
casino. They're there because surveillance is inherent in how you experience the casino, and it was pretty clear early on that facial recognition is the most important way they do this. Even for things like money laundering or card counting, it's not necessarily AI detecting that: it's the responsibility of the dealer to identify what's going on, spot dodgy people, and alert the casino, which can then feed them into its facial recognition AI systems. So this is by far the most important use of AI. For the casinos that haven't been able to fully adopt that kind of AI yet, it still relies on their people: you still have rooms full of people surveilling cameras in real time, hoping they catch the right people through manual processes, which is really hard. Obviously this catches fewer people or entities than an AI system would, but then there's the whole privacy trade-off too. And just as every organization is trying to adopt artificial intelligence at the moment, that's true of casinos. It's very expensive, and they really have to make that judgment based on how profitable they are. So Casino Canberra is going through that
sort of inflection point too, and that decision-making process. So we move to step two: choosing a victim model. There are lots of different surveillance AI providers out there. These are some open-source facial recognition models, just a few of many, and I could also create my own custom model. But at the end of the day they're all pretty similar, because they're all built for image recognition, and they therefore all use a particular type of machine learning model called a convolutional neural network. That's basically a model loosely based on the way humans identify faces and objects, based on our brain structure: input data flows through a model of neurons and the synapses between them, and then we're able to predict who someone is, or what something is. In a machine learning sense it's very much the same: you have neurons connected in different layers, and there's a training process so that, based on historical data, you can predict future outputs. So regardless of how you decide to customize your model, at the end of the day they all rely on the same specific architecture and a very similar training process.
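The core operation in that architecture is just a small weight window slid across the input. A minimal pure-Python sketch, using a toy 4x4 "image" and a hand-picked 2x2 kernel (nothing here comes from any real model):

```python
# One 2D convolution: slide a kernel over the image and take the weighted
# sum at each position. A CNN layer repeats this with many learned kernels.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

kernel = [[1, -1],
          [1, -1]]            # responds strongly to vertical edges

def conv2d(img, k):
    kh, kw = len(k), len(k[0])
    return [[sum(k[a][b] * img[i + a][j + b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

print(conv2d(image, kernel))  # the edge between columns 1 and 2 lights up
```

Stacking many of these layers, with learned rather than hand-picked kernels, is what lets the network build up from edges to textures to facial features.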
What this means from an AI perspective, in terms of how you actually predict what a face looks like, is that everything compresses, in a mathematical sense, into a space called an embedding space. All of the higher-dimensional data captured about faces, if we were to display it as a 2D image here, or if you could imagine it in 3D, basically forms clusters of features that are similar. Facial recognition, for example, is very much based on the geometry of the face: how close someone's eyes are, what shape they are in relation to the rest of the face. So you end up with different clusters based on the features you're looking for, and based on how similar the clusters are, a model can make a pretty good prediction. The other thing that's unique about artificial intelligence models, versus a conventional cyber system, is that when you talk about a model you usually refer to its evals: lots of different statistical and mathematical measures that tell you how good it is at predicting something. Here we've just got accuracy, for a few open-source facial recognition models. They're all pretty accurate, all over 97%, most of them close to 100%.
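In practice, a matcher compares points in that embedding space with a similarity measure and a threshold. A minimal sketch of that comparison, where the vectors and the 0.9 threshold are invented for illustration (real embeddings have hundreds of dimensions):

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 means identical direction in embedding space.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

enrolled   = [0.90, 0.10, 0.30]   # embedding stored at enrollment
same_face  = [0.85, 0.15, 0.28]   # new photo of the same person
other_face = [0.10, 0.90, 0.20]   # someone else entirely

THRESHOLD = 0.9                   # similarity needed to declare a match
print(cosine(enrolled, same_face) > THRESHOLD)    # True  -> match
print(cosine(enrolled, other_face) > THRESHOLD)   # False -> no match
```

An adversarial perturbation works by dragging the embedding of my face just far enough across that threshold, without changing what a human sees.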
Of course, how you decide to test that is a question in itself; these figures are based on research papers that benchmark these kinds of models. However, the thing about machine learning models is that because they all rely on that same convolutional neural network architecture, you end up with models that converge to the same kinds of features and representations, no matter who builds the model or which organization runs the process. Research has shown that if you compare clusters of models designed to do similar things, they are at least 95% similar to each other; the numbers range from 95% to 99%. What that means is that if you take two models created by different organizations, by different researchers, and compare what they look like on the inside, they are 95% to 99% similar. This is true even of models companies hold as proprietary IP: there are only so many things you can change about a model if it's designed to do a certain thing. That's an interesting point in itself, but for us, this principle of convergence makes it really easy to attack models, because I can create a model that is almost identical to any other model, craft an attack on it, and then launch that attack at the victim via this surrogate model, because the same data plus the same training approach basically equals the same model.
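That transferability can be sketched with two toy linear scorers whose weights mostly agree, standing in for two independently trained models (everything here is synthetic; real transfer attacks are run across actual networks):

```python
import random

random.seed(1)
dim = 200
surrogate = [random.uniform(-1, 1) for _ in range(dim)]        # my model
victim = [w + random.uniform(-0.05, 0.05) for w in surrogate]  # their model

def sign(z):
    return 1.0 if z > 0 else -1.0

def score(weights, v):
    return sum(w * x for w, x in zip(weights, v))

x = [0.02 * sign(w) for w in surrogate]        # input both models accept
delta = [-0.10 * sign(w) for w in surrogate]   # crafted on the surrogate only
x_adv = [xi + di for xi, di in zip(x, delta)]

print(score(surrogate, x) > 0, score(victim, x) > 0)          # True True
print(score(surrogate, x_adv) > 0, score(victim, x_adv) > 0)  # False False
```

The perturbation was computed without ever touching the victim's weights, yet it flips the victim too, because the two models agree on which input directions matter.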
So the third step: I'm going to create what's known as distributed adversarial regions; this is the attack we're implementing here. Here's an overview of some adversarial machine learning techniques that have already been tried in the wild. We've already seen the panda image. The one in the middle is adversarial glasses: someone avoids being recognized by facial or person recognition by wearing glasses with a special adversarial coating, or a jumper with an adversarial pattern on it. The last image I like to show because it's funny: the one with the cardboard box over the person is from a study by DARPA, or rather, DARPA created a person recognition model for urban camouflage environments and asked a group of Marines to try to hack it. Basically, they defeated the model just by acting like not-people: they held branches and waved them over their heads, and put cardboard boxes on their heads. Anything they did that was a bit outside the norm was able to hack the model. I like to show it because it isn't really a sophisticated attack, but these attacks
do still work. Now, is my attack a sophisticated attack? I guess it depends. This is an attack I created specifically for urban camouflage settings. The research I do started when I was working with the defence department, so a lot of it was focused on military applications and the national security context, and the idea is that you'd be able to apply this specific attack in urban camouflage environments. This is the methodology, for those people who are methodologically inclined or just like a good diagram. Basically, I take an image of something, a ship for example here, identify the region in that image or video that is most likely to cause a misclassification, and test it by applying different kinds of adversarial machine learning methods to that region, across different models. From there I can apply lots of different case studies, settings, and kinds of objects. The point of this specific attack is that instead of having to perturb the actual ship, or whatever object is being classified, I want to add regions, distributed regions, to the environment that prevent the model from recognizing what the object is, without changing anything about the object itself. For example, if I'm looking at a ship, can I add adversarial buoys around it that prevent it from being recognized by image recognition models? You can do the same for planes and for other kinds of military platforms. And the reason you'd want to do this, even if a human can tell that something there is a bit off, is that in a lot of settings you don't have a human in the loop, or you don't have one until much later in the process.
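The region-selection step can be sketched as a simple occlusion search: blank out each candidate location in turn and keep the one that hurts the model's confidence most. The "model" below is a stand-in scorer I made up for illustration; a real attack would query the actual network at each step:

```python
# Stand-in "confidence": how much of the bright object is still visible.
def model_confidence(img):
    return sum(sum(row) for row in img) / 16.0

image = [[3, 1, 0, 0],     # toy 4x4 scene; the "object" sits top-left
         [1, 1, 0, 0],
         [0, 0, 2, 0],
         [0, 0, 0, 0]]

def occlude(img, r, c):
    out = [row[:] for row in img]
    out[r][c] = 0
    return out

baseline = model_confidence(image)
best = max(((r, c) for r in range(4) for c in range(4)),
           key=lambda rc: baseline - model_confidence(occlude(image, *rc)))
print(best)   # (0, 0): the location where a perturbation hurts most
```

Once the most sensitive region is found, that's where the crafted perturbation, or in this attack, the distributed region placed in the environment, does the most damage.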
It's more about disguising something like a military platform from automated detection. Because I'm a researcher, I unfortunately have to end up with graphs that look like this as part of the testing process. What we're really testing is the extent to which applying these distributed regions to an image reduces a model's confidence in what it's looking at. For example, if I place some distributed adversarial regions around that ship, it reduces the model's confidence that it's a ship by, say, 40%; we found that applying these regions to those sorts of objects decreases the probability by 40.4%. But here I'm applying this to casino facial recognition AI. So this is a very unflattering picture of me walking up to facial recognition detection using this region; I want to try things like adding jewelry, though the design could maybe be better. Come on, video... hang on, I downloaded it over here. So this is what it ends up looking like. I have a demo here, implementing the attack against one of these open-source models. If you're not a code person, or not really familiar with machine
learning code, the point is really that you can do this in a minute or so. You don't need to worry too much about what the code represents, apart from noting that we're taking this open-source model and creating the perturbations for different targets; this time the target is me and my face. I'm testing adding different jewelry around my face, and I want to see if I can prevent that facial recognition model, fine-tuned on images of my face, from actually recognizing me. I know this is really exciting, but you have to include a code demo, right? So here it shows a match found: that's the test with the clean image of me, without any of those perturbations; it found the match against the original image. Then I test the new image, the one with the adversarial regions applied, and we see whether there's a match. While it loads... spoiler: there was no match. Does that add to the suspense? I don't know. So I guess we could say we hacked the AI, right? We prevented it from recognizing me. But we certainly want to test it properly; we want to see the extent to which we can decrease the model's confidence in
predicting and recognizing me, and the number we end up with, like I said before, is 40.4%: across the images the model was tested on, its confidence in correctly predicting what it should be was decreased by 40.4%. I had a real problem trying to describe this as a hack. I think the challenge in applying cyber security terms and methodology to the world of AI security is that machine learning models are inherently probabilistic, so you don't necessarily get a binary answer at the end of the day; or you might, but it's more scientific to test multiple times, across multiple variables and different kinds of images. Maybe I can prevent the model from recognizing me once, but on the whole it's really about reducing the likelihood that it can predict who I am. The tough thing about applying this to facial recognition is that the model bounds the face: it only looks for features within the bounding box around my face, whereas other kinds of images rely more on context. For example, the water behind the ship helps the model understand that it's a ship.
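Because the model is probabilistic, a single "no match" proves very little; the honest metric is the average confidence drop over many test images. A sketch of that bookkeeping, with synthetic per-image confidences chosen to mirror the 40.4% figure (these are not the real experiment's numbers):

```python
# Model confidence on each test image, clean vs. with regions applied.
# Invented stand-in values, averaged the same way the real test was.
clean_conf = [0.98, 0.95, 0.99, 0.97, 0.96]
adv_conf   = [0.58, 0.55, 0.59, 0.56, 0.55]

drops = [c - a for c, a in zip(clean_conf, adv_conf)]
mean_drop = sum(drops) / len(drops)
print(f"mean confidence reduction: {mean_drop:.1%}")   # 40.4%
```

Reporting the mean (ideally with variance over many more images) is what makes the claim scientific, rather than cherry-picking the one image where the match failed.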
I kind of hoped it would be better: that the results would be more shocking, more interesting, just more of a hack, more of an attack. I was a little bit disappointed, so I took a pause and did what you always do when you want to run away from a problem: I went on a trip. I went to Europe; it was lovely. And then I realized that the problem with my disappointment in the attack was the same problem I keep encountering in my work: most of the organizations we work with talk about adversarial machine learning purely in terms of attacks and defenses, all the different attacks that could be deployed against a model versus all the different defenses that should be applied to it. But that's not really the right framing, because in cyber you don't treat attacks versus defenses as the be-all and end-all; you talk about maturity and risk. An organization has to understand its risk, its highest-priority assets and targets, and then decide on priority mitigations. And that's not something that has historically been talked about in the world
of AI security, even though it should be; that's the point we want to mature to. So the questions we really want to ask, if we're trying to hack this facial recognition model, are: what is the target? What am I trying to steal? If I'm bypassing facial recognition AI in a casino context, why? Generalizing, that's a question every organization needs to ask. In the casino context, it really means someone is possibly a money launderer or a card counter who wants to do that without being recognized and taken out of the casino. And if that's the goal, maybe go to a casino that doesn't have facial recognition AI, is the first thing. The next question we need to ask, as I prefaced before, is what it even means to hack an AI model; that isn't really the right terminology if we're thinking about risk in a cyber sense, thinking about all the risks a system might carry as an attack surface. It's more about the risk that an AI system will produce a misclassification, rather than the risk of the AI system in its entirety, because
many AI systems can do the same thing, but there currently isn't any regulation, or really any shared understanding, and certainly no requirement, for research or open-source (or closed-source) information about how robust those models are. Just because a model performs facial recognition pretty well doesn't tell you much: the different models we tested varied enormously in how robust they were to an attack like this. And that's because, as I said, a machine learning system is inherently probabilistic; it's not deterministic like many of the cyber systems we're used to dealing with. The last thing, as I've said before, is what are we trying to protect? The casino's money, their profit, their brand reputation, all of these things, but thought about from the perspective of risk, because an attack on an AI system is just part of the kill chain, right? It's not that we want one attack powerful enough to do everything; it's about all the different ways you could alter the attack depending on your target or the particular case study. The way I like to think about it is a Stuxnet-style attack versus a DoS attack, because many adversarial machine learning attacks tend to fall into the Stuxnet
style they're very cool they're very complicated they're able to you know deceive an AI system in I guess a really cool way you know a way that a researcher can be like ahuh this is a cool attack right um but at the end of the day it's actually maybe the these sorts of attacks that are that are here um that aren't quite so cool to an AI person maybe but are actually able to disrupt almost all models um maybe not in the you know maybe I'm not able to get the model to think that I'm Angelina Jolie rather than Harriet farow but I'm able to cause a misclassification almost all of the time just by disrupting it
And that is far more likely to have an impact on a casino, or on an organization, because even though that number was 40.4%, which is a bit disappointing, if you're able to change that attack, if you're able to think about all of the adversarial machine learning attacks that are out there and apply them as part of your existing cyber kill chain, the success rate is 100%. So this is the kind of thinking that needs to happen in the field of AI as well: all of the other creative ways you could apply something like a distributed adversarial region in the facial recognition context. Things like pimple patches, or, if you have access to the actual camera, sticking on a clear piece of sticky tape with that adversarial region in it. Depending on the points of access you have to a system, those are all things that could be done.

The other reason I really like the casino context is that it's all about surveillance, and it's surveillance by design. As we move towards a society where artificial intelligence is increasingly being adopted, that's something we really need to consider, because our societies are increasingly becoming surveilled by design. Whether that's a good or a bad thing is a discussion for lots of people, including people thinking about policy and regulation. But if we take the casino as an interesting test bed for this, then consider all the different ways facial recognition is being used now: in airports, for social credit scoring, for surveillance, for policing. There are all sorts of reports about how minimally robust these models can be, and there's no requirement for how robust they need to be at the moment.

Because AI security is real: these are some statistics from a company we love, HiddenLayer. They released a report that covers many aspects of the reality of AI security, but 77% of companies reported identifying breaches to their AI in the past year, which is massive, and 96% of IT leaders said AI projects are critical, which we can all understand, right? Everyone's talking about AI at the moment. This is a real attack. So even though being able to evade facial recognition AI is possible and scary, at the end of the day it sits in the context of the process it lies within and the people making decisions in that process, and that's what I encourage all of you to think about as well, in your roles as cyber people or whatever kind of person you might be, whatever kind of
role you have. It's more about bringing all of the different disciplines together so that AI can mature to the extent it needs to, given just how widely it's adopted at the moment. And if we really think about it: I was able to reach out to Casino Canberra through my, well, I shouldn't lean on that, but I am a PhD student, and I asked them very kindly if I could go into their casino and sit there and film myself hacking something, and they said yes. So at the end of the day, it's more about the cooperation, and, you know, what does it mean to hack an organization, right?

This has been a bit of the theory of these attacks. I'll also be talking about the same material at DEF CON; those talks will go a little more into the technical detail of how it worked. I also have another talk at DEF CON, in the Policy Village, about my experience working in an intelligence agency in Australia, so you can come along and hear all about that too, the things I'm allowed to talk about, of course. So please do stay in touch. I hope you found this interesting. I think there's some time for questions. I should add, I'm also looking for survey participants for this AI security
interview, so if anyone would like to be interviewed, please do. [Applause]

If you have a question, raise your hand and I'll bring you a mic.

I was at DEF CON 6 a while ago and they had a similar talk. Does everybody just roll their own, or does there exist an existing package for this?

That's a good question. Like I said, there are over 100 different kinds of attacks, and a lot of the time they're based on the same principles: it's an optimization algorithm, and you're optimizing between different constraints. So whether you're trying to effect a misclassification in an image sense, or trying to cause a specific targeted misclassification, it's all just optimization at the end of the day, so you can easily code it from scratch. But there are pre-existing packages that help you implement adversarial machine learning attacks as well.

Just curious: throughout the course of your research, conducting some of these modifications in the Jupyter notebooks, was there any attempt or situation where you were able to actually go live with those modifications against the artificial intelligence, or was it just through the modifications in the Jupyter notebook that you were able to see that the image was misclassified?

So, to understand your question: whether we're able to test it
in a live environment. Not quite yet; we've been conscious that Casino Canberra doesn't really want to do that, but we are working with some companies that look at biometrics and testing some of that now. So, for example, the Urban Camouflage research that we were doing applies the pixel alterations digitally, but to things like videos, so in theory you could apply exactly the same pixels in real life, just like those stop sign examples, if you're able to, and you can dynamically alter them as well.

Since you mentioned that surveillance is a big field for this type of technology: do countries try to make bypassing these AI facial recognition tools illegal? For me it's currently difficult to imagine making it illegal to wear weird earrings, weird headwear, etc., but are there movements in that direction, or is it even illegal in certain contexts?

That's a really good question. As far as I'm aware, there's nothing illegal about wearing something that could disguise you from facial recognition. The question is, in so far as you're able to exist in a society where it's mandated; I mean, I guess there's nothing stopping you from
walking up to the facial recognition AI at an airport wearing weird glasses, but then they ask you to take them off, you know. So in so far as you're forced to interact with AI in the various ways you live your life, that's the requirement you have. It would be interesting to see if different jurisdictions did try to regulate something like that; I'd be curious to see.

Great talk, by the way, that was really awesome. Curious: can you speak to any of the other algorithms that might be used as a layered defense against these kinds of design-based threats? Or maybe the lighting in the
room modifies the algorithm in some significant way, or other types of behavioral tracking, such as the time of day the person might frequent the casino, the way they walk, or any other kind of distinguishing item?

Yes, absolutely. Most casinos wouldn't rely on behavioral patterns like that, but other kinds of AI systems are forced to rely on time-based decision making or more behavioral information like that. The casino wouldn't be a case study for that, but where those kinds of systems do rely on other information, yes, you could alter the time of day you frequent a place or change your behavior in some way, just like traditional algorithm hacking, for those models that rely on pattern-of-life kind of stuff. Did I understand your question?

You did, and I think there was a good prelude to it. I guess what I'm getting at is: do you incorporate other forms of AI or ML to defend against the inefficiencies of the facial recognition technology? Would that be the casino's secret sauce, so to speak, where they're able to fine-tune the open-source algorithms in some way that makes them more effective?

Yes, definitely, that's a good question. I mean, if a casino were able to incorporate other kinds of decision making to make the facial recognition, or the person identification, more accurate, that would be ideal. Whether a casino decides that's worth investing in, based on the potential money they could lose, I personally wouldn't know if that cost equation would work out. For fields like policing, though, where it's extremely important to identify the right person, they do already add other kinds of decision-making intelligence, and that usually is a requirement too.

Thank you for the talk. My question was just related to the vectors that you tried. You focused on the jewelry in this talk, and you mentioned those pimple patches. What other sorts of mechanisms did you look at? Did you, you know, get a temporary face tattoo? What other sorts of things did you consider?

Yeah, I would have liked to try something like that. I think the thing with a temporary face tattoo, or something big like that, is that it's akin to the CV Dazzle example, where it's not really an AI attack; it's more just about not appearing as you normally would. So if I were to wear a
ridiculous hat or face paint or something, that would obviously also prevent the model from recognizing who I am most of the time, but then is that fair in terms of this research? Probably not. I think the idea is that it sits at that boundary where someone could be like, "huh, those are some interesting earrings," but not something anyone would really care about, you know.

I like that you highlighted the very gray line between the fitness for purpose of the algorithm and the ability to, quote unquote, hack it. Organizationally speaking, who owns this problem? Is it a data science problem or a cyber security problem?

Oh, that's my favorite question, because in every organization we work with, that is the question that no one asks. Usually we go in and find that the executive level thinks the IT team is handling AI security; the IT team thinks the AI team or the data science team is handling it; and they think the IT team is doing it. No one's doing it. Well, some people are, but on the whole, no one's really thinking about it, or the owners are people who don't know a whole lot about it. Often those owners might be cyber security people who've been given some rudimentary AI training, and it's great that they're being invested in, but cyber security is a very different discipline to AI security, so it's often very hard on those teams. AI security is a discipline in itself; there's a lot of crossover between cyber security and AI security, but not enough that you can just expect a cyber team to know what to do. So yes, it's a big organizational problem, because organizations either think it's not important or not a real risk, or they think their existing teams are fully capable of dealing with it, or they're just not sure of the right thing to do. So it's a good question.

All right, that wraps it up. Thank you, everybody; thank you, everyone. [Applause]