
In the beginning there was part one, where we learned about Tome AI, a productivity tool that makes PowerPoint presentations for you. Mostly, I discovered that slides with black borders are just way cooler than slides with white borders, so let me just fix that. There we go. There were some jokes about me being absolutely crap at PowerPoint, and that wasn't my fault, just to be clear. I was able to demonstrate how these productivity AIs can make some admin tasks easier, so long as you're really comfortable sharing your data with a bunch of third parties. The main infosec takeaway was around data loss prevention.

We went on to establish that tools such as ChatGPT were never designed to be a source of truth. In fact, it's more accurate to say that chat tools were designed to mimic human conversation: they conflate sources, get things wrong, and in some cases outright lie in order to maintain an illusion of conversation. We also picked up on this a few months later in part two, when I talked about hallucinations, confabulations and delusions in Leeds. Basically, ChatGPT is a big fat liar, and that is by design, so please stop using it to do your homework. We briefly looked at how AI-generated art was now good enough to win competitions, photography competitions, actually competent professional competitions, and we talked about how, in
this context, trust and fraud were a concern. We listened to David Guetta's deepfake Eminem, which was actually really cool, and we watched XY Chromosome demonstrate to us how impressive face-mapping tools and filters have become. Bear in mind, of course, that this was last year; things have gotten significantly better in that time. We also watched Mostly Sapien explain why we should be worried about these face-mapping tools, and how giving permission to TikTok and other apps to scan your face also enables them to map your room and everything behind you in your surroundings. Basically, technology companies are now doing some very big and perhaps very worrying things with your data, depending a lot on your viewpoint and your politics.

On the topic of politics: if you ever get bored, take a look at how this face-mapping technology enables China's social credit system. Facial recognition in particular has become absolutely ubiquitous in China, and it's used to grant access to everything from airports to supermarket checkouts. The technology's largely gone unchallenged, as the public accept the trade-off between privacy and safety or convenience. Even better, watch a Netflix documentary called Coded Bias for a real look at what can be and is being done in other countries today. But that's enough of a tangent, so let's get back on track. Despite my best intentions, part one ended on a bit of a downer, as we
started to put all this deepfake stuff into context. It was clear that this technology, in the wrong hands, can easily erode our trust in security controls such as biometrics.

A short time later in Leeds (you'd be sad if I didn't say it, wouldn't you?), in part two, I tried and failed to put a more positive spin on things, but we did have fun along the way. I experimented with the low-budget mischief-making potential of all of the AI tools offering free trials, I used AI tools to write some terrible poetry, and I forced a digital clone of Glenn to confess his undying love for me. With help from the mad scientist, we convinced ChatGPT to teach us the process for deploying weapons of mass destruction, and Kevin Roose talked on the Hard Fork podcast about the dangers of giving ChatGPT direct access to the
internet, which, incidentally, is something we do routinely now. We nervously welcomed in our new robot overlords, and I utterly failed to put a positive spin on any of this. Sure, we had fun laughing at AI and how it can't draw pictures of people eating Italian food, but that sense of unease around where AI was heading never quite left me. As much as I enjoyed presenting those talks, it felt to me that we, the people, had very little power. We were being fed scraps; just enough of the dream was being shared with us that it seemed cool and sexy. Here we are creating silly pictures whilst corporations are using the technology to determine how much we should pay for our car insurance, or whether or not we get the job interview we need. It was clear that the low-cost and freemium tools that most of us had access to last year were basically crap. There's an old quote, often attributed to William Gibson, that goes: "the future's already here, it's just not evenly distributed."

So how did we get here? How did I get to be standing in front of you today, talking about so-called AI again? Well, the way I see it, part one was about chat tools and productivity tools, and part two was primarily about voice generation and trust. And whilst I'd skirted around it in part one, thanks to the way Tome AI worked,
I hadn't really looked closely at image creation, or image generation if you prefer. Complete aside for a moment: I'm ridiculously pleased with how that text worked out. If you've had any experience with these tools at all, you'll know most people nowadays just use Photoshop to put text in afterwards. Now, it's the end of 2023, I'm thinking about image generation, and it seemed really on trend. The news was full of people talking about art and plagiarism, and some of the image generation tools were getting cheaper; cheap enough that I felt comfortable throwing my own money at them, just to see how they work. So there I am, minding my own business, trying to create my own characters in Midjourney, when I start to see more and more people online quoting Cathy O'Neil and her book Weapons of Math Destruction. In particular, she makes a point about Big Data processes and how they codify the past. This comes up a lot in that Coded Bias Netflix documentary I mentioned earlier as well. Now, I don't often read directly from my slides, but this point Cathy makes warrants a moment's focus. She says: "Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting
fairness ahead of profit."

Now, if you can, I'd like you to try and keep that quote, or at least the spirit of that quote, in mind for the rest of this talk. Corporations are now making decisions about people using data that is fundamentally rooted in the past: insurance, education, criminal history. Heck, I'm even aware of companies that feed historical data into computer models to calculate what somebody can afford to spend this month, and that spend value is then used to guide marketing campaigns. I'm sure these tactics are familiar to most people working in an industry where affordability is a consideration, and that's everything from banking to betting and gaming. Sure, the UK GDPR restricts you from making solely automated decisions, including those based on profiling, but despite that there's a roaring trade in AI tools for customer insight and customer support right now.

Which should bring you bang up to date, with me standing here today in front of you for part three. And for the record, both of my previous talks are up on YouTube, thanks to the lovely people at BSides, Koopa's team in Leeds in particular, so if you're interested in watching them back, please feel free to do so. All you have to do is search my name; there are not a lot of people that spell Liam the way my parents chose to. So, we're 26 slides in and we're only
about to get started. Let's fix that strapline for starters, and let's not forget our theme tune either. That's better. Strong strapline you've got there, Liam; let's take a look at that, shall we? "AI is an ableist, racist, sexist tool of the oppressor, and we should all be ashamed of ourselves: a humorous look at amplified bias in the current crop of machine learning tools. A non-technical tech talk." Well, as I mentioned a few slides back, I was feeling uneasy about the direction of travel. After all, AI tools have been trained on historical data; lots and lots of data, Big Data, from the internet. And I think we can all agree the internet is not the bastion of righteousness and equality we'd like it to be. So let's start with sexism, shall we? Is that going to play?

"So I used another generative AI tool. This one is called Midjourney. It generates images from text, and it will produce four images at a time. So what I did was I asked it to produce, from its own imagination, the CEO of a large hospital system. Now, in the United States (it's like we're in the 1800s there compared to you), we still have for-profit hospitals and, you know, CEOs running them. But for our purposes, I asked Midjourney: imagine the CEO of a large hospital system. Here's what I got. All right, so we have four, let's call
them mature, uh, men, white men, who are the CEOs. This should not be a surprise: overwhelmingly, the largest percentage of the Fortune 500 CEOs are mature white men, so not a shocker. All right, so let's try again: how about the CEO of a mid-sized hospital system? Okay, now we've got four shockingly attractive men. I like the guy on the bottom right with the somewhat beard. Okay, that didn't work. How about the CEO of a very small rural hospital? So I'm detecting a pattern, right? So I was like, what the hell is it going to take for the system to produce a woman? So I'm thinking through, like, how do I make this work? So then I was like, okay, what part of the United States has the most women? So I went to the US Census Bureau, and it turns out the city in the United States with the most women is a place called Jackson, Mississippi, which I have never been to before and had to look up on a map. That's where all the women are in the United States. Okay, you guys ready? Should we do it together? CEO of a hospital system in Jackson, Mississippi. And... yes. All right, so this is kind of annoying. So now I'm like, again, what do I do to get a woman produced? So I take a completely different tack. This time I say: the CEO of a company that
makes tampons. That's what I got. Yes, that's a tampon. I'm definitely going to put that inside myself, yes. Okay, now this is hilarious-slash-apocalyptic."

Well, that was Professor Amy Webb talking at the Nordic Business Forum, and I think she just gave me an early win. What about all that other stuff in your strapline, Liam? Can you believe this person is claiming AI is racist? It is, and to prove it, okay, let's ask it to generate an autistic person. "And let's do it again. And once more. And again. And then... how about... are we starting to see it happen?" "You seem to think you're proving
something?" "Here's more. And more. And more. How about this? And another one. More, more, more, more, more, more." "Do you see any diversity? Say, race, gender, age?" "Oh. What? Most were... and, weirdly enough, a disproportionate abundance of red hair." "How many did you do until you gave up?" "More than 100. And the closer you look, you start to notice other trends. Like, were any of them smiling? It was all moody, melancholy, depressing." "I noticed you used 'lifelike photo' and 'photojournalism' in your prompts." "What? When I didn't include those, I get puzzle pieces in everything. Some of these are cartoons; others are... whatever this is." "So AI is racist, sexist, ageist, and not only ableist, but uses harmful puzzle imagery from hate groups like Autism Speaks. Why?" "The AI we have today is not artificial intelligence. Artificial intelligence doesn't exist yet; this is just machine learning. Now, here's a bit of an oversimplification, but this machine learning focuses on patterns. It looks for the majority of similarities and then excludes outliers. So if the data it is accessing is mostly young white boys, in this case, it will amplify that bias." "So are you saying that, in order for AI to be an effective tool, you have to be smarter than the AI?" "Scary thought. For someone who is looking to confirm their biases, or is unaware of their subconscious bias, it becomes a tool of the oppressor. I think plagiarism software, I mean machine learning, I mean AI, has a lot of potential to do good, especially for the disabled population, but we must be aware of its limitations and pitfalls, and probably some government regulation is in order; at the very least, to stop these companies from profiteering off of artists' work without compensating those artists. In the 148 images I generated, Midjourney depicted an autistic person as being female-presenting twice, as older than 30 five times, as white 100% of the time, and zero were smiling."

Well, that was Jeremy Andrew Davis speaking there, and [ __ ] me if he didn't just lay out my entire talk.
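That "looks for the majority pattern and excludes outliers" explanation can be sketched in a few lines of Python. This is a deliberately crude caricature, not how a real diffusion model works, and every name and number in it is invented for illustration; but it shows the core point: a mode-seeking system turns a 70/30 skew in its training data into a 100/0 skew in its output.

```python
from collections import Counter

# Hypothetical training data: demographic labels attached to "CEO" images
# scraped from the web. The 70/15/10/5 split is invented for illustration.
training_data = (["older white man"] * 70
                 + ["woman"] * 15
                 + ["younger man"] * 10
                 + ["nonbinary person"] * 5)

def mode_seeking_generate(data, n_images=4):
    """Caricature of a model that optimises for the most *likely* output:
    it finds the majority pattern and reproduces it, excluding outliers."""
    majority, _ = Counter(data).most_common(1)[0]
    return [majority] * n_images

# 70% of the training data is one demographic; 100% of the output is.
# The past isn't just codified; the skew in it is amplified.
print(mode_seeking_generate(training_data))
```

Real generative models sample rather than always taking the single most likely answer, which is why Jeremy got an outlier in roughly two of his 148 images instead of zero; but the pull toward the majority pattern is the same mechanism.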
He probably did it more eloquently than I ever could, and in about three minutes instead of thirty. So, okay, that's a heck of an accusation, right? So let's look at some other tools; it can't be this bad for all of them, can it? I know: Midjourney's biggest competitor, Stable Diffusion. Let's take a look at that. Now, I'm going to drop in a very quick warning here before I play the next clip. If, like me, you're old and grey and you can't get through the night without needing a pee, and maybe you grew up watching the six o'clock news, then this is going to come as something of a shock to the system, but this is how the youth of today get their news. And I'm nothing if not hip with the youth.

"AI image generators are being trained on explicit photos of children. More than 3,200 of these photos were found in a database that was used to train Stable Diffusion and other tools."

Now, let that sink in for a minute, because that's a big deal, right? People get sued for saying stuff like that if it's not true. So let's look into this a little bit. I am well aware that AI image generation models have been used to create porn. There's even a genuinely amusing war going on right now between content creators and the censors, as they try to stop people making pornographic imagery with these
tools. You'd be surprised at just how many imaginary but hauntingly beautiful people appear to have hot glue on their everything; use your imagination. And in order for the image generators to be able to do that, they must have been trained on pornographic material. Heck, it even makes sense, if your aim is to train a model on how human bodies can move and interact with each other. But speaking for myself, I really struggle with the ethics of this. Even if you can get yourself to a place where stealing somebody else's artwork is okay, can you say the same of child sexual abuse material? And whilst the Stanford article mentions 3,200 images from the LAION-5B data set (I'm pronouncing it "lion" because it kind of makes sense to me, but I might be wrong), the reality is that this study focused on Stable Diffusion simply because Stable Diffusion publicly acknowledges their training methodology; significant parts of the tool are open source, and we can see how they work. The fact is, Midjourney and many other tools are trained on this data set, and they had to be, or they'd have missed the rush to market. Now, this was a very big story a couple of months ago, and the LAION-5B data set was taken offline once mainstream media jumped on it, but the Stanford report specifically called for
a clean slate. Delete all the tools, they said; delete all the tools that had been created using this data set, and start again with certified clean, good-quality data. And of course, that has not happened. Instead, we've got a censorship war. As I mentioned before, limits on what type of imagery can be created as output have been imposed, but the training data remains. And to save you all reading all these articles: Midjourney never responded to any requests for comment, whilst Stable Diffusion publicly acknowledged the issue and promised they'd do better at restricting output. Nobody has offered to clean up their training data, but they have mostly now started to restrict people's ability to generate adult content.
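For flavour, output-side restrictions of the kind described above often amount to little more than a keyword blocklist bolted onto the prompt, with no change to the training data underneath. A minimal sketch, with an entirely invented blocklist, shows both how such a filter works and why the resulting censorship war never ends:

```python
# Hypothetical output-side filter: the blocklist contents are invented
# for illustration; real services use far larger (and secret) lists.
BLOCKED_TERMS = {"nude", "nsfw", "gore"}

def prompt_allowed(prompt: str) -> bool:
    """Reject the prompt if any blocked term appears as a word in it."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(prompt_allowed("portrait of a hospital CEO"))  # True: passes
print(prompt_allowed("nsfw picture"))                # False: blocked
print(prompt_allowed("nudé portrait"))               # True: trivially bypassed
```

The last line is the whole story of the war between creators and censors: a misspelling, an accent, or a euphemism slips straight past a word-match filter, the vendor adds the new variant to the list, and the cycle repeats, all while the model remains trained on the same data.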
Now, before I move on, I'm going to address an elephant in the room, as it were. I'm purposefully not going to talk about deepfake apps today; in fact, I've redacted the name of this particular tool, simply because I don't want to inadvertently drive traffic to their site. As far as I'm concerned, tools like this are openly criminal products, designed to monetise harm. My interest today is in the legitimate and publicly available tools that purport to be suitable for mass consumption; deepfaking porn is something you can learn about in your own time. Just before I talk about this one too much: because we sadly
had a bit of a technical hitch, I'm going to run over by a couple of minutes into the coffee break; I do apologise. So, this is Adobe's Firefly, and there are a couple of surprises that came out of me trying to replicate other people's findings. Now, apparently Google had planned to use the related LAION data set called LAION-400M, but they binned it off after they found troubling imagery and stereotypes in the data that made it unfit for use. Well done, Google. And Adobe announced that their Firefly tool is trained entirely on their own stock imagery. Now, there are numerous cases of reported IP theft where Adobe Stock images are concerned, but the chances are good that they've got at least the least amount of nudes or similar adult material in their AI. What's more, when I tried to replicate some of the ableist, racist and sexist content results in Firefly, I couldn't. I was genuinely impressed. So I thought, let's try that last one; let's try to generate an image of an autistic person. [ __ ]. I think Adobe have really dropped the ball here. I've got no idea... well, I was going to say I've got no idea why everyone's plugged into a VR headset, but actually I suspect I have. Where we talk about autism, there's a "disconnected from reality" vibe that sort of gives me
pause for thought here, and I'm really uncomfortable with the raised arm in the picture on the right. So just follow my logic for a minute, because we're talking about amplified bias, right? We're talking about historical data; we're talking about training models. We try really hard nowadays to avoid the term Asperger's syndrome, for two reasons. The first is because the authors of the Diagnostic and Statistical Manual of Mental Disorders wanted to avoid a misconception that Asperger's was somehow a different condition to autism. The second reason is because Hans Asperger was a Nazi: he collaborated in the murder of children with disabilities under the Third Reich. That is not up for debate. So I can now see the threads of logic, that amplified bias, that comes to this point: disconnected from reality; Asperger's syndrome; Third Reich.

And this is about where things start to get weird, because as I'm looking into this, trying to replicate some of this ableist, sexist material for the purposes of this talk, Midjourney in particular starts to give me more diverse results. It was almost as if they were responding to criticism in real time, as it happened in the media, as I happened to be looking into this. And even when I reached out to my network of friends to try and replicate these results, we all started to see the same thing: more and more diverse results over
time. I was starting to feel quite hopeful, until the Nazi thing smacked us in the face again. This is Google's Gemini image generation in particular: it started to inject diversity where it really shouldn't exist. You'll notice the highlighted section in the US senators example; no diversity was requested in the prompts that generated that imagery, but Gemini added it just for fun. Heavy on diversity, light on historical accuracy. And then Google took the whole thing offline. Now, I've heard from reliable sources that Microsoft's Copilot Designer was creating pictures of demons and monsters in response to prompts like "pro-choice", but, like Midjourney, they started blocking a wide range of keywords before I could replicate those results. Oh, and just in case you weren't aware, Copilot Designer is just a rebrand of Bing Image Creator, which in actual fact is just OpenAI's DALL·E 3 with a Microsoft skin on it.

When I originally conceived of this talk, I thought this was going to be a quick win. AI is bad; it's clearly bad. It was trained on stolen and illegal data, and it was brought to market fast in an effort to ride the hype train. But all this unethical, morally problematic stuff will get called out by the media, and the companies will have to put things right, won't they? Weren't they? No, apparently not. This stuff has just been swept under the rug. Because, I don't know, because capitalism?
Maybe it hasn't been fixed because a lot of really rich people and corporations have invested a lot of money in the technology, and I genuinely think it's too big to fail now. So the media keeps spinning it, and AI is still cool. And it really is cool. I mean, despite the fact that I've stood here ragging on it for 25 minutes, I'm still obsessed with the idea of turning photographs of me into Star Wars characters, and I like to think I'm getting quite good at character studies and portrait photography now. This is all very interesting, or at least I think it is, or I wouldn't be stood in front of you today. But I suppose: how does this affect us? How does this relate to infosec? And that's where I'm going to get on my soapbox for a moment, and ask you to call to mind that Cathy O'Neil quote. You see, I genuinely believe that we infosec professionals are the good guys. We're trying to help; we're trying to make things better. I've lost count of how many times I've had a conversation where I've pointed out that a secure product is better quality than an insecure product. We literally enable delivery of a higher quality product; that's in everyone's interest, right? And if you've got any number of professional qualifications, the chances are you've had to agree to a code of
conduct. Even just to be here today, you've done that. And those codes of conduct all basically say the same thing. Whether it's ISC², ISACA, EC-Council, whatever, they all say the same thing: just don't be a dick. And everyone in this audience is in a unique position within the company that you work for, because you're working on governance, on risk, on compliance, on security. You're the auditors; you're the champions; the advisors. So when your boss starts talking about adopting AI at work, go in with your eyes open. Understand where this software has come from and where it's going, because this ride is going to be rough in places. Be ethical; be moral. Speak up for the customer, speak up for the end user, speak up for the artists, and speak up for the victims. Speak up for privacy and decency. Understand that sometimes people should be allowed to leave their past behind them. I really want all of us in infosec to be on the right side of history on this. And just to really ram that point home: IBM knew in 1979 that we shouldn't be letting computers make management decisions. Security isn't about compliance with a standard. If done well, it improves the quality of a product. If done very well, it's about integrity, and doing the right thing, and putting people before profit.

I'd like to make one final point, if I may, and it's about that Gibson quote I
mentioned earlier, the one that goes "the future's already here, it's just not evenly distributed". I read an article by AA Schwartzman recently where he challenged that idea directly. He says that the more we use this quote as a mantra, the more we relinquish our own agency; it puts us all into the position of living in a future that belongs to someone else and never to ourselves. And I want to leave you with that thought, because maybe we can use our agency, as information security professionals, to make sure this technology is used wisely, ethically and securely. And maybe, for once, I can leave you all on a positive note: because you have the power to make a difference here.