
okay I got the anytime so I'm going to start I am John below for those of you that don't know me I am the head of infos SEC at uh Jasper AI we're an AI company that does marketing stuff um I also uh serve on the board for Salt Lake ciso which is a group of local heads of security and cesos I made a mini badge for Sac con last last year and we lost them for since then but we found them and I have a few extra so I took some up to the soldering Village which I recommend you stop by and there's a few left here so if you want a cool mini
badge that uh I made my kids help me design then uh come grab one um and then I'm also a uh an adviser at a VC firm called TOA Capital uh I also recently survived Disneyland and this is John 2D2 which I designed which I'm very happy about uh okay let me talk about why I'm here a little bit what I want to talk about so I've been working in this AI space for about two years um and I've seen some interesting things and as I talk to colleagues and and other people in the security field about it they always find it super interesting so two takeaways from that for for for everybody here if
you haven't presented at a conference like this um you should do it things that you do are interesting to other people even if you don't think they are because this is something that I had to be convinced of um so I recommend that you do that and then two uh AI is a weird space uh and people are very confused about like how to deal with it so um you know we've we've had kind of this explosion in in uh what's been happening uh in the AI space over the last couple of years when I started Jasper chbt wasn't out and it was really hard to can like talk to my family and friends about what I did they
had no idea what I was talking about it's much easier since chbt is out which is nice but since then there has been this like explosion of of AI systems that have come out for various use cases um and more and more we are seeing these systems come into play in the companies that we work in uh and it introduces a lot of potential issues we have to do kind of review as Security Professionals that's kind of where I'm speaking from we have to do reviews of all these systems and make sure they're safe and data is not being leaked through them um and all this stuff and like this was a headline that I just pulled uh this week
that you know these are like a hundred new partners that Google announced at next um this year so with all of this proliferation of of AI systems everywhere everywhere what do we do well our responsibility is like to do third party vendor assessments right with a show of hands who has done this who has like evaluated a third party that your company uses Cool who likes to do that is that is that a thing okay listen I I've talked to people who like to do it I don't know I'm not in that boat but I've talked to people who uh who do it so evaluating third parties in general I feel like once AI came into the mix it
not only became harder but more confusing to figure out how to do these reviews um but I don't think it has to so uh let me jump in really quick um this is just another explanation third party risk is all over the place um it's difficult to do in general uh fourth party risk is also like a thing this actually came up in a conversation um that I was in a few months ago that you know there was a four fourth-party breach how do we even do four party reviews and I also think that there's evidence that there is fifth parties that you might need to review also but we're not going to we're not going to
get into that okay so let me jump into a little bit about first how to do um uh vendor risk assessments um and then we'll talk a little bit about kind of how we apply that to AI stuff I recognize it's 230 on a Friday so just keep the snoring to a minimum during like the vendor risk assessment part that's all I ask um so the first thing you need to do in going into one of these evaluations is what kind of third party you're are you looking at uh and and then consider ranking those if you're looking at a Financial Service Company um you know like a card provider or or a bank provider it's going to be very different
than like your swag provider now we have learned through difficult times that both of those systems can be exploited but you're probably going to look at the risk of those systems differently um and then look and decide what what types of risks you want to look through so this was like a a list that I just pulled off the interwebs about like the some types of of risk that you might want to look at um and again this is going to be different right you're not probably not gonna have a lot of Downstream risk from a swag vendor but if you if you have a large language model powering a you know an HR System that you have like there
could be some some serious Downstream risk uh to that getting exploited or or having problems there um same kind of thing if you're if you're using AI to power um even a development experience right like if that if there is injection that happens there there is one serious operational risk if you can't deploy code anymore um but again Downstream risk to other other areas okay I re I highly recommend that before you get into one of these reviews or especially into like a meeting with a third party that you're reviewing know who you're talking to like so one of the motivations for doing this is I've been on like the receiving end of these reviews like hundreds of times now after
two years and you know not only because it's an AI system but because people come ill-prepared I've heard this phrase from like the security person on the call to their marketing counterpart what are we what are what is this what are we looking at what does this do you know they often enter these situations knowing nothing and so uh you know don't don't be Andy dwire here know what you're looking at um the other thing that we see a lot and we see as I'm doing my own like third party vendor reviews is these AI companies are kind of all over the place right and I recognize that as one of them that there's the two guys in the
garage problem um that many of these companies are literally are just two guys in a garage building like an AI system from an available model so know the company you're talking to uh know what potential risks you're going to see that kind of thing okay on once you kind of have an intro uh you need to assess the controls the best companies are going to have an available security portal um I'm not going to like uh shill for one of them um but they you know there's a bunch out there and uh they're all great for this kind of thing um and then if you are the vendor if you have people doing reviews of your
company have a security portal it's it's really simple it makes the whole process significantly easier uh and makes everybody happier so often you're GNA you know have a have a process you're going to follow collect reports talk to ISO that kind of thing the next step I think is like kind of obvious but has to be said because I've seen the opposite you you should actually like look at the reports that you collect um I I've I've seen this that people collect these reports and literally don't look at them all at all there's no follow-up questions um follow-up questions from this kind of thing is are great and a lot of times they can get you to a
better place uh if you have concerns risk concerns about a specific company um and then if you if you've kind of done the previous step that we talked about and you've kind of teared your your vendors and you know what type of vendor you're looking at that will kind of help you know how much to dig but I did want to take us a minute here I'm sure I'm on the right slide I did want to take a minute here because I realized up until like I don't think anybody ever like told me how to evaluate a a security report um I just kind of made it up and found a lot of things I was doing wrong so I don't have
time to like go through this in depth that's probably a whole other talk but let me just talk a little bit about sock report because uh it's the most common one it's the one you're going to see the most um first is a sock 2 report is not a certification there is no certification tied to it it is simply a report on controls that you create so it's important to rec recognize that okay aak 2 report is made up of four primary sections the first one is the independent audit report um and this is uh the the the description that the auditor writes um at the very beginning um and often uh there's going to be
terms in there about the kind of the result of the report like unqualified means the company kind of passed the audit with no you know observations that they want to make um qualified that the company did well but there you know might be a couple of areas that you want to pay attention to uh the next one is adverse in that one the company has failed the audit essentially as far as you can fail a stock to audit um which is not great and then uh there's actually another one that they you the term they generally use is disclaimer of opinion which means the auditor does not have enough information to to make an observation or to make a
conclusion okay the next one is Management's assertion and this is where you as the company that gets audited gets to write kind of your uh your take on the report so you get to explain what the report is uh you get to kind of talk about what your systems are what the scope is that kind of thing that this is kind of your opportunity to talk about what you want to in the report uh the third one is the system description um it's a system description of the controls and of the environment that the controls apply to uh so in here you're going to have like System Scope you're going to have uh system components what infrastructure is used
you're G to list out the control framework um there is going to be a list of any system incidents um and then um any any additional information about user roles user responsibilities that kind of stuff and the last one is a detailed list of controls um and then what the outcome of the testing was so in here you're going to have like all of your control criteria listed it'll this section will tell you the the trust the trust criteria uh that was used there'll be a control number mapping um all that good stuff and then there'll also be results um for each of the controls whether there was observations or or issues or whether it
was an unqualified report so I wanted to go over this because I've never seen anybody do it I never had the opportunity to do it I hope this is helpful all right sorry we'll get to some AI in this in this talk about AI uh so let me go over just really quickly some some kind of questions that I think you should consider if the if the vendor that you're evaluating is heavily focused in AI right if there's a if there's a large language model backing the the system that they use consider some of these and again just like with some of the other stuff it's going to depend some companies are just kind of
shimming AI into their solution and it's you know used for one little thing over here some are based very heavily and wouldn't function without it so take that into account but you're going to want to ask is this a hosted llm that they're using like the API to open AI system or is this something um sorry I had that back is this a public uh system like open AIS or is this you know like an open source llm that's they're hosting in in their environment there's going to be some like follow-up questions to that um like what what specific version of the model that's being used the models are being updated very rapidly uh I I
think I saw an announcement that open AI released an updated version of Chad GPT like this morning and but it's happening like constantly so make sure you know specifically what it is what versions being used uh talk about how it was trained right so this is going to take into account a bunch of these questions around ethics and bias and and these are big problems that AI systems have that that they are uh I think I think the argument that I hear people make sometimes times is the world is biased right and so because of that the the data that gets fed into AI systems is um sometimes um unintentionally biased so make sure that there has been
some some care taken to remove some of that bias consider what ethics has looked like for these companies um we actually did an Ethics related podcast for for Jasper that that I was a part of it was fun um if you want more information about how we do ethics internally or or like an example of how a company can do an Ethics board for AI was pretty good um and then how you ensure privacy I've got a slide where we're going to talk about this a little bit here next but uh privacy is a big thing around AI models there's also a whole new host of security tools for llms they'll protect you against prompt injection uh against
uh poisoning of the data set those are things you could you should take into account um and then there's a there's a whole question around indemnification especially for systems that generate art and video uh are you getting any in indemnification from the provider that you're using you know if some company comes and decides to see you for allegedly stealing their art are you on your own or are you getting any kind of protection so I'm sure there's a lot more of these um but consider the questions that you want to ask to specific uh AI vendors all right oosp has some great information this is the first piece I'm going to share from them they have a
whole um AI security and privacy guide now I'll point out these privacy questions are kind of the same that you would consider for any other system um but you're you're going to want to understand that you know when when the system that you're using is collecting uh pii that that data is is only used for the purpose that it was collected um that the data is fair that you have data rights that you have the ability to delete data these are some of the questions you're going to want to ask also um go to oas's website there's a huge detailed um explanation of all these it's a great resource the next one is uh more applicable if you're
evaluating a large language model in and of itself or if you're going to use a large language model internally um but OAS releas they are top 10 for for llms and there's some great stuff in here um I talked about prompt injection earlier there's uh a company Lara that created that uh that cool gandal uh game that everybody played um they actually have now a tool based on the research that they did with that game that prevents and you know is there to prevent and protect against prompt injection um there's a bunch of these um and I'll I'll kind of let you dig into it but I want to get on to the next stuff um
to talk about so there's a bunch of regulations in this space Also that that I think you need to be aware of the first one that I want to talk about is the White House memo on AI that they released I don't I think it was late last year that memo had three kind of core uh features to it one goal was to strengthen AI governance the next one was to advance responsible AI Innovation and then managing risk for AI now a lot of this stuff applies specifically to government use cases but uh I I know there are people here who work for government agencies and some of these principles can be applied and I think it
won't be too long before we see kind of General um legal requirements that that require some of this stuff so I wanted to call some of the um interesting things uh in the in the memo itself one is that uh developers of the biggest llm systems um have to share their safety and test results um with the with the government there are also developer guidelines for federal agencies where if they're going to use uh llms in their systems they have to uh evaluate effectiveness of privacy controls that they have in place um and then there's a couple of specific actions one that I thought was kind of cool especially if you're like looking for a new job uh every organization um
every the head of every agency has to appoint a Chief AI officer within 60 days of the issuance of this memo so maybe it's too late if you're looking for that job but um and then also every agency has to develop plans for responsible use of AI sharing information that they find from it if they develop an AI system they actually have to share the data sets and the weights that were used to to generate the AI system so this is a developing space but there's a lot of requirements that are kind of coming down the other one that was also passed recently is the euis act um that that just went into effect but I think the way they
approached it was pretty cool they took a risk-based approach to AI systems they have these four risk categories that categorize AI systems in uh in one of these four categories I'm gonna go through each of them really quick we're because we're almost at time here um the the first one is the unacceptable category so this is these are applications that have the potential for manipulation through like subconscious stuff some of this stuff's pretty pretty crazy um through exploiting vulnerabilities um and exploiting people with disabilities or or um older folks um also in the unacceptable category are social uh any any kind of social scoring system using Ai and then they have also completely banned the use of remote
biometric identification systems and uh there's a timeline for when all this stuff has to be kind of in effect I'll I'll go over at the end the the high category is uh you know kind of high-risk stuff like Biometrics critical infrastructure education um and then any uh any of these high-risk uh AI systems have to be registered with the EU so they can be used but the EU wants to know about them and then the limited category is where most of your kind of general purpose AI systems are going to be this is where chat GPT is any kind of chat bot that you use is probably going to fall into this category couple of the rules they put in
place you have to ensure that your users know they're interacting with an AI so you can't hide it they have to know and then if you are generating images excuse me im images audio video all of that has to be disclosed that it is AI generated content and then there's a minimal category which is like stuff we've been using for a long time I kind of consider this more like ml stuff but it's like video games spam filters that kind of thing and then the uh the timeline for these uh this stuff is the prohibited stuff has to all be U phased out within six months uh within 12 months all the general purpose rules
go into effect for you know all the general uh chat GPT style stuff and then within 24 months all the regulations take effect okay that's that's what I have today I wanted to leave a little bit of time for questions if there are any um but thank you for your
time yeah
yeah complimentary user entity controls right yeah so the controls that are only in effect if if the user does the right thing I guess is good way to describe it yeah uh yeah there's there's I don't know is that do you know if that's an optional part of a report it's it's in all of them okay um yeah because there's some stuff that like some audit firms do differently than others I um yeah it's a good it's a good thing to point
out the the top one the unacceptable um yeah these are forbidden by the eua ACT um that's my understanding not not not a lawyer not a
lawyer oh uh so you're asking like what happens if you do it
or yeah I yeah I think the question is kind of what happens if you use it if if you do this kind of stuff my I I don't know first of all I'll start there I I did hear a commentator said that there's likely there's a likelihood this will be enforced similar to how gdpr is enforced um that it's enforced against companies that do this kind of thing so uh again very much not a lawyer but I suspect the EU is not going to care as much about individuals that do it but where they draw that line who you know this just also passed so I don't know if there's been a lot of discussion about
enforcement
mechanism yeah I don't know I don't know um yeah the White House one was kind of an early stab and it was also uh the approach was just to cover federal government agencies essentially but I would expect that's probably a model for any future legislation that does come other questions oh you're gonna have to talk
loud if I sorry you're kind of far back there but if I heard it right the question is how do you figure out what data was used to train a model yeah uh sometimes you can't uh that's just uh the reality of some of the models some of the model makers make it kind of public what type most of them make it public what types of data were used like I I think open AI talks about you know scanning some percentage of the internet and and kind of where their data sources come from but uh it's going to be model dependent the open source ones are probably more likely to talk about that um yeah does that answer your
question okay maybe it's not satisfying but it's it's an answer okay this thing says I have I'm over right is that what this says okay thank you