
Great, thank you. A lot more people here than I imagined at this stage of the evening, so thanks for hanging around. A brief intro: I've worked in IT and information security for the last 15 years, and I'm currently with C-Risk, who are recognized global experts in cyber risk quantification. I've been working with them for the last three years, and today I'm going to focus primarily on cyber risk quantification using the FAIR methodology.

So what is cyber risk quantification? I have a couple of definitions here, from Gartner and ISACA and our own, but the key message to get across is that it measures risk and explains it in business-relevant or financial terms. One of the key benefits is being able to communicate with the business teams in your company. Going back to the last speaker, who mentioned getting that senior management buy-in: being able to speak their language is one of the key things.

Briefly on FAIR: it started around 2001, when Jack Jones, then CISO at Nationwide Insurance in the US, began developing it. In 2013 he joined forces with The Open Group to create the Open FAIR standard, covering the taxonomy and the risk analysis method. In 2014 a book on FAIR was published, and in 2016 the FAIR Institute was formed, a non-profit that promotes the use of FAIR across the globe. There are chapters across Europe, Asia and the US, growing all the time. It's a standard framework, similar in that sense to CIS or NIST, and it works well with all of those frameworks. It's used by a number of international organizations, and it's increasingly referenced in regulations across the globe, including the SEC requirements in the US, DORA in the EU, and some German regulations as well.

So what is FAIR? It's a standard taxonomy that provides a common language for communicating risk. That's very important: it doesn't allow for misinterpretation, because as we often see with risk, people have different interpretations or definitions. FAIR has a defined taxonomy, and it includes a methodology for modeling, analyzing and quantifying cyber risk. It's complementary to other models, as I mentioned, like ISO, NIST and EBIOS. It's open source and vendor neutral. There are currently over 16,000 FAIR Institute members internationally, and around 50% of the Fortune 1000 are covered among them.

FAIR defines risk as the probable frequency and probable magnitude of a future loss. Using this definition, the FAIR methodology sets out the steps for how we can do this.
So the first step, and probably the most important, is defining your risk scenario. Properly defining the scenario means everyone understands it, avoids confusion, and sets the scope from the start. The next step is quantifying the scenario in the analysis phase. There are two parts to this: the first is the likelihood, or what's called the loss event frequency, and then we evaluate the loss magnitude. I'll go into more detail on these as we go. The final step is then using those results and presenting them back to senior management or whoever needs them.

So why use CRQ? A lot of people here have probably seen something similar to this heat map before when it comes to measuring risk: red, yellow, orange, green — what does it really mean? If senior management come to you and say we have three red risks, which one is the top risk? Quite often nobody can tell the difference between them. So, moving on from that, one of the key use cases for cyber risk quantification is communicating to the board and executive management. That's not to say heat maps are wrong, but using quantifiable data we can better define what we mean when we say something is high risk or red: we have defined ranges, or a defined timeline of when a risk scenario might occur and what the impact would be. It helps organizations understand information risk in business terms.

I've listed multiple use cases here and I'll go through a couple of them in slightly more detail, but these are probably the most common ones we see today: understanding your third-party risk exposure; the impact of a merger and acquisition on your company; how material an incident would be; optimizing your cyber insurance coverage; optimizing your information security budget; facilitating regulatory compliance, where, as I mentioned earlier, we're seeing quantification of risk increasingly mentioned in regulations over the last number of years; and choosing efficient risk reduction strategies — because often the first question is: we have all these risks, where do we start?

So why do it? It falls into three areas: understand, discuss and communicate, and decide. I won't go through these in full detail, but the key first step is that we want to understand what our main risk scenarios are and how much risk we carry with them in financial terms. We want to communicate which activities would reduce that risk the most, so we can prioritize control improvements or mitigation, and then decide what we're going to do about it. That can be optimizing your cyber insurance coverage so you have more cover for specific losses, or communicating with your regulators to let them know you have analyzed your risk and understand where you're vulnerable.

Quickly, a couple of use cases — these are the kinds of questions we see and how CRQ would help. How can I get management to understand our top risks? By explaining the scenario in business terms; that gets you the first step. How can I explain the need for additional investment? Like was mentioned earlier, you can show the potential impact of a risk scenario and, based on what you can do about it, show how that impact can reduce over time. How do I demonstrate the true impact of a data breach on the company? Everyone is worried about data breaches, but they often think it won't be too bad if it happens to them, or maybe they underestimate the number of PII records they hold. CRQ can help you prioritize those risks so you know which ones to focus on first.

When it comes to third-party risk, this is obviously a growing topic, and we see it with supply chain risk as well. What happens if one of our providers suffers an outage? We can quantify what the impact on the business would be if that provider was gone tomorrow. What if they suffer a data breach? We increasingly share data across multiple companies. And how do we determine our most important third parties? By doing this kind of analysis we can assess our third parties and decide which ones maybe we shouldn't work with, or where we — or they — need to implement additional controls to protect both themselves and us.

And the third use case we see often is cyber insurance coverage. The first question a lot of companies ask is: should we invest in cyber insurance coverage, and how much? By determining the loss and the impact, you can determine how much coverage you need. If you have existing coverage, are you actually covered for the cyber events you think will affect you most? How much of your potential losses would actually be covered? There are a lot of clauses in these insurance policies that exclude things you would think are covered. And then: what type of losses affect us, and what's our coverage level? Quite similar again — if you understand your losses and map them against your policy, you can see where you're missing coverage, or where you can maybe cut some coverage and add it somewhere else.
So how do we use FAIR? FAIR uses a value-at-risk model. It allows for the use of uncertain information, so we use estimated data ranges alongside levels of confidence. For every figure or estimate, we use a minimum, a maximum and a most likely value. The aim is to be accurate, not precise: by chasing precision you can get a false sense of certainty in your results. So it's about getting a useful level of precision, as we say, but using accurate data when we determine those ranges is key. Once we have our ranges — I'll go through the FAIR model itself in a minute — we make use of Monte Carlo simulation. It can run thousands of scenarios that evaluate the estimated loss ranges, and then we use the resulting probability distribution to look at potential future loss amounts. That gives us our most likely and average loss amounts and their likelihood.
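Just to make that min / most likely / max idea concrete, here is a minimal sketch in Python — not FAIR-U or any of C-Risk's tooling, just a toy example with invented figures. It samples a per-event loss from a triangular distribution as a simple stand-in for the calibrated PERT-style distributions practitioners typically use.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000  # number of Monte Carlo trials

# Hypothetical per-event loss estimate (EUR): minimum, most likely, maximum
loss_min, loss_ml, loss_max = 50_000, 200_000, 1_200_000

# Triangular distribution as a simple stand-in for a calibrated estimate range
loss_per_event = rng.triangular(loss_min, loss_ml, loss_max, N)

print(f"mean loss per event:   {loss_per_event.mean():>12,.0f}")
print(f"median loss per event: {np.median(loss_per_event):>12,.0f}")
print(f"90th percentile:       {np.percentile(loss_per_event, 90):>12,.0f}")
```

The point is that the input is always a range plus a most likely value, and the output is a distribution you can read percentiles from, never a single "precise" number.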
So, expanding on the four steps I showed earlier. First, we define our risk scenarios. The key things to do to get this right are to understand the business context and the value chain — what's important to the business and how the business works. Doing workshops with key stakeholders across the business is key, and if there's an existing risk register, or findings of control gaps or issues, we can make use of that. This all helps us identify the crown jewels, the assets.

When it comes to quantification of the scenarios, as I said, we evaluate the loss event frequency, which is the likelihood of the scenario, plus the loss magnitude, which is the impact. This can seem daunting, but the first thing to know is that you have a lot more data than you think. I'll go through some examples, but in any company everyone has suffered incidents; business teams have some idea what the impact would be if there's downtime in an application; and if you've done disaster recovery testing, you know what your recovery times are. Those are just some examples. Using the taxonomy helps you break things down so you can explain them to the people you're meeting with. Then, as I said, you estimate a range of values for each type of loss and use Monte Carlo to calculate the annual loss exposure.

You can then interpret and present the results. To do this, crafting clear risk statements — clear risk scenarios — is key, and you can tailor the message to the audience, whether that's senior management you're looking for more investment from, or another team where you want to highlight a potential security issue, like was discussed in the previous presentation. You can recommend actions, whether that's control improvements, mitigation or risk transfer, and then tie that back to the decision being made. If it's a cyber insurance review, tie it back to the decision: do we need more coverage, yes or no?
So let's go back to that first step, defining a risk scenario. The FAIR model breaks this down to answer the question: how much risk do we have from this scenario? Every scenario has three key features. You need a threat agent — that can be cyber criminals, a privileged insider (malicious or accidental), or a nation state — and you can also include the vector, how they're going to access your asset; phishing has been mentioned a lot today, and that's probably the most common one. You need an asset: your crown jewels. That can be an application, it can be data such as PII, it can be intellectual property. And then that results in the loss event, which we usually express using the CIA triad: a loss of either confidentiality, availability or integrity. As a brief example: cyber criminals attack a PII database, and that leads to a loss of confidentiality.
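To make that structure concrete, here's one way you might capture a scenario as structured data. This is an illustrative sketch only — the field names are my own, not part of the Open FAIR standard.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    threat_actor: str  # who: cyber criminals, privileged insider, nation state...
    vector: str        # how they reach the asset: phishing, stolen credentials...
    asset: str         # the crown jewel at risk: application, PII data, IP...
    effect: str        # CIA effect: confidentiality, integrity or availability

# The example from the slide
scenario = RiskScenario(
    threat_actor="cyber criminals",
    vector="phishing",
    asset="customer PII database",
    effect="confidentiality",
)
print(scenario)
```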
Using the FAIR taxonomy — I hope you can read it — risk is broken down into likelihood and impact. As you can see, you can go down through different layers of the model. I'll go through each in a bit more detail, but you can see it's a mixture of financial figures, percentage likelihoods and counts.

When we look at likelihood, it starts from the loss event frequency at the top level: the number of loss events you'll experience over the next year. This is made up of the threat event frequency, which in turn is made up of contact frequency and probability of action. Contact frequency is the number of times a threat actor will come into contact with the asset — for phishing, it's how many phishing emails make it through your email security system each year. Probability of action is then, say, how often employees click the link in those emails, just as an example. Once you combine those, you get the threat event frequency: the number of times a threat actor will attack or attempt to cause harm to an asset. On the other side we have vulnerability, the percentage of those attacks that will succeed and cause a loss. That is determined by threat capability — how capable your threat actors are; for something like privileged insiders it's going to be pretty high, because they already have valid accounts and are well capable of doing it — and by resistance strength, which is how good your defenses are at protecting you against these things.

One of the key things here is that you don't have to start at the bottom. You can start at any of these levels, depending on the data you have or need. Most often we work at the loss event frequency level; companies either don't have some of the lower-level data, or maybe don't want to dig into it that carefully — it depends how experienced they are with risk quantification. But I'll go through an example later, and we'll do it at the loss event frequency level.
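To show how those layers combine, here's a hedged back-of-the-envelope sketch using the phishing example. All the figures are invented; in a real analysis each would be a calibrated min / most likely / max range rather than a point value.

```python
# FAIR likelihood side, with invented point values for illustration only.

contact_frequency = 500        # phishing emails per year that get past email filtering
probability_of_action = 0.04   # share of those where a user actually clicks the link

threat_event_frequency = contact_frequency * probability_of_action  # attempts/year

# Vulnerability: fraction of attempts that become loss events, driven by
# threat capability vs. the resistance strength of your controls.
vulnerability = 0.05

loss_event_frequency = threat_event_frequency * vulnerability  # loss events/year
print(f"TEF = {threat_event_frequency:.1f} attempts/year")
print(f"LEF = {loss_event_frequency:.2f} loss events/year")
# Risk (annualized loss exposure) is then LEF combined with the loss magnitude.
```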
On the impact side — again, this is the area where you have a lot more data than you think — the losses are broken into primary and secondary losses. Primary losses are a direct result of the incident. Secondary loss is the stakeholder reaction to the incident: that's often your regulators, it can be shareholders who are not happy, or, if there was a PII data breach, it can be those affected launching a class action lawsuit against you. This is then further broken down into the secondary loss event frequency — how likely those secondary stakeholders are to react — and the secondary loss magnitude.

Sticking with loss magnitude, there are six forms of loss, split between the most common primary and the most common secondary losses. I won't go through the examples in detail, but at a high level: productivity is staff being unable to do their job — they come in after a ransomware attack, all their endpoints are locked, they can't work, and the company is losing money paying people to do nothing. Response costs are forensics, legal, incident response teams — all of those, both internal and external. Response can also be a secondary loss: quite often, if there are regulators involved or a court case, your legal teams will be engaged because of that secondary stakeholder reaction. Replacement losses — tying back to ransomware again — are when your laptops, servers or anything else are locked and have to be replaced; that's a replacement cost. Secondary losses are primarily driven by competitive advantage, fines and judgments, and reputation. Competitive advantage loss usually occurs if something like intellectual property is lost: you have a new product coming to market that's been leaked, and you lose that competitive advantage. Fines and judgments cover regulators, but they can also be contract penalties if you suffer downtime and your customers are affected, or people taking legal cases in the case of a breach. Reputation damage is negative perception: will some of your customers leave you if you suffer a data breach? Will future customers not use you because they don't trust you as a company?
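One simple way to keep track of those six forms during a workshop is to record an estimate for each and tag it primary or secondary. The categories below are FAIR's; the structure, the stakeholder-reaction probability and all the figures are my own placeholders.

```python
# Illustrative per-event loss magnitude for an availability scenario, split into
# FAIR's primary and secondary forms of loss. All figures are placeholders (EUR).
primary = {
    "productivity": 120_000,   # staff unable to work during the outage
    "response":      60_000,   # forensics, legal, incident response
    "replacement":   25_000,   # rebuilding or replacing locked endpoints
}
secondary = {
    "competitive_advantage":       0,   # no intellectual property lost here
    "fines_and_judgments":         0,   # no data breach, so no regulatory fines
    "reputation":            150_000,   # churn from customers who lose trust
}

# Secondary losses only materialise if stakeholders react to the event.
secondary_loss_event_frequency = 0.4   # assumed probability of a stakeholder reaction

loss_magnitude = sum(primary.values()) + \
    secondary_loss_event_frequency * sum(secondary.values())
print(f"Expected loss per event: {loss_magnitude:,.0f} EUR")
```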
Once we combine these, this is how we'd model the risk scenario. Across the top we have the threat, the asset and the loss event. On the loss event frequency side you can see how likely that loss event is to occur, and I've listed some example controls at each stage. Then the event occurs, and you have your primary and secondary losses; this is where your response controls, for example, come in to help reduce that loss magnitude.

So, using an example — I don't want to run over time, but I'll bring this back. In this example, a customer is concerned by the potential impact a malware attack could have on their business and has defined the scenario on the next slide. Their scenario: cyber criminals deploy malware via phishing; it propagates to a large number (20% or more) of their workstations and results in a major incident impacting the availability of their Windows workstations for one to seven days. As I said, that min/max range we use runs the whole way through, so we have our minimum and maximum days of outage as well. The asset is the employee endpoints — in this case Windows — the attack vector is phishing, the impact is availability, and the threat actor is cyber criminals with a high degree of capability. I think I forgot to say it earlier, but it's key to limit the scope of your scenario where possible.
If you're worried about multiple threat actors, you should run multiple scenarios, because the likelihood and impact are likely to vary between them. If you have multiple assets, again, it becomes more difficult to measure when it's spread across assets, so it's best to break it down into individual assets, and the same goes for the impact.

So, estimating the loss event frequency: you're primarily going to get your data from both internal and external sources. Some examples of internal sources are past incidents or near misses — in this company they had a laptop infected two years ago that didn't spread any further because they managed to contain it, and three phishing incidents in the last couple of years that reached user mailboxes, where users clicked the link but didn't input anything. There's also what controls are in place — they have EDR, MFA and strong user access controls — and known gaps or weaknesses: their patching SLAs are often not met, and they don't do phishing simulation testing. These internal factors we gather from workshops or other data sources we can use. We then tie that together with external data points, such as industry research papers and threat intel. There are lots of good research papers out there — I've mentioned one — that do really detailed analysis, broken down by industry, of how likely companies are to have cyber incidents.

So, for example, we estimate the loss event frequency as a minimum of once every 20 years, which is a 5% chance in a given year, most likely once every 10 years, and a maximum of once every 5 years. As it says at the bottom, it's always very important to document your assumptions and your rationale and share them with the stakeholders, especially when you're doing these interviews. If you don't include the source, it's very difficult to defend the data, and that's one of the key things with cyber risk quantification: the data you use has to be defendable, it has to be backed up. So if we make an assumption that something is, say, 10% likely, here are all the data points we used. Some people might disagree with it, and that's part of the calibration at the end when you meet the different stakeholders, but once that data is there and defendable, it makes things easier.
So in this scenario we're talking about malware: which losses are affected? Productivity is going to be affected — employees can't do their job. There's the response to the incident, and replacement of hardware. On the secondary side, there's likely to be some sort of response, maybe dealing with regulators or with customers through contracts. Competitive advantage: in this case, no. Fines and judgments: in this case, no, because we're saying there was no data breach. Reputation damage: most likely yes — if you're out of action for seven days, some customers are probably going to move to an alternative, or, if you're an e-commerce website for example, someone's going to go and buy somewhere else.

Once we know which losses apply, we can work with the internal teams to get that information. Some examples: how much would it cost per hour if staff were unable to work? What's the cost of our internal teams handling this? Do we have contracts with, for example, forensics companies to come in and help us, and what are their costs? For hardware replacement, we know how much our laptops cost, so we know what it would cost to replace them. For secondary response, we know what our legal team or legal fees cost. For reputation damage, we can speak to our customer service teams about how likely they think customers are to leave — if we lost three big customers, how much would that cost us? We can work with the business teams to establish these ranges. Once we've done that, as I said before, we run it through the Monte Carlo simulation and get our loss per event and our likelihood.
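Pulling it together, here is a minimal end-to-end sketch of the kind of simulation being described: sample how many loss events happen in a year from the LEF range, sample a loss magnitude for each, and read the annualized loss exposure off the resulting distribution. Again, these are invented figures and a simplified model (triangular distributions, Poisson event counts), not C-Risk's or the FAIR Institute's actual tooling.

```python
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 20_000  # number of simulated years

# Loss event frequency (events/year): min, most likely, max from the example
lef_samples = rng.triangular(0.05, 0.10, 0.20, TRIALS)

# Per-event loss magnitude (EUR): min, most likely, max from workshop estimates
def sample_losses(n):
    return rng.triangular(100_000, 400_000, 2_000_000, n)

annual_loss = np.zeros(TRIALS)
for i, lef in enumerate(lef_samples):
    n_events = rng.poisson(lef)        # loss events occurring in this simulated year
    if n_events:
        annual_loss[i] = sample_losses(n_events).sum()

print(f"Chance of at least one loss event in a year: {np.mean(annual_loss > 0):.1%}")
print(f"Average annual loss exposure:                {annual_loss.mean():,.0f} EUR")
print(f"90th percentile ('bad year'):                {np.percentile(annual_loss, 90):,.0f} EUR")
```

The 90th-percentile figure is the kind of number that ends up on a loss exceedance chart when the results are presented back.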
These are just some examples of how we present results. The bottom two charts are for when we analyze multiple scenarios: we chart them to show which are the most likely and which would have the biggest impact. You can go into more detail, like the one in the top left, where you break down how you can reduce the likelihood and the impact by implementing or improving certain controls.

Some other scenario examples — I've broken these down by the CIA triad. Availability can be anything from a system outage or a hardware failure to a malicious internal employee who got fired and decided to shut everything down; it could be a natural disaster, or a DoS attack taking down your e-commerce website. Confidentiality covers the various data breaches — PII, sensitive customer documents, intellectual property — and things like AI risk: we're increasingly seeing people worried about what happens if staff upload data to a free AI tool like ChatGPT. All of these can be broken down and quantified. Integrity is then your fraud side — financial fraud using the likes of deepfakes and AI — and it can also include data misconfiguration affecting the quality of what's being used.
AI training data is another one: as more and more companies use LLMs and develop their own AI products, what happens if that training data is compromised or biases are introduced into it?

So, to summarize: there are lots of benefits to quantifying your cyber risk — communicating risk in business terms, helping you make data-driven decisions, helping you prioritize your risks. It's an open source standard with a defined taxonomy, and I think those are key. Anybody can go to the links at the bottom — the FAIR Institute or The Open Group — read the material and start using it. There's also a great tool, I forgot to include the link here, called FAIR-U; you can Google it if you want to try it. You can input the figures and it runs the Monte Carlo simulation for you.

Key things to remember: define and scope your scenario — every scenario has to have a threat, an asset and an impact. Document all your assumptions and rationale so you can defend the data. If you're thinking about this in your company, you always have more data than you think; it can seem daunting at the start, but the data is often there, it's just a matter of finding the right people to get it from. And it's about accuracy, not precision. Nobody is predicting that you're definitely going to have a ransomware attack in five years; it's about the percentage likelihood of having one in the next five years. If it does happen in five years' time, that doesn't mean the analysis was wrong — it's estimation, not prediction. I've linked the FAIR Institute and The Open Group there, where you'll find lots of great information and more detailed case studies as well. Yeah, that's it.
So, does anyone have any questions, or is everyone eager to take over?

[Audience] Are there any open source platforms or implementations?

Yeah. So, like I mentioned, FAIR-U is an open source training tool to introduce people to FAIR. There are also — I'm not a coder or anything like that, but there are repositories in R and Python that you can use to develop your own tool. There are a few other platforms that give you free trials, but they kind of want to get you using them more. And there are a lot of free Monte Carlo tools out there, including in Excel: you can just input the figures and it will run the simulations for you, and you can use it that way. Yeah.
[Audience question, largely inaudible]
Yeah — sorry, so I guess the question, to try to summarize it, is: where you're lacking data, how do you make those estimates? That's the key feature of using ranges for each part of the model: the more uncertain you are, the wider the range is going to be. If something has never happened before, it's often going to be a wide range. And when we run the simulations in the Monte Carlo analysis, it also provides ranges — min, max, most likely — in its output. So it's never a single figure like "5% chance it's going to happen"; it's going to say 5 to 15%, most likely 7, depending on the distribution curve I showed — that's where the most likely figure comes from.

As regards where you don't have the data, that's where the external data sources are good. Even if you're just looking at cyber incidents, there are a lot of great reports done every year on how many incidents occur, whether via phishing or, like you said, SQL injection attacks — all these things. There are lots of good reports out there, and they usually give a range as well, like a 7 to 10% chance based on your industry, or your company size by revenue, et cetera. So I would often recommend using those external data sources as a starting point if you're unsure, then working with the teams internally to ask: does this sound right? If not, why not? Maybe it doesn't sound right because we have control X here that most companies don't have, or because we're missing a basic control like MFA, so our likelihood is way higher. If you're missing data, I'd always start with the external sources to give you that base range — you could research for days and hours and the range might only narrow by 1% on each side. That's the joy of Monte Carlo: by running the simulation 10, 20, 30, 40 thousand times, it runs every kind of case in between, so it gives you that range in the results as well.

Okay, thanks everybody.