
So, good morning everybody. It's still just about morning. Thank you all for coming to my talk today at what is admittedly the end of global insider threat awareness month. I hope you all enjoyed your coffee, tea, and pastries, and hopefully you're not going to be too bored by my talk. Next. So very briefly, who am I and why do I think I've got the right to stand up here and talk to you all about this? I've got over 30 years' experience in cyber security and IT: vulnerability management, identity, and primarily insider risk, particularly over the last 12 to 13 years. My role is to help architect insider risk programs for clients, and one of the things I do is make sure I'm not just there to say "buy this technology", because if you buy the technology without putting in the right people and processes to support it, you would have been better off just saving your money. It does require a variety of skills. I have a technology background, although it's been quite a while since I was elbows deep in the tech. I'm very familiar with privacy concepts, and I'm also quite good at psychology; if you ask my daughter, she'll tell you I can always guess what people are up to and think of really interesting punishments sometimes. There's no typical day for me. You can see my challenges there: I'm a mom of five, an enthusiastic singer, as I have inflicted on many of you, and a foodie who loves travel and fast cars. So that's who I am and why I'm here.

What I'd like to do first is get everyone on the same page so we understand what insider threat is. Everybody thinks insider threat means bad people, and bad people can certainly be an insider threat, or an insider risk, but it's actually far broader than that. The reality is that only about 5% of insider incidents involve malicious insiders, people who deliberately want to do something bad or wrong. The majority of insider risk problems relate to people trying to get their job done and making mistakes, or people whose credentials are compromised. The one trait every insider risk has in common is that these are people with legitimate entitlements: they need to be in these systems, and that access can be abused, compromised, or misused by mistake. So just to clarify, it's not always bad people. That brings me to the first myth, and I've found
this at a number of companies where I've set up programs, trying to convince people why an insider risk program is important for them and where the value is. A number of people have said to me, "We don't have insider risk here. We don't hire bad people." And I have to say to them: I get that you have that impression, and it would be a very interesting corporate strategy if you did deliberately go out to hire bad people, unless you were trying to set up a bank-robbing firm or something like that. However, the reality is you do hire people, and people make mistakes. People are human. You tend to find that the people who make the mistakes are normally your best performers, because they're normally very busy, trying their hardest, and trying to do more with less. So a good insider risk program not only protects the organization, it can also be really useful in protecting those really strong employees, because you're putting safeguards in place to detect when things might have gone wrong, or when there are broken business processes that need fixing, so that you can help them not become responsible for a data breach. The bad people, as I say, are generally a very small percentage, but when they do strike it can be quite devastating.
You've seen situations in the press where somebody is particularly unhappy about being laid off, and the first thing they've done is logged in, because their account credentials haven't been cancelled, and reset the entire firm's servers to factory settings, which can be a really interesting experience. So how do you identify potentially bad employees? There is a perception that they are people in a desperate situation, and some companies act on that: they'll regularly credit-check their staff, because the theory is that if you're starting to get into financial trouble, that can be when you can be compromised.
Now that can sound like quite a good idea, and leaving aside that it's not a very nice thing to do, the reality is that different people deal with stress in different ways. So it's actually not a very good way of finding a malicious insider, because one person might be upset that they owe £100, whereas another couldn't give a damn if they owe thousands. I'm sure you've all heard of MITRE; they brought out the really good ATT&CK framework that's used in SOCs and elsewhere for working out where threat actors come from. They did some research and established that for insider risk the ATT&CK framework doesn't really work. But what they did identify is that one of the biggest trends for a malicious insider is somebody who feels undervalued. Maybe they feel their contribution isn't recognized; maybe they feel they haven't been given a pay rise or a bonus when they deserved it. That's actually the biggest trigger for somebody becoming a malicious insider. I talked as well about the person who was made redundant unexpectedly, or who was fired; that can be a trigger. Another thing to look at is somebody who is underperforming and rule-breaking. For example, if you work in a bank, people on trading desks who constantly exceed their limits, or don't keep to those types of limits: that can be a sign of a malicious insider, or at least of somebody who thinks the rules don't apply to them. People who don't do their compliance training, because it's boring as hell; the reality is there's a regulatory reason why we need to do it, whether we're doing it to keep our clients happy, to keep regulators happy, or just because it's the right thing to do, because we aren't experts in those fields. People who say, "I don't need to do the compliance training; the rules don't apply to me." Silly little things like that can be an indicator of a malicious insider: somebody who thinks the world owes them a living, rather than "I'm here to do a job and get paid a fair wage for it."

One of the best mitigations they established for this is to make your staff feel valued. Sometimes you bring these programs in to make people feel valued and they go, "You could just give us a pay rise, or pizza lunches," and I can see that perception. But actually taking the time to say thank you, and making sure that good behavior is recognized and rewarded evenly throughout an organization, can bring people together even when you can't necessarily afford big pay rises because it's a difficult market. Making people feel like a team makes them less likely to become an insider, because they're going to work together and they know they'd be hurting their colleagues, the people they work with; it's not "the company", it's "my friends, my colleagues". And it's good for the rest of us who are employees too. Getting that message across to managers and senior leadership, the C-level, is a really, really important thing. Make your staff feel
valued, and you're less likely to get a malicious insider.

The next myth you get is that it's a technology problem. People always think insider risk is a technology problem, and it's a common misconception. The reason is that so much of the data we hold today, and so many of the services we deliver, are delivered by technology, so it's easy for somebody to think, "I'll let IT deal with that." But the reality is, if you do that, they'll go away and buy some really expensive tool, and it will be as much use as a chocolate teapot, because you haven't got the right people and processes around it. Like I said earlier, if you don't implement the right controls and the right people and processes around this technology, you might as well have saved your money and not bothered. People and processes can be more important than the technology. And it's not just a technology problem, because the data doesn't belong to IT. It belongs to the rest of the business. It belongs to the individuals who work at the company. It belongs to your clients. The people who are impacted by any data breach are those people: your customers. So it isn't a technology problem. To have an effective insider risk program, you have to engage with other stakeholders right from the get-go. HR is an obvious one. The business data owners, again, obvious. But the people who tend to get forgotten are the legal department: employment law and privacy, particularly if you work in a large global corporate. By getting those people engaged, they know what to do when you've got an issue, before you even start rolling out technology. It can feel a bit of a nuisance and it can be a challenge, because lawyers think differently from technologists, so being able to communicate what you're doing and why can be an interesting experience, trust me. But if you can manage it, you will reap the benefits, because these people will be your supporters when maybe things don't work quite right, or when staff say, "There's a problem here; I don't really like this going on." These will be the people who can help you navigate what you need to do in order to deploy that technology.

And that goes right on to the next myth: you'll never deploy this in jurisdictions like Germany. There is a perception that in places like Germany and France you can't deploy monitoring. This goes back to very different attitudes in some European countries, and this sounds dreadful, but it goes back to the war. In the US, for example, you have stronger whistleblowing laws than you do in the
UK, whereas in places like France and Germany people get really upset at the thought of whistleblowing. What you can do there is report something, but you have to go through an independent auditor, who checks there's no ulterior motive behind it. And it does go back to the war, culturally, because that's when neighbors would report on neighbors. Having said that, those countries also recognize that we have a responsibility to keep data safe, whether it's our intellectual property or, even more importantly, the personal data of the people who work for us and the personal data of our clients. The steps I've always taken to be able to deploy things like this in Germany are, going back to what I said earlier: you define all of your processes and justify what you're doing before you even start, and then you work with the works council. The approach I've taken in the past is very much to go from the data. We've got PCI data, that's credit card data, and we need to be able to protect it; there's a regulatory driver behind that. So what do we need to do? What does it look like? What's an acceptable use of this data? What's an unacceptable use? And then, how am I going to monitor for just that specific piece of data? I don't care if anybody's going to see their mom that evening. I don't care what anybody's doing on their holiday. But I want to know if somebody's sending credit card details externally. That's the type of approach to take. And make sure you're not just going to turn on monitoring rules and leave them there forever and a day: review them regularly, refresh them, and make sure they're still proportionate. Again, this goes back to working with the privacy and employment lawyers; they will help you understand what you can and cannot do in different jurisdictions. For example, when I deployed something in Poland, what we had to do was update the workers' document, the regulamin (my pronunciation is dreadful; for anybody who's Polish, I apologize), and that gives you the ability to go and deploy monitoring. So you will be able to deploy stuff like this; you just need to take the steps first. You can't do what some American companies do and say, "You know what, I'm just going to push this out; they've signed a confidentiality clause, they're just going to have to live with it." That does not work.

Coming on to the next one now, this is a really important one. If you go back to what
I said earlier about making people feel valued, one of the biggest challenges when you start to deploy monitoring is that people feel someone's going to sit and read all their email, and isn't this a bit like Big Brother? That can make people nervous, which is why you have to be completely transparent with your staff. Explain to them what you're going to do and how it's going to work. Not too specific, because otherwise they can find ways to get around it, you know how people are, but explain the advantages to them. Think, for example, about the people I mentioned earlier who you wouldn't want to see become responsible for a data breach, but who are working really hard, are busy, and are being asked to do more with less. This is there to protect them as well. Sometimes you'll find whole teams of people who have been working through a particular business process for years and years, never realizing there's a security impact there. Good monitoring can allow you to spot that before it becomes a problem. When anybody raises the Big Brother worry with me, the first thing I always say is: yes, they're going to hire me an entire room of elves to sit and work 24/7 reading all of the emails. The reality is, particularly in large organizations, you couldn't do it if you tried. What you have to do is explain some of the really specific steps you go through and bring people on board. Don't try to con them, because otherwise people will talk, they'll make up answers, and they'll make them up at the coffee machine. Once somebody decides in their mind that this is the way the world is, that will be their perception. That perception may not be fair and it may not be accurate, but it will be their reality, and it will be a barrier that you have to get over. So really try to be completely transparent with the staff.
It has to go in the privacy notice, but not just the privacy notice. Send notes by email and explain why you're doing it. Make sure you have Teams meetings where people can just come on and ask questions. I always try to be a bit more like Big Sister than Big Brother, and quite often that can be quite impactful, because if I can build trust with people, they're happier letting me monitor what they're doing; they start to realize that I'm actually in their corner if they're not doing anything malicious.
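The data-first approach from the PCI example, flag only credit card data heading to an external address and ignore everything else, can be sketched roughly as below. This is purely illustrative: the domain, function names, and regex are my own assumptions, and a real deployment would express this as a rule inside a commercial DLP or monitoring tool rather than hand-rolled code.

```python
import re

INTERNAL_DOMAIN = "example.com"  # hypothetical placeholder for the firm's domain
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # candidate card-number runs


def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out random digit strings (phone numbers etc.)."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0


def flag_outbound_pan(sender: str, recipients: list[str], body: str) -> bool:
    """Alert only when a plausible card number is going to an external address."""
    external = any(not r.endswith("@" + INTERNAL_DOMAIN) for r in recipients)
    if not external:
        return False  # internal mail is out of scope for this narrow rule
    return any(luhn_valid(m.group()) for m in PAN_PATTERN.finditer(body))
```

The point of the sketch is the scoping: personal chatter never triggers it, only the one regulated data type the rule was justified for, which is exactly the proportionality argument you take to a works council.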
So the final misconception you find is when somebody says, "OK, we're going to roll out an insider risk program, and when we deal with the incidents, we'll just give them to the SOC, because they deal with all the other cyber incidents." Now, the reality is the SOC should absolutely be involved in the triage of insider risk incidents, but they shouldn't be the main people managing them. The reason is that even if it's somebody whose credentials were compromised, the people impacted by these types of incidents are your employees, your contractors, or your third parties; they are people who work for you. If you see somebody scanning your external network and maybe trying to get in, as a SOC person you see that alert in your SIEM and you want to take action to try to prevent it. If you do that for an insider, you can actually compromise your ability to take action later on. If it is a malicious insider and you want to do things like take them to court, or go down the disciplinary route, you need to capture the evidence in a way that's forensically sound, and when you're rushing to pick things up and solve the problem, SOC analysts don't tend to think that way. Sometimes, as well, you might need to deal with law enforcement, and it's about capturing the data in the right way so that you can share it with them. Think about police forces: they don't have the same people do the forensics as do the investigation. They have detectives do the interviewing, and quite often that's a skill where you need to think about how people are and how they react; it's about asking open questions. Whereas if you're doing forensics, which the SOC will typically be quite good at, you need to be very precise; it's very binary. But trying to find out exactly what's happened is a different skill set. You do find some unicorns who have both, but there aren't many of them. And generally you want that level of segregation of duties: if I'm managing an insider risk incident and I need more information, so I need somebody to do forensics for me, I shouldn't be able to just go and do that myself. What I need to do is get approval from employment law and privacy, and then I can ask the forensics team to do it. That makes sure there's a level of control so that nobody can take advantage of excess entitlements or spy on people. And again, when you go back to your works councils, that can be a really effective way of reassuring them that controls are in place and their data is not at risk.
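That approval flow, where the case manager cannot trigger forensics on their own, can be sketched as a simple gate. This is an illustrative toy, not any product's API; the role names and class are my own invention under the assumptions described above.

```python
# Roles that must sign off before forensic collection may begin (assumed names).
REQUIRED_APPROVERS = {"employment_law", "privacy"}


class ForensicsRequest:
    """A case manager's request for forensic collection on an insider case."""

    def __init__(self, case_manager: str, analyst: str):
        if case_manager == analyst:
            # The person running the case must not collect their own evidence.
            raise ValueError("case manager cannot act as their own forensic analyst")
        self.case_manager = case_manager
        self.analyst = analyst
        self.approvals: set[str] = set()

    def approve(self, role: str) -> None:
        """Record sign-off from one of the required control functions."""
        if role not in REQUIRED_APPROVERS:
            raise ValueError(f"unknown approver role: {role}")
        self.approvals.add(role)

    def may_proceed(self) -> bool:
        """Forensics may start only once every required role has signed off."""
        return REQUIRED_APPROVERS <= self.approvals
```

The two checks mirror the talk's controls: separation between investigator and forensic analyst, and a hard requirement for employment law and privacy approval before anyone touches an employee's data.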
So in summary: work hand in hand with key business units and key stakeholders like privacy and employment law when you set things up. When I built the program at Thomson Reuters, before we even did anything, I worked with the privacy team and, and I acknowledge not every company can do this, we went out to local counsel everywhere we had a significant presence globally, and we asked them the same questions: what employment law regulations apply in this country, and what privacy regulations apply? And we renewed that information regularly. That's important because it makes sure you're not going to break the law by deploying insider risk monitoring, because if you don't do those steps, the reality is that instead of reducing risk you can actually end up increasing it. By carrying out those steps and doing your due diligence, you'll find the rollout much more effective: you can answer people's questions, and if you need to go to court, you've got it to back you up.

Always develop your processes and procedures well in advance, and review them regularly. This again is really critical because, going back to the first slide and "we don't hire bad people": the minute that first incident comes in, everybody will be going, "Oh my goodness, we thought we didn't, but look at this. This is dreadful. The sky is falling," and everybody wants to hang the person. Whereas when you actually do the investigation, it might be that their credentials were compromised, or that they simply made a mistake. The last thing you want to do is disenfranchise people because you treated somebody badly and excessively. In addition, when the incidents first start coming in, and they do come in quite a bit when you first turn on monitoring, you'll identify a lot of broken business processes. What you want to do is work with the departments to fix those. You don't want to go around saying "you're bad, and you're bad" and raising everything to senior level; you'll create alert fatigue if you keep going to the C-suite saying, "We've had another one, we've had another one." So make sure you work with the company's risk appetite. Establish what they would call a major incident, document it, and, in the same way you'd run a tabletop exercise for other incidents, run one for insider risk as well, so that everybody involved knows what they need to do in those situations. That makes it far more effective than playing catch-up. I learned that the hard way the first time I got involved in insider risk, and as a result, now, whenever I do anything, I always make sure those processes and procedures, my use cases, my policy life cycle, everything, is in place before we even switch anything on.

And last, but by absolutely no means least: always assume positive intent. This is a lesson I learned a couple of years ago, and I find it really resonates with me in everything I do from a work perspective. Even if you bump into somebody in the street, or you've got somebody who's a bit of a pain at work and you think, "Oh god, they just don't like me," it could just be that they're having a bad day, or they're busy, or something like that.
So when that incident happens, particularly if you may be in a situation where you need to take legal action, it's really important that you triage it in a way that is completely objective and that you do not assume guilt. Everything has to be evidence-driven. This is absolutely critical. One, because it puts you in a stronger position if you do need to take action: you can demonstrate you haven't made a supposition about what somebody was doing. Two, under regulations like GDPR, you're supposed to have a person look at anything that's automated and check that what looks like it happened actually happened. And on top of that, by assuming positive intent and sharing that impression, you make people far more likely to come to you, and to trust you, and that makes for a far more effective insider risk program. Other than that, there we go: nearly dead on time. Has anybody got any questions? Here we go, I'm desperate for people to ask questions. I caught it.

>> What's the worst or the weirdest insider threat that you've ever dealt with?

>> There have probably been a couple. One was during COVID (please don't look up where I worked during COVID). We found some really sensitive spreadsheets going externally to somebody's Gmail, and we looked into it. It turned out his son was having problems getting work experience, so he had emailed the spreadsheets to himself to give him some work experience. The other one was the first incident I ever got at one of the companies I worked at. You know what people are like: they leave and they want to take the stuff they've created. We literally turned this on, and within less than 24 hours we had an alert come up about credit card numbers leaving the company. It turned out this person had sent themselves a spreadsheet which had an embedded spreadsheet, which had another embedded spreadsheet in it, which had credit card numbers in it. He didn't even realize that was what he'd done; he just wanted to take the intellectual property. Now, neither of those is good, but the credit card numbers were a real problem. So those are a couple of really good examples of people just being careless, and they do happen a lot.

>> If I can ask a follow-up question.

>> You may.

>> In terms of percentage, how many have children?
>> I believe, and I've got the figures here because there was a Ponemon study that Proofpoint sponsored a couple of years ago, it was something like 55% carelessness, and actual malicious insiders are about 2 to 5%. It's really not very common at all. The problem is, when you look at the cost of a data breach, carelessness versus malice, the malicious insider costs a lot more, whereas carelessness you can quite often pick up. The other one that's becoming a lot bigger is where people get their credentials compromised; that's when you get things like business email compromise, where people can change account details, and that can be really expensive as well. That's why you do things like phishing training. Here we go, let's see if I can throw this. I think you might have to throw it on after; it's very soft and doesn't hurt very much.

>> My question is, and thanks for your very lovely talk: how do you deal with, for example, a global organization where you have countries in which you cannot monitor users? You're speaking of insider risk management; how do you balance that out in such places?

>> So if you really can't monitor, and I've not found many countries where you can't, maybe Russia, for example, and they might object because they're too busy doing it themselves...

>> The Netherlands is one of them.

>> Well, actually, no, I have deployed it in the Netherlands; we worked with the works council. In most places in Europe, I can't think of anywhere you truly can't; there are just certain nuances. For example, in France, if somebody puts "personal" in their email, you can't monitor it; you have to put a rule in to say that you won't. I've always found that if I work with the works council and I do the procedures first of all... where I might not normally share generally how the use cases work, I do share them with the works council, because they've got a legal obligation to keep everything they're given confidential. I generally find that if I share the use cases and the processes with them, and if there's going to be an investigation we make sure they're informed, that works, and doing it in advance can be effective. If you think about GDPR, it requires you to have, what's the phrase, appropriate technical and organisational measures to protect the data. Executed in the right way, monitoring can actually be one of those technical and organisational measures, but you have to do things like justify the processing and carry out a data protection impact assessment. This is why I talk about working with the privacy team. So that's
how I've dealt with that.

>> Just to follow up on that: when you have an eDiscovery request, for example, that impinges on other people's rights, or maybe not rights, there's a sort of grey area in that, how do you navigate it?

>> Again, I navigate it in advance. When I rolled it out in Poland, that's why I said we did the research: we had to change the workers' document. There was a case a couple of years ago where somebody took their employer to court because they'd been fired after being caught doing something, but the company allowed reasonable personal use. So my recommendation to most companies is: don't allow reasonable personal use. Everybody has a smartphone these days. Maybe not in some poorer countries, but in places like Europe, the UK, and the Americas, most people have smartphones, so they can access their personal email and so on; they shouldn't need to use firm systems. Now, you can choose to ignore it if they do, but actually taking that right away allows you to take action. That sounds a bit draconian, and I find a lot of people don't like doing it, because they say, "We've got people who work really long hours; they should be able to order their shopping," and that's fine.
>> I'd define it instead.

>> Yeah.

>> I'd define "reasonable" in that sense.

>> Exactly. That's why I don't. The other thing is, you'll find a lot of the information commissioners prefer it if you block rather than allow reasonable use and monitor it. So that's another way to look at it: whether you block stuff and tell staff it's not going to be accessible. There's a lady there, and a gentleman there. Sorry.

>> Hi. Doing tabletop exercises: could you give an example or scenario?

>> Yeah, just literally say, "OK, we've seen this information going externally," the same as you might do with an incident where you've got a social engineering event or something like that, and just work through it. But whereas you'd normally have technical stakeholders, make sure you get privacy, employment law, and HR in the room. Well, thank you everybody. [applause]