← All talks

2015 - William Knowles - Yes, penetration testing might need standardisation...

BSides Manchester · 55:50 · 199 views · Published 2015-09 · Watch on YouTube ↗
About this talk
You’ve read the title and you’re panicking. Don’t. This isn’t a talk about having a standardised methodology for conducting penetration tests. It is, however, a talk about the way the industry offers services and what clients receive in return. Standardisation here might be good not only for clients, but also the individuals delivering these services (such as yourself). This talk is based on a market analysis (involving 54 stakeholder interviews) by the British Standards Institution (BSI) and Lancaster University.
Show transcript [en]

You made it to the last talk of the day, so congratulations on that. And you're either still awake or you're doing a very good job of pretending you're still awake. We saved the best till last. William Knowles is an academic.

So he clearly knows fuck all about what he's talking about. So please do feel free to call him out on any claims he may make and tear them apart as to how they won't work in the real world, because obviously we know how it works in the real world — he's just an academic. I told him he could say that. I did ask him if I could say that, and he said yes, it's fine, it's the last talk of the day, we can have some fun. We've had a good day, haven't we? Yes we have. So please give a warm welcome to William, and over to you, sir. Thank you very much.

So this talk is about, as you might have guessed, standardization in penetration testing. The title is somewhat hyperbolic. Perhaps there should be a "probably". No, it's probably not what you think. But it was born out of the process of trying to find people to do interviews with at the beginning of this project, and everybody seeming to think that it was something to do with standardization of practical methodologies, which is absolutely not the case. It is a study which was conducted to look at service models in the industry. How is the industry defining services? How are they delivering against those definitions? Are clients getting what they want? Is there consistency within this? And if not, do we need standardization? So I

thought what I would do before I start is some disclaimers. So, disclaimer number one: I am an academic. I don't work in the industry. I don't have that first-hand experience of some of the potential challenges — I'll tell you in a second about how we addressed that. But I am a PhD student, as I say. This was an academic bit of research. I have about six months left. So if anybody is willing to give me a job after I finish, after I've said some of the things which I'm going to, I would be absolutely delighted. Disclaimer number two: BSI were involved.

Yes, so there was a standards body involved in this project. It was made very clear from the beginning. So I should say also that although they were involved, they did not fund any part of this project. This was funded by a UK research council. BSI were an industrial partner within that, but they didn't fund anything. And BSI made it very clear from the beginning of the project that

they were quite happy with the answer that standardization may not be needed, or that standardization may be needed but not from a formal standards body — not a BS or an ISO. And this is important, because when I'm going through this talk and mentioning standardization, it's important to think about the different types of standards that there are. Yes, there are the formal standards; there's the BS, there's the ISO. But also, CREST and CHECK are private and consortia standards. So if I'm saying standards are needed, perhaps it might just mean that CREST and CHECK need to do something in a slightly different way. Of course, it's open to a bit of debate — these are my opinions. Generally, the results of this have been reasonably well taken.

Although somebody did get quite angry a couple of weeks ago. And they started questioning: how dare you criticize the industry? Where did you get this data from? And the answer is, this isn't my opinions. Of course, I am aggregating data, so I am somewhat influencing it. But it was a stakeholder interview-led study, of which we did 54 different interviews. So we have this nice graph here. We split the people that we interviewed into three categories: the providers, the people delivering services; clients, those receiving them; and industry bodies. Of the providers, there were 32 individuals, I think, from about 22 organizations. And of the providers, I think 18 were CREST certified and 10 were CHECK certified. For the

clients, it's quite difficult to get people to speak about their experiences in having a penetration test or other simulated security assessment on their environment. But we managed to get 15 people, and we tried to make it a broad representation of the market. So of the 15, there were a couple of micro-enterprises, so less than 10 people in the organization. There were a few small and medium enterprises, five people from local government, and three large enterprises, so more than 1,000 employees. And out of those, one of them was a financial institution. For industry bodies, we interviewed CESG, we interviewed CREST, we interviewed the Tiger Scheme, and we interviewed government bodies such as the Department for Business, Innovation and Skills. We interviewed people from the different accreditation bodies for cyber essentials.

And unfortunately we couldn't get anyone from the Cyber Scheme, the newly established Cyber Scheme, which has CHECK-equivalent qualifications. But we got quite a number of people. So 54 in total. In terms of the study's aims, what were we trying to do? What exists there for penetration testing? What certification exists for organizations delivering tests, and what qualifications exist for individuals? And is there a need for new standards, or to modify what is currently there? In the talk today, I'm going to talk about individual qualifications first, and then standards for organizations. I'm going to go over this quite quickly. I think you'll probably find the latter stuff more interesting, where we talk about the engagement. I'll split

the engagement up into three phases, pre-engagement, practical aspects of it, and then post-engagement. And then talk about the recommendations for standardization that were made.

So, individual qualifications. When it comes to individual qualifications, the way that it's structured within the UK is that it's heavily influenced by CHECK. So CESG and the CHECK scheme. As part of this scheme, they provide organizational certification and individual qualifications. For the individual qualifications, they define two levels: you have a team member and a team leader. Now, CESG don't do the assessments themselves. What they do is define equivalent qualifications. And those equivalent qualifications come from three different technical bodies: CREST, the Tiger Scheme, and the Cyber Scheme. And you can see those mapped across there. So, heavily influenced by CHECK. CREST is the only technical body which does something higher than

this. And they do that through the qualifications for the STAR scheme, which is the threat-intelligence-led, red-team type of testing for all industries, and for CBEST, which is its implementation for the banking sector — the predominant implementation program. Two things about this particular table. The first is the CCP and the equivalent qualifications. This is quite an interesting one, because I was told by what is a reasonably reliable source that CCP was potentially going to be a prerequisite to be part of the CHECK scheme. But equally, I've been told by another equally reputable source that that's absolutely not the case. So the only thing I can say is that this is potentially going to happen in

the future. And CCP may become a prerequisite. And then there's also the thing about non-UK qualifications. So when this study was conducted — it finished at the beginning of this year — things have obviously changed in this area, given the news in the past couple of weeks, when it comes to CREST and OSCP. If anyone hasn't heard, OSCP, the qualification from Offensive Security, is now an equivalent qualification to the CREST Registered Tester. That's the intermediate level here. Unfortunately, with that particular equivalency you're not able to claim CHECK equivalence — it doesn't count as a CHECK qualification. And you also have to take some additional assessments within six months. But it does provide another route

into getting that qualification. So as you might expect, there's strong opposition to any form of standardization when it comes to individual qualifications.

The obvious reason is just the difficulty of capturing skill requirements in a very static document in what is a very dynamic industry, and doing that in a way which is peer reviewed. The formal standardization process, of course, is quite lengthy and time consuming. But also it's because of the success of the UK technical bodies. There are problems. In the interviews, people described situations where, although the UK industry does successfully assess individual qualifications, there was a question of whether these exams are dynamic enough to adapt to everything that goes on within organizations. So there are issues, but the UK industry is arguably leading the world in this area and continues to improve. One question that was

asked — so we asked the people who were part of this study about the internationalization and the professionalization of the penetration testing industry, how they see it, and what they think needs to happen going forward. And there was somewhat of a consensus that there needs to be an international body for penetration testing, in the mold of what you have in other disciplines such as medicine. If you look at how they are run, they have strong international standards. And you have bodies which are able to revoke the right to practice if those standards aren't being met. And the feeling was this is something which needs to happen as the industry moves forward. And also that that needs to

be clearly independent from the industry. One of the more interesting things about this, I personally felt, when you looked at the people that were asking for this particular industry body, was that it was predominantly people in positions of management. So they have a natural proclivity for control.

People in more practitioner roles were slightly more wary of it. You can kind of see the obvious reasons why. But the main thing was that if this is to happen, there needs to be a clear distinction between practicing penetration testing and security, and doing research in this area. You know, these are completely different things.

So, organizational standards. There are three different types of organizational standards. There are those which apply to those delivering services. So I've mentioned CHECK. In case anyone isn't familiar with CHECK, this is a government initiative: you can become a certified organization for delivering penetration tests to various public bodies. As part of that process, you need to be a certified company, a CHECK company, and then you need to have qualified individuals — at minimum, I think it's one team leader doing assessments. So that's what CHECK is, and CREST, of course, is the predominant commercial sector alternative. And then there are the standards which apply to clients. So whether it's in the form of a mandate or a security control which requires a

penetration test or a vulnerability scan or some other form of security assessment. So I mentioned cyber essentials there — it's got a lot of attention recently. There is the vulnerability assessment requirement in that. PCI DSS has the vulnerability scanning requirement, and also more emphasis on penetration testing in the more recent version. And the third category is the formal standards. So this is somewhat of a mixture of the previous two:

Common Criteria — we're seeing a lot of laboratories popping up recently for doing Common Criteria assessments. And then there is also a somewhat minor role that penetration testing plays in ISO 27001. So in 27001,

penetration testing, vulnerability scans, whatever you want to call a security assessment, it is a control in itself and it falls under the category of compliance. So there are many different controls in 27001. And security testing is just one control within that.

So, the findings here. The feeling was that CHECK and CREST have done a fantastic job in pushing the quality of business processes within organizations providing penetration tests, particularly when it comes to things such as quality management systems. And CHECK and CREST were seen as the main reasons for the uptake of 27001, which is the ISMS standard, and 9001, which is the quality management system standard. There were some criticisms of a lack of independence from the industry — whether these bodies were independent. However, at the same time, there was the feeling that this situation has improved significantly in the past two or three years. And if the situation is improving, you would potentially hope that it would continue

to do so. Common Criteria — I mentioned the laboratories. Applying it in any other context is extremely difficult; it's a very big standard. There are additional standards about trying to apply it to operational environments. A lot of people, when we were doing the study, seemed to suggest, well, perhaps you could standardize a methodology — a basic, high-level methodology for penetration testing, when it comes to sort of reconnaissance and so on. But in my personal opinion, there's not really so much value in a standard such as that. That's standardization for standardization's sake. And I felt like it was only being suggested because of BSI's involvement in the project: if they're involved, well, perhaps they could do this. So penetration testing, and

I'm using it in broad terms here: security assessments and

compliance assessments. So I mentioned 27001 and its isolation to a specific control. A lot of people within the study felt that more needs to be done to look at how penetration tests, vulnerability scans and so on contribute to a wider variety of security controls — how they can be used to assess them, how they can be used to generate evidence of assurance within these environments. At the moment this is disparate and poorly documented. And then there was also, at the time, a lot of confusion about cyber essentials implementation. I'm not going to go over the cyber essentials ecosystem, if you will. There are a number of accreditation bodies, and the accreditation bodies have taken slightly different approaches based on the basic standard.

But there was a lot of confusion, even from people providing cyber essentials certification, about what other accreditation bodies were doing. And if providers are confused about that, clients must also be quite confused — and in the few cases where we spoke to clients that had experience with it, they were. So this is a very sort of high-level flow diagram of some of the things which we covered. Basically what I'm saying here is I'm going to talk about three different areas of the engagement process: the pre-engagement, the practical assessment, and the post-engagement. So this is a bit of a wall of text. It's a quote from a client that receives IT health checks and CREST assessments. So I'm going to draw

your attention to the subsection of this. What they're saying is that it's a marketplace that's shrouded in mystery, and it's difficult to assess providers on what they're offering. CREST and CHECK provide some assurance. But it seems like you need to be an expert to actually gain quality assurance about the services that you're going to buy, and that needs to be consistent in the marketplace. So there is arguably an issue about procurement. From the perspective of clients, it's obvious: it's how you know what you're buying. For providers, also, this seemed to be an issue. And this was somewhat surprising, because some of the providers — especially those providers where the people we were speaking to had a role

in procurement — were saying that this is increasingly becoming a problem when the tender process is involved. So clients are putting work out to tender, and then providers are saying, well, we're losing work to other providers that are providing a lesser service that can't possibly be meeting the requirements that have been set forth in that tender. So they're losing work as well because of inadequate services. So I thought what I would do is a bit of a straw poll. Of the people in the room, do you feel that services in the security assessment industry have consistency? So the way that these services are delivered — do you think that penetration tests are being delivered as penetration tests? Do you think that vulnerability scans are being delivered

as they are? Can we do a hands up with people that think that's the case? Yeah, there's not a single hand. And that's a bit of a problem. Vulnerability scans, I would say so. But there's not really approved vendors. But even then... I have been a client of a number of providers.

I'd say there's no consistency, either in how they do it or how they report it. I'm not convinced they should be. I've been a bit more on the side of the CREST or industry body, so my mind works in a different way. They'll have used a methodology; they'll be recognized on that chart previously. And my clients usually need it done in between, because they have used standardized approaches in the past, and people come through and say they're not satisfied with it. I can tell you that. So I'd challenge the quote you put up there. That would be more about the client not understanding the value they need,

than anything else. Because if you standardize the approach — the bad guys don't operate that way, and they change. I think there's a difference between...

But you're talking there about, you know — it sounds like you're talking about a very standardized approach to doing the assessment. I'm talking at a much higher level: the way that services are defined. Of course that includes an aspect of the practical assessment, but it's not about standardizing what you do. But is this not what CREST and CHECK are doing already? No, it's not, on the commercial level. And also, you know, it's not that we have one price that we offer everybody, or a certain, you know, standardized methodology. Well, when I was working in pre-sales for penetration testing, I had potential customers come to me and say, well, you're saying that

penetration testing is going to take this long and cost this much, and this company is going to do it in a fraction of the time, a fraction of the price. I said, well, what are they actually promising to do? And they sent me over their blurb, and once I read through it all — they're just going to run a VA scan, send out a report on what it's found, and they're not going to do any of the analysis or exploitation phases or anything like that. And then you have to sit down and talk the customer through what the difference is between what they're calling a penetration test and what we're calling a penetration test, and

why ours is taking longer and costing more, and what the different levels of assurance they're going to get from that are. And that's very time consuming for me, being somebody in the industry trying to do my job as a salesperson. And it's very time-consuming and confusing for the customer, who is getting told two conflicting things by two different providers, and doesn't necessarily have the requisite information to make an informed judgment of what they want. I would echo that, and argue — there's a difference in buying a car. You're in a buyer-beware, caveat emptor marketplace. But at the same time, the client should have some assurance to be able to, you know — they look

at CREST, they look at CHECK. Well, not necessarily. The clients themselves would answer that by saying, we don't even understand what we're buying — because that's basically what you're saying at that point: we do not understand ourselves what we're buying in the first place. And there's a step missing there, and that's to actually understand what they need and what they're buying, and then approaching it. It sounds like they're missing that out. But isn't that the obvious reason why CHECK and CREST exist — to give assurances out there? Well, I'd like to compare it to the car again. No, they don't. They exist for the benefit of the members. They're not a benevolent sort of organization that needs to make sure everybody gets the right value. They're

there to represent the interests of their members. That's their stated aim — to represent the interests of their members. Going back to the car analogy, though: if you go to a Vauxhall dealer and they sell you a 1.4-liter car passing it off as a 2.0 fuel injection, then you get them done under trading standards. I don't see that here, if somebody sells you a VA scan and says it's a penetration test. There is a product. If you get the car, and they said it was a 2.0 and you've got a 1.4, you're going to get your money back. The customer gets a product — they get a report. If that's clearly not what they asked for in the first place, then

they've got redress on the contract. And if they haven't, then they've negotiated a bad contract. And then, under the code of conduct for CREST,

that client can make a complaint. I'm currently doing three investigations right now. But three, given how many tests are procured — it's not a lot, is it? On one hand you're saying it doesn't happen, and on the other hand I'm saying, yes it does, and I'm doing three investigations right now. Clients are just going to say, we won't buy from that provider and we'll use someone different next time. It's as simple as that, isn't it? You're presuming that they have the education to do that. You're presuming that they're even open to that answer. Is it our responsibility as an industry to educate the buying client? Because when you're talking about cars, actually, it's not "I want a 1.4" or "I want a 2.0",

it's "I want a car." That's what a lot of clients are asking for. They want a car. They don't know how to differentiate between the VA scan and the pen test. And is it down to the industry to educate them, or to the companies to deliver what they believe is a car? Yeah — we sell the Bentley. If someone walks up to me and says, I want a car, I'll sell them a Bentley. If they go to one of the competitors, they might buy a Ford Focus. Is it down to us to say, well, that's a Ford Focus, and this is a Bentley, and let me show you what the difference is? There's only so much that a provider can do. A long time ago, when pen testing was

first in the marketplace, we spent a lot of time in sales — because I'm a salesperson, I'm commercial — educating the client as to what a pen test is. Ultimately, you choose, as a provider, to go after a client who is educated, or you go after a client who is mature in their understanding, because your model is mature also. And that's a commercial decision. Standardizing the industry is a good step from a methodology point of view. But taking away the commercial advantage is kind of suicidal, because the industry exists on differentiation: some people want a Bentley, some people want a Ford. But the metaphors of the automotive industry are well understood by people who buy cars, even if they're not

experts. I'm not a mechanic. I am useless at servicing my own car. But I know the difference between the Ford Focus and the Bentley. Whereas the metaphors in the security testing industry aren't well understood by customers. And I would say that, as ethical providers of commercial services, we have a duty to make sure the customer understands what they get. And there is a problem when you're trying to compete with companies that don't do that. I think this is similar to programming, where the first thing you have to do is take what the client's asked for and extract from it the requirements that are real — which is not what the client's actually asked for, it's what they need. And we've got a similar issue with penetration testing. Well,

the first thing you have to do is talk to the client, and they'll go, give me some security testing. And you have to find out what it is — extract out of them what the requirements are, what the threat model is — and use that to sell them the appropriate thing for what they're actually trying to achieve. That doesn't mean you're going to end up with one size fits all. That means you're going to want to have a range of products and try to choose the right one for them. So you may have in your portfolio the Ford Focus and the Bentley, and

say, well look, you've got no budget — have the Focus. Or, you've got plenty of money and this is really important — so have the Bentley. So it doesn't have to be one or the other. It can be both within the same company. It also might not come down to whether you've got a lot of money or not. It might be that with a car, you just want to get from A to B. And it could rely on your personal experience. But maybe you want to get from A to B in luxury, or you just want to get from A to B. There are providers out there who operate that way. It comes back to what the gentleman said about the metaphor. If you're buying a car, you

understand that. It's clear. You gain assurance yourself. But even in this industry, people are relying on third parties such as CREST and CHECK to do that for them. Both of those cars are going to be roadworthy. Whereas if somebody just had the vulnerability scan, and they think, oh great, I'm secure — they don't necessarily know whether it's actually safe or not. And I think that's fine as well, where the basis is good security testing. But there are companies where the requirement is just that they can do their business, and they get that checked; so if a testing company just passes them, then they'll pay them, but if a

company does the job ethically and gives them a fail, they're not going to use them again — because they're failing them, even though they've done a good job.

Well, that's the thing — that promotes an inferior service. And I think a good case of where we have that in the UK market is with IT health checks. So the Cabinet Office sets forth scoping guidelines. It's four pages long; two pages of that is content. It's very open to interpretation within the IT health checks. I've been told a lot of stories about people narrowing the scopes within this, because the punishments are quite severe within the IT health check scheme. It's very laborious, the processes that you have to go through if you have issues in your environment that have been found. So clients are narrowing the scopes, as you would probably expect them to. They're doing it for that reason, and they're doing it because their peers

are doing it — they don't want to look bad compared to their peers. So within this, you need to ensure that these security assessments are conducted with a degree of consistency. Because in this particular context — you were saying about the benchmark — these security assessments are being used as a benchmark of security, which is part of the accreditation process for councils and government bodies to be connected to the Public Services Network. UK penetration testing providers are knowingly delivering inadequate, inferior services in order to get work. I would argue that's morally dubious. You're putting systems at risk. But then you've got the accreditor, who's the person who's accepting risk on behalf of the government. That's what the accreditor role is.

Who's meant to come in and review the scope, understanding what that means. So there are some checks and balances associated. I think there are some bad accreditors, just as there are some bad pen test shops. But generally, I think they do quite a good job. So there's that balance of having approved companies, qualified individuals, having some recourse to argue against bad products that go out there, with somebody doing some form of accreditation — in other words, looking at the scope, making sure it's fit for purpose, before you're allowed to connect onto government systems. I don't see that anywhere else in the world. I think we are streets ahead of anything I've seen in the rest

of the world. So you can't just narrow the scope on an IT health check, because the accreditor should notice. And if you do do that, you're going to be quite heavily criticized. I mean, of course — and this is one of the things which I was going to cover later — CESG do audit reports. They're not the accreditor, but they are the people which are being used

to analyze those reports to see if the scope is what it should be.

and whether the requirements are being met for the test. Well, I'm telling you, pen testing would never just be the only check to do. You've got to look at your risk profile. Of course. And actually, your risk profile might be telling you what's wrong. One objective might be knowing what's wrong, and another might be fixing it — two different objectives. Knowing what's wrong might be sufficient for someone to make a decision about what they do next. Some people might say, well, you've got to look at it in every system, and it's going to be different. And actually, pen testing to me is not just technical — it's the whole business context as well, which is not an easy thing to

do. The question to me is just the gauge of competency. And I think, you know, that's probably why I'm saying that

having an independent competency network, I think, is a good step, because that has credibility — like engineering. I think that's a big step forward. I completely agree with what you're saying. But I think there are two different types of competency: there's the organizational and there's the individual. So before, when I mentioned that, I was talking about the individual. And I think that the UK industry does an exceptional job in assessing individual competency.

Whether the organizational services which are being defined and delivered are consistent — I don't know about that. This study certainly suggests that's not the case. And the penetration testers which I've spoken to feel that tests are being delivered at a lower level than what they're trained for, in many cases. So you can have competency in the individuals, but you also need to ensure competency and consistency within the organizations. And I'm not saying that's an easy thing to do. I'm not saying I have the answer of how to do it — the specific answer, at least. But I think it's something which requires discussion, because I believe it to be a fault within the UK market, as did many of the people involved in this study. I

think the only solution is to deliver pen tests at far greater frequency.

That way, I suppose, you'd get a far greater comparison of different providers. You should switch them — rather than using the same provider every time, you should switch to a different one. Providers get motivated by this: the last job we ever did, we'll do it again, as good as the last, so they use us next time. You just switch them.

As regards standards, the housing industry has been far more heavily regulated. Of course, there's never going to be a panacea; there are always going to be people gaming the system. That's always going to be the case. But in this case, is there quite significant, widespread gaming of the system? It really depends on what the client's motivation is. Is it a box-ticking exercise, or do they want to increase security? Well, I mean, that's the other thing, because this is one of those unique industries which arguably incentivises bad results. And I don't just mean that in the sense of, why did you find so many vulnerabilities? I mean it in the sense of, you found nothing? That's absolutely

brilliant. Thank you very much for that. So I would imagine that for quite a lot of clients, bad results are welcome. I've heard that not many test reports get read by anyone. I've certainly had a test report sent back by customers saying they want us not to report on vulnerabilities found during the test, because they'll have to fix them.

But I think that's part of understanding what your customers' requirements are, the requirements that we're talking about: do you actually want to be secure? What level of assurance do you actually need? Do you want a box-ticking exercise that means you satisfy the minimum requirements so that you're not going to get prosecuted for breaching the DPA when all of your council data gets ridiculously owned? Or do you actually want to stop that from happening?

You can't have partial assurance. Well, there's no sort of... No, you can't have partial assurance. You can't say something is secure. I'm not talking about being secure, though; it could be coming from a different level. You get a lot of organisations where they say, you have to have a pen test before you can launch your project. And the project team will go, well, we just want to get it out; we want to achieve our objective. They see the test as a blocker; they don't want it to come back with findings. But if you're a pen test company and you started gaming it, you probably wouldn't get much long-term work with that company, because as a governance activity, they'd go, that's

not worthwhile. Don't use them. But if they're happy, yeah. But I think you just need to be explicit about it. If the people you're talking to at the customer are saying, we want you to do a very basic scan, just give it a quick kick of the tires, and you write a report that says, we came and did a basic scan, kicked the tires, and didn't find anything worth reporting, then I think that's perfectly fine. If you say, yes, we deployed our full pen testing resources to the greatest extent imaginable, and we didn't find anything, when you didn't actually do that, I would say that is deeply unethical. But there's nothing wrong with adjusting your penetration testing style

and depth to the customer's requirements, as long as the customer knows that's all they want. Isn't that against what you're trying to achieve with penetration testing? Because if I, as a tester, say, I've kicked the tires and it's fine, some clients may go, well, we've had the tires kicked, it's fine, the guy from place X isn't going to get at our data. And that's not assured. It's not actually testing the security. You may as well just hire anyone down the street to do the pen test. I mean, I've had a requirement where we used pen testers to check: have we done what we said we're going to do? That was the purpose of it. You put all the security

artefacts together, we said what we're going to do, what we've got, et cetera, and he wanted someone independent to check: have you done what you said you're going to do? That's all he wanted. You set the requirements and you say, I'm going to check I've done this, this, and this. It's not unit testing, functional testing, or organisational testing. It's its own thing, because it's more than just the tech.

That was all it was; it's just an assurance process. But you've got to use CHECK or CREST to limit who can come and do that thing. I think one of the big problems is that clients still have a lack of understanding of what they're actually buying, and they have a certain amount of money that's been set aside to do it. And what we, as the responsible pen test provider, are supposed to do is give them the most bang for their buck. And it depends how big that budget is. If that person comes to me and says, I've got two grand, they're not going to get, you know, a large application test for that. And I think there's

a disconnect between what the client is expecting and what they're spending. If you ask a pen tester what their biggest gripe is, they'll say scopes are being narrowed; the amount of time they're given to carry out the test is getting squeezed and squeezed. You ask the salesperson what their biggest gripe is, and it's the fact that they're competing in a market that is dropping in value. And the client is pushing that value down; they want to pay less, constantly. That's the dysfunction in the business, and that's what we're up against. From a client perspective, the customers are expecting us to be doing the right thing anyway. So we all

know that things are happening in the world, so we should be expected to be doing better than we were, say, five years ago. I think the market's expectations are changing, and CESG is trying to address that. You should be asking, are you managing risks better? rather than just ticking boxes. So I think we have to recognise that the emphasis is changing, and CESG should be setting those expectations for the public sector in the UK. Now, that might be different if you've got international businesses, which a lot of people have, and that's probably where there are issues. I think,

you know, UK standards on their own are insufficient, because you've got other countries' standards and all this other stuff, and how aligned are we if you want to have global deployments? I think the analogy with airlines is actually a good one, because people understand the difference between flying Ryanair and flying BA. They're both getting you from A to B, but you have a very different experience. And if people just care about getting from A to B, then they'll fly Ryanair and go cheap.

And the problem is not just helping the customer understand whether they want to go Ryanair or BA. It's the airlines that are Ryanair pretending to be BA. When we're talking about consistency,

there's one thing in saying to the customer, okay, this is Ryanair, this is BA, this is what you pay, this is the level of assurance you're going to get, these are the risks that we're going to consider; that's fine. In terms of service delivery and consistency, I think what we need to look at is where companies are saying they're going to give you this nice boutique pen test service and they're just running a basic vulnerability tool and dumping a bunch of HTML on you.

I think this is basically wandering back towards the point of this talk, which is about standardisation in the pen testing industry. All of those airlines have qualified pilots. They all have maintenance regimes. They're all conforming to certain standards. And if they don't do any of those things, there's a comeback: you can ground the whole fleet. I think there's a more interesting point, because CREST will cover the competency of the people to do the task. But I've seen examples of quite big pen test companies where very similar requirements have gone in and very different proposals have come back, at different times, from the same company. The interpretation of the requirements by the companies is not consistent in the types of service they will

come back and offer. The other difference is that customers can see the difference in an airline. They can see if the seat's got padding on it. They can see if they can take two bags. So much of the information in pen testing is not visible to the customer; they can't really see what you've done unless they actually fully understand what it is. But surely there should be an end report, an end result, which should state exactly what was done, to what level of rigour, and what the results were. Then you can get two pen test companies to do the same thing and compare the two. That ain't rocket science.


You should be demanding more from your suppliers. Even if you define exactly what you did, the customer doesn't necessarily understand how that relates to their business, and that's a different thing. I can be very precise and say, I ran these tools against these subnets, and then tried these manual attempts with these pieces of software, blah blah blah, and be very specific about what the scope was and what tools were used and everything like that, but that fundamentally doesn't tell the customer, what do I need to worry about? A better analogy might almost be getting your car serviced. You could go to one place who will change the oil and check the

tire pressure, and you go to another who will also recharge the air conditioning and do all these other things. And they'll probably both give you a report. There'll be a lot of customers that wouldn't understand the difference. They'll go to the cheaper one and save money. Yeah, they wouldn't understand the difference. No question about that. And MOTs, yeah. At the basic level, this is what we're talking about: what is the minimum level to be expected? That's a good analogy for security. Yeah. At the same time, you even get people gaming the system with that. But, you know, with a car, you've basically got four wheels, you've got windows, you've got doors. You've got some pretty common things. And you can check them.

But this is more of a... It's not a product, it's a service. Pen testing is a service; you don't get goods. So something like car servicing, or the airline example, might be a better analogy. You can't say, I've got four doors. What is the definition of what they're delivering? I'm going to have to move on, though I'm very happy about the discussion; that's what this entire project was about.

I have, actually. It's one of the chapters in my PhD. Yes, so terminology issues are rampant in what we mean by these services. Standardisation was widely supported among the people that we spoke to: 18 of the providers and five of the clients, and not everyone was questioned about this. The industry does have the Penetration Testing Execution Standard, and its next version will include some form of levels within it. The extent of the visibility of that particular community standard is questionable, though. I'm going to skip over that because we don't have much time. We've been talking about regulations and punishments and scoping issues. Yeah, we've done punishments. Yeah, the practical assessment. So the practical side of the assessment wasn't an emphasis of the study; it wasn't looked at in any great depth. But we did

question people about what methodologies they used for their testing, and you can see some of the answers there. And I believe that, in some cases, I was just being told what I wanted to hear. The reality is there is no peer-reviewed, accepted methodology for what a security assessment is, so all organisations have their own stuff which is mixed together. And you'll notice that some of these answers don't even make sense; I'm not sure what some of them really mean. Presumably it's the OWASP Testing Guide. So these were some opinions of providers on other providers' reports. It was pretty shocking stuff. No one had any particularly high opinions of what everyone else was doing, except some people would say, you know, at the top end of assessments, you

know, there's some consistency. The clients, again, it's the same story; the clients have the same opinions. There's a lot of negativity, a lot of variability in what's coming back from the providers. One of the interesting things is that it was the larger enterprises that reported the most issues. It was the people with in-house competency that were saying, we're getting a lot of variability in what we're seeing being offered to us. And those that were most happy with what was being offered were the smaller enterprises, the micro enterprises, which didn't actually have an in-house IT capability and were just giving us second-hand opinions about what their developers were

saying about this. They had contracted out certain parts of their development.

So, in terms of the structure of penetration testing reports, the management sections were widely considered to still be far too technical, which is not so hard to believe. It defeats the entire point of having a management section in the report anyway. Clients were saying that if they have to pass the report to any non-technical people within their organisations, they need to rewrite it. In the technical sections, clients were looking for more best-practice recommendations. And I would pull one item out of this, given our current discussion, and that is engagement narratives. Engagement narratives describe exactly what happened during an assessment. If a provider is doing real penetration tests, they should have absolutely no problem narrating that engagement. But if they've been delivering

a lesser service, they're going to find that very difficult. So beyond the high-level issues, there were a number of deeper ones. Certainly when it comes to metrics, providers believe that metrics are their way of providing value to clients, which may be the case, but clients were also saying to us that it's very difficult to track performance over time and between providers. Then there's the quality of recommendations: they were often felt to be cursory best practices, with no real attempt to understand the organisation's risk posture. And then, when it comes to validating fixes, there's the question of whether we should be including proof of concept, despite how contentious that is, and how difficult it

is to educate and inform clients and to facilitate that process of testing any fixes they make without having to purchase a reassessment. So, are providers doing enough? I think that's a good question; we've had lots of discussion about it today. Do we need to audit reports? We discussed this as well. CESG do audit. CREST don't do audits, but they do a sort of recertification of member companies' own methodologies. And I think it's a good question. From a personal perspective, I think auditing is the best way to ensure consistency in the services which are delivered. What you write in a document is not necessarily what you are delivering as part of a service. And

there are obvious challenges to doing this in the commercial sector. Who would do it, for one? Would we see CREST member companies auditing other CREST member companies? Where is this technical expertise going to come from? So, the recommendations. Someone within the study described the provider market to me as a bit of a wild west. We do have things such as CREST and CHECK, which provide benchmarks of assurance, but a lot more still needs to be done to ensure consistency in services. The first recommendation we put forward was about standardisation of terminology, about what we mean by testing. The form that takes, and who enforces that particular standard, is up for debate. My personal opinion

again, and I'm talking from a personal perspective, is that that is best done through consortia and private standards such as CREST.

Then reporting guidelines: not strict standardisation of reporting, but guidelines to educate and inform clients about what they should expect to receive within reports, and also to provide some sort of benchmark for providers to deliver against. But at the same time, it's not strict standardisation, and there's no requirement to meet it. And then the third one was about penetration testing auditing guidelines. I've mentioned ISO 27001; if anyone isn't familiar with how it's implemented, it doesn't actually mandate specific security controls. It mandates that you do a security risk assessment and that you implement your security controls against that. So you effectively opt in and opt out of security controls based upon your risk posture. And the recommendation was about greater

auditing guidelines about how security assessments can be used as evidence of whether the controls that have been implemented are consistent with that risk posture.

So I guess this ends on a good question. Change in the right and positive direction can only occur if the people on the ground, the people doing the assessments, people such as yourselves, care enough about making that change. And I personally believe that you should, beyond the purely selfless reasons of actually wanting to help people improve the security of their environments. There are also a lot of selfish reasons why you might want to do that. The way that the industry evolves is going to affect what you do on a day-to-day basis. So if you want to do more interesting engagements, you need to try and encourage people to deliver the services as they are being marketed.

So thank you very much for coming to the talk. I'm sorry to have run over a little, but we had a very nice discussion.