
Hey everybody, can you hear me on the mic? Okay, awesome. Hi everyone, I'm Jessie Auguste. I'm a software engineer at a cybersecurity company called CybSafe, which is based in the UK, although our co-founder is actually from South Africa. I come from a psychology background and I love to explore the intersection of human behaviour and behavioural science with cybersecurity and secure software development. I'm also the co-host of a podcast and an avid member of the OWASP security community. I'm really excited to be here talking to you today, so thanks so much for coming to listen.

I'm hoping we can explore some of the changes in the threat landscape in 2023, as well as the challenges within web development and security. I really want to focus on the human aspect, put a behavioural lens on it, and look at some of the concepts we'll come across from a slightly different angle. Similar to this morning's keynote, I don't have the answers to a lot of the problems I'm going to present. I've got some ideas, but it's definitely something I'm hoping we can explore together. I'd also like to discuss some of the strategies we can use to overcome these issues.
Try to imagine a world where web applications are rife with vulnerabilities, where data breaches become rampant, exposing personal information and even confidential corporate data. Identity theft skyrockets, criminals exploit credentials to wreak havoc on individuals, and financial fraud escalates. Imagine how the consequences extend beyond the digital realm to disrupt our critical infrastructure, our transportation, our healthcare, even national security. Unfortunately, we don't have to imagine too hard, because a lot of what we're going to discuss today is already happening. But I don't want today's talk to be all doom and gloom: I want us to explore some of the reasons why cyber attacks are becoming more prevalent, and what we can do to remediate some of them.

According to IT Governance, there have been 953 declared incidents this year, and the total number of breached records has just passed 5 billion. Web application attacks are involved in 26% of all breaches, making them the second most common attack strategy. I really want us to think about what we can do, and how we can feel empowered to tackle these seemingly insurmountable challenges. So I want to talk about our responsibility as the builders of technology.
I think that if you truly care about the planet and your role as a builder of technology, you need to be user focused, thinking about the end user and their protection when you're creating applications. Security is a shared responsibility between all of us, to protect people and their data from unauthorized harm. I know I'm preaching to the choir, I know you're all aware of this, but I think it's really important to centre how we are all part of the solution in what is quite a scary time. As builders of technology, we are the final frontier: we are the people who protect the world of apps and the world of the web, and it's all of our duty to ensure we're doing everything we can to fight the good fight.

The reality is that the threat landscape is vast. We've had, and will continue to have, a number of talks today that go into the specifics and the different ways this can appear. One thing that can often be a barrier to feeling empowered is that the threat landscape is constantly changing: it's really hard to keep up to date with those changes, and it's really easy to be overwhelmed by the doom-and-gloom perspective. We're always having to adapt, always having to make impactful decisions without necessarily understanding how they're going to play out or the impact they'll have in the future.
So I think it would be helpful to look at some of the research that's come out this year and see what we can learn from it. The first thing I'd like to talk about is CyCognito's State of External Exposure Management report. The research team aggregated and analyzed 3.5 million assets across its customer base between 2022 and May 2023. The sample data spans multiple industry verticals and includes a mix of small, medium and large enterprises across the globe.

One of the key findings I think is important to pull out is that 74% of assets with PII are vulnerable to at least one known major exploit, and at least one in ten of these web applications has an easily exploitable issue. On top of that, 70% of the applications in the survey lacked a WAF or an encrypted connection (HTTPS), and 25% of all the applications surveyed lacked both. And one finding we'll discuss in a little more detail: for every easily exploitable, critically severe issue, there were 133 easily exploitable high, medium or low severity issues.
We'll come back to why that matters, but I want us to focus on how every security team naturally looks to prioritize the highest-risk, most exploitable, critically severe issues. Of course that makes sense, but equally we all need to understand the importance of prioritization and the context around it, and we'll get into that shortly.

At the start of 2023 we had ChatGPT unleashed onto the world, and the release of these mass-market pre-trained chatbots across 2022 and 2023 has been remarkable. Businesses have been eager to harness the power of LLMs and are rapidly integrating them into their operations and client-facing offerings. Yet with the speed at which we've been implementing them, a lot of organizations haven't had a moment to pause and think about the security implications of this technology, so a lot of applications are vulnerable to high-risk issues. ChatGPT turned one year old this week, and we've seen incredibly heavy adoption, but what have we seen this year from a security perspective? There have been a lot of high-profile incidents where companies have seen leaks, with people putting personal data into ChatGPT and similar large language models, and there has been something of a reckoning as people step back and realize we're not necessarily thinking security-first when implementing this new technology.

In Snyk's State of Open Source Security 2023 we saw that AI code generation tools have achieved blanket penetration and are now deployed by 92% of organizations. And with more research into how LLMs are being used at work, Cyberhaven detected and blocked attempts to input confidential data from 4.2% of the 1.6 million workers at its client companies. What does that look like? In one case, an executive pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another, a doctor input a patient's name and medical condition in order to draft a letter to the patient's insurance company. I think these are great examples of the failure to make sure everybody is aware of the issues, and there is so much to dig into here, but the important thing is to see this as a failure of the system as a whole. It's not these two individuals who caused a problem: the question is what these organizations could have done to prevent this from being possible.
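To make that concrete, here's a rough sketch of the kind of guard rail a tool like Cyberhaven applies before a prompt ever leaves the organization. This is purely illustrative: the patterns and the names (`PATTERNS`, `check_prompt`) are invented for this example, and real data-loss-prevention products are far more sophisticated.

```python
import re

# Purely illustrative guard rail for outbound LLM prompts.
# These patterns are toy examples, not a real DLP rule set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # NHS numbers are 10 digits
    "label": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def check_prompt(prompt: str):
    """Return (allowed, matched_rules) for a prompt about to be sent to an LLM."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = check_prompt("Summarise our CONFIDENTIAL 2023 strategy for a deck")
print(allowed, hits)  # False ['label']
```

The interesting design point is where the check lives: at the boundary, before the data leaves, rather than relying on every individual remembering the policy in the moment.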
It's also not all bad: we have seen some great security developments around LLMs. Aankia Sha gave an incredible keynote at the OWASP conference earlier this year about how we can use LLMs and generative AI to fix software vulnerabilities, and we saw OWASP themselves release the Top 10 for LLM Applications. So there are good things happening; it's not all scary.

But with that context, the increase in attacks, the number of breaches, the implementation of technology before we've put the guard rails in place to prevent the catastrophic impacts it could have, how do we as security experts make sure we're doing the best for our users and the people using our technology? We hear the term "shift left" a lot; the term originally appeared about twenty years ago. Briefly, it means considering and implementing security practices earlier in the development lifecycle. If we think of a typical release cycle, shifting left means ensuring we consider security implications from the beginning, from the design stages, before we've deployed, before we're monitoring things.

However, despite this, we're still seeing a rise in issues, so it's clearly not the way we solve everything. We've been doing this for a long time and the stats show we're still experiencing all of these problems, and even this morning's talk on offensive security and continually building up your defenses pointed to a diminishing return. I believe most organizations have shifted left, or are continuing to shift left, and we're still seeing similar problems. So what else can we consider? A GitLab global survey from 2021 found that only 27% of developers saw security as a critical part of their role.
Even despite all our efforts in the security community to shift left, I think that's quite a worrying statistic. It shows that people still aren't feeling empowered, still aren't feeling part of the solution, and ultimately don't see themselves as agents responsible for delivering securely. So what are some of the reasons this could be the case? If we look at Snyk's State of Open Source Security 2023, we see false positives and automation: 61% of respondents said automation has increased false positives. Think about what those false positives do to developers, people who are trying to deliver, innovate and build quickly. The alert fatigue, the exhaustion, the mental health challenges and the team turnover can all be reasons people feel less willing to engage in security practices. Alert fatigue in cybersecurity is when professionals are overwhelmed by the number of alerts, resulting in decreased productivity and time wasted filtering things out. Where we see a lot of false positives, I believe we end up missing genuine threats and negatively impacting the security of our applications: in trying to highlight everything we think needs attention, we overwhelm our teams and leave them less able to focus on the things that really matter.
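One practical angle on alert fatigue is making triage context-aware rather than severity-only, which also speaks to that CyCognito finding about easily exploitable lower-severity issues. Here's a minimal sketch; the field names and scoring weights are entirely made up for illustration.

```python
# Illustrative, context-aware alert triage. All field names and weights
# are invented for this sketch; real prioritization is far richer.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def triage(alerts):
    # Collapse duplicates of the same finding on the same asset,
    # keeping one representative plus a count.
    unique = {}
    for a in alerts:
        key = (a["rule"], a["asset"])
        if key in unique:
            unique[key]["count"] += 1
        else:
            unique[key] = dict(a, count=1)

    def score(a):
        s = SEVERITY[a["severity"]]
        if a.get("exploitable"):
            s += 2  # easily exploitable issues jump the queue
        if a.get("pii"):
            s += 1  # assets holding personal data matter more
        return s

    return sorted(unique.values(), key=score, reverse=True)

alerts = [
    {"rule": "weak-tls", "asset": "app1", "severity": "medium",
     "exploitable": True, "pii": True},
    {"rule": "verbose-errors", "asset": "app2", "severity": "critical",
     "exploitable": False, "pii": False},
    {"rule": "weak-tls", "asset": "app1", "severity": "medium",
     "exploitable": True, "pii": True},  # duplicate alert
]
ranked = triage(alerts)
print([(a["rule"], a["count"]) for a in ranked])  # [('weak-tls', 2), ('verbose-errors', 1)]
```

The point of the toy scoring is that an easily exploitable medium-severity issue on an asset holding PII can outrank an unexploitable critical, which is exactly the kind of context that raw severity sorting, and alert floods, tend to bury.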
So one of the issues I want to explore is making sure that, as security teams, we align with engineering and product and are embedded in them, but not just in a way where we put up restrictions and guard rails for them to passively adhere to. It should be done in a way that makes them feel like agents of change, able to contribute to building securely and feel a part of it.

One of the things I think we need to do is empathize more with what a developer's and a product team's goals are. Think about the DORA metrics, which are quite often what a development team's performance is measured against: how quickly are they deploying, what's their lead time for changes, what's their change failure rate, how long does it take them to restore a service? All of those things mean they are incentivized to build quickly and rush things out, and you can see the friction with what we as security professionals are trying to do. We want things to be considered, we want cohesion with best practice, and we want to make sure we've covered all the aspects that are important for remaining secure. So there's a disconnect between our goals and theirs. If developers are under that pressure to iterate quickly, what can we do in security? This is where we come back to addressing that 27% of developers who don't feel security is their responsibility: making sure they feel they are part of the solution, and that they feel responsible.
I also think we should put far less emphasis on shaming developers, end users, and whoever happens to be that day's cause of the issue. It's not enough for the controls to be in place, and I don't think an adversarial approach can lead to lasting change. There's plenty of research showing that phishing simulations and remedial, compliance-driven security awareness training lead to guilt, shame and disengagement from best practices. If you make somebody feel bad for what they've done, they're much less likely to empathize with you, understand what you want to do, and want to help you do it. One example is a report from Avast which found that 40% of employees at small and medium-sized organizations who mistakenly click on a malicious link, and know they could be held liable, are much less likely to report the incident. I think we can extrapolate that to development: if people involved in breaking things in production feel they can't report it, or that they'll be reprimanded for it, they're much less likely to report it early and much less likely to want to understand what happened and learn from it. And "responsible" is a strong word: the people involved in these incidents often feel guilt and shame, and the same Avast study found that where there was shaming, there was a lasting negative impact on employee well-being and damaged relationships.

So we've gone through some things that haven't worked. Now I want to go through some concepts and strategies we can use to understand the people we're trying to protect and involve, and the barriers they're experiencing.
We spoke a little about implementing guard rails, adding more and more layers of defense, and maybe that's not the thing that is working. So I really want to talk about the Peltzman effect, and I love that there was a car example in this morning's keynote as well, because I'm going to extend it a little. The Peltzman effect was introduced by an economist called Sam Peltzman in his study "The Effects of Automobile Safety Regulation". In the 1960s a ton of new automobile safety measures were introduced, such as mandatory seat belts and improved car safety technologies, and Peltzman wanted to study how this affected the number of automobile deaths. What he found was that there was no decrease in death rates, and it led to the theory that because drivers felt safer, they were more likely to take risks, which consequently increased the likelihood of a car crash occurring. Even though these people were safer as a result of the technologies introduced, they were aware of those technologies, and that awareness led them to make riskier decisions. So while safety measures like guard rails, the preventative measures we put in place, the defense systems we build, can certainly help to lower risk, the Peltzman effect suggests that when these safety measures are implemented, people tend to increase their risky behaviors.

What does that look like in security? Think about the tools we introduce. There's a study by Aronal, not yet released but due for publication in 2024, which found that security tools were positively correlated with a computer being infected, with the risk level of the computer, and with malware infections. What reduced risk was not tooling but the security activities the organization engaged in: there was a direct negative correlation between the number of security activities within the org and both the risk of the computer being infected and the risk level of that computer. And, similar to the Peltzman study, people who had more tools exhibited riskier online activities, and the risk level of their computers increased. The overall conclusion of this study, which I really recommend you go and look at because it's fascinating, is that security tools alone are not sufficient to protect users from malware, and there are things we need to do instead of blanket-applying security training and the guard rails that prevent people from building things.
And if we go back to the ChatGPT examples, the people inputting sensitive information, I think that also comes down to System 1 and System 2 thinking. System 1 is our brain's fast, automatic, unconscious and emotional response to situations and stimuli: absent-mindedly reading text on a slide, or tying your shoelaces without a second thought. System 2 is the slow, effortful mode our brains operate in when we're trying to solve complicated problems: parking your car in a tight space, or working out the quality-to-value ratio of your lunch. Something I think we need to do is make it easier for people to engage in System 2 thinking. For people to feel they can make those slow, conscious decisions, they need to be informed about the risks, really understand the implications of their behaviors, and feel part of the solution. If somebody doesn't feel they have any responsibility, why would they slow down, consider, and engage in the System 2 thinking that stops them pasting their entire 2023 business plan into ChatGPT?

The next thing I want to talk about is community. I know I'm preaching to the choir here, because I have seen and heard of some incredible initiatives by the people in BSides, the BSides community, and the security and tech community in Cape Town. But I really think we all have a responsibility to bring more people into the cybersecurity industry. We can engage and galvanize a lot more interest if we get involved with community initiatives, especially if we involve people from diverse backgrounds.
Because, as we all know, the more schools of thought and alternative perspectives you have in technology, the better, more resilient and more innovative that technology will be. And it comes back to making sure everybody feels like an active participant in security: I really think it's important that we focus on moving people in society from feeling like passive recipients to active participants. And forgive me for these slides; I thought there was nothing more dystopian than using AI-generated images, and the faces are quite scary.

Now I want to talk about some strategies we can use to engage people. For too long, education and awareness has been a beating stick, something we use to tell people off, a tick-box exercise that rarely works. I'll talk through a couple of examples later of ways we can make it more fun and engaging, something that hopefully helps people feel it's not such a scary thing to get involved in. Then there are no-code tools. If we truly want to shift left, to be involved in design and the earlier stages of building technology, then we need non-technical advocates. We need security champions within our product and design teams, so we need to meet them where they're at, engage them with the right resources, and source resources that are entry level. And again, coming back to responsibility, this is a step further than shifting left: rather than just inserting our presence in those places, we should actually help people understand their role within that part of the process.

Then there's context. So often within the security community we're great at coming together and sharing what we learn from massive breaches, incidents and notable attacks, but we rarely share that beyond our own community. Context is key to making people feel included and interested, and to giving them the opportunity to explore things in a current, engaging way.

Another thing we often overlook is the impact on early-career security and development professionals. A lot of people now come into software engineering from boot camps, and I'm one of those people. I did a boot camp quite a few years ago, and I remember there was no talk at all about security.
I did not know what secure web development was; we didn't touch it. That's quite a scary thing when you join an organization and are expected to have all of this knowledge, so again, there's a responsibility for us as engineering and security teams to meet people where they are and empower them early. Part of this is making it safe to make mistakes. We want cybersecurity to be something we can explore. I don't mean mistakes like pasting your entire business plan into ChatGPT, but I do mean having the opportunity to explore, with the safety of the people and systems around you, so that you can learn from those mistakes.

Finally, I want to talk about a couple of tools I think are really cool for getting people involved and interested in cybersecurity. For non-technical individuals there's OWASP Threat Dragon. I think threat modeling is an incredible thing and it's massively underused, especially in those earlier stages and by non-technical people, and Threat Dragon is a free, open-source threat modeling application built to be simple, fun and engaging. It's also got the cutest mascot I've ever seen, so that's a bonus. Then there's the OWASP Juice Shop. I love this one; I've taught courses around it and introduced it to junior developers, and it's a great way to introduce the OWASP Top 10. It's an intentionally insecure, vulnerable app that you can play around with, break and attack, and it really helps you learn the practice of security testing. It's full of realistic examples: the vulnerabilities are intentionally planted, and the user is supposed to be able to exploit them and find the underlying issue.
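To give a flavour of what Juice Shop teaches, its best-known challenge is a SQL injection login bypass. Here's a self-contained sketch of that class of bug; the table, columns and credentials are invented for the example and aren't Juice Shop's actual schema.

```python
import sqlite3

# Illustrative only: the class of login bypass the Juice Shop login
# challenge teaches. Table, columns and credentials are made up here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (email TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin@example.com', 's3cret')")

def vulnerable_login(email: str, password: str) -> bool:
    # Vulnerable: user input is concatenated straight into the SQL string.
    query = (
        "SELECT * FROM users WHERE email = '" + email +
        "' AND password = '" + password + "'"
    )
    return db.execute(query).fetchone() is not None

def safe_login(email: str, password: str) -> bool:
    # Safe: parameterized query, so input is treated as data, never as SQL.
    query = "SELECT * FROM users WHERE email = ? AND password = ?"
    return db.execute(query, (email, password)).fetchone() is not None

# The classic payload turns the WHERE clause into "always true" and
# comments out the password check.
payload = "' OR 1=1--"
print(vulnerable_login(payload, "anything"))  # True: logged in with no credentials
print(safe_login(payload, "anything"))        # False: payload treated as plain text
```

Walking a junior developer from the payload to the one-line fix is exactly the kind of "break it, then understand it" learning Juice Shop is built for.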
I think Juice Shop is an example of something bigger, which is that we want to make security a bit more fun, to give it that light-hearted nature, because for somebody entering the industry, somebody looking from the outside in, it can seem quite impenetrable: hard to access, hard to get involved in, hard to feel like you can do it. Anything that makes it easy, fun, and a little more appealing, that says "I can be a part of this", is really important.

So, to summarize: I think shifting left alone will not address the fundamental human issues we're facing. I think we need to address the human element of secure development, and the main ways we can do that are by empathizing with, engaging, and understanding the people we're trying to involve and help overcome the challenges we're facing. Thanks so much for listening.

[Applause]

I don't know if we've got any questions?
Can you tell us a little more about how you engage with boot camps and with previously excluded people? For example, I run a nonprofit organization and would like to engage on that for South Africa. Can you tell us a bit more about that?

Yeah, absolutely. I've worked with an organization called Coding Black Females and created courses around the OWASP Top 10, and one of the things that was really, really impactful was making it relatable and bringing it back to real-world examples. For every concept we were trying to teach, we had a notable example from an app they were familiar with, like the Facebook hack where you could gain elevated privileges and become an admin of a group, and understanding that all that took was modifying a URL. It's about bringing things back to their most basic example, in a way that's grounded in the reality they know and the context they understand. When you humanize mistakes, and show how simple at least the simpler ones are to solve as well, people feel like: okay, I can understand that concept.
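That URL-modification example boils down to a missing authorization check, what OWASP calls broken access control. A minimal sketch, with all the data and function names invented for illustration:

```python
# Illustrative sketch of the broken-access-control pattern described above.
# GROUPS, the users and the function names are all invented for the example.
GROUPS = {
    "123": {"name": "Book Club", "admins": {"alice"}},
    "456": {"name": "Security Chat", "admins": {"bob"}},
}

def promote_member_vulnerable(group_id: str, user: str, new_admin: str) -> bool:
    # Vulnerable: trusts the group_id taken straight from the URL and
    # never checks that `user` is actually an admin of that group.
    GROUPS[group_id]["admins"].add(new_admin)
    return True

def promote_member_safe(group_id: str, user: str, new_admin: str) -> bool:
    group = GROUPS.get(group_id)
    if group is None or user not in group["admins"]:
        return False  # authorization check: only existing admins may promote
    group["admins"].add(new_admin)
    return True

# "Modifying the URL" is just changing group_id: mallory promotes herself.
promote_member_vulnerable("456", "mallory", "mallory")
print("mallory" in GROUPS["456"]["admins"])               # True: privilege escalation
print(promote_member_safe("123", "mallory", "mallory"))   # False: request refused
```

The fix is one guard clause, which is exactly the point when teaching: the vulnerability is famous, but the remediation is something a beginner can read and write.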
And then they're not so scared of approaching the more complex ones. But I'd be really happy to talk to you more about this afterwards as well.
Cool, thanks so much for the talk. This is maybe less of a question and more of a comment, or my opinion, because it's something I've thought about too. I've worn a developer hat for a long time, and I think half the problem is the education aspect for developers. But there are a lot of developers who have a very good idea that, you know, leaving exposed boxes online is probably not a good idea, and there's that rush to get a product out, and they're coming up against their goals not being aligned with the true security of the organization. My opinion, and I understand there are some issues with this, is that developers need to push back and actually say: hang on, we can't develop with this infrastructure as it is, we need to include security controls, and it is going to delay the project deadlines and what was promised. But the issue is that developers feel that's at odds with job security. I think that's really the challenge that needs to be solved, and you've kind of touched on this: making it okay to push back. I don't know if you have any follow-up.

Yeah, no, absolutely. That's the struggle of communicating, of having the business understand why security needs to take precedence when you're building, developing and iterating on features. And I think that's also an example of a communication issue, because the people applying the pressure and making those decisions aren't necessarily understanding the implications. So, 100%, I think there needs to be more advocacy higher up, and also, yes, from the engineers and developers, to be able to stand up to that. But that's one I'm not sure how we can always solve. That's a really good point.
So I have a question, more following on from your question. I'm not a developer, but a lot of talks mention guard rails, and I think you mentioned guard rails as well. As a developer you want to get your application out as quickly as possible, and I've got friends who are developers; a lot of them are introverts and too scared to put up their hand because of fear for their jobs. What if there were tools you could really integrate into the tools you already use that aren't forcing you to code securely but guiding you? Maybe surfacing information to say: we've picked up that what you've developed isn't secure, here's some training material you can click on and investigate in your own time. Or, maybe not ChatGPT or large language models, but tools that could help without you having to understand security: click on something, implement a secure code component, and carry on. Does that make your life easier, versus having to study and become a security expert? Because developers aren't security experts, and they're not there to be security experts. Tools that really integrate into your platforms while you're coding, that guide you, give you insights, pop-ups that say: here's some training, because we've picked up something, go and investigate in your own time why you should be coding differently or using secure code. Does that make life easier for developers, versus them having to become security experts?

Yeah, I think there's a balance there. As much as I did say that false positives can often be detrimental and cause alert fatigue, there is an importance to automation and vulnerability scanning, things that can catch what developers won't necessarily see. And that can also support a developer's case: "it took us longer", or "we're going to need longer, because there's X, Y and Zed we now need to remediate and think about". But that's the dichotomy, right, that constant trade-off between those things.

Awesome, thanks very much everybody.

[Applause]