
BSidesSF 2025 - State of (Absolute) AppSec (Seth Law, Ariel Shin, Lakshmi Sudheer, Ken Johnson)

BSidesSF · 2025 · 45:17 · 89 views · Published 2025-10 · Watch on YouTube ↗
About this talk
State of (Absolute) AppSec — Seth Law, Ariel Shin, Lakshmi Sudheer, Ken Johnson. Join Seth Law (@sethlaw) and Ken Johnson (@cktricky), co-hosts of the Absolute AppSec podcast, for a panel discussion on the current state of application and product security for 2025. https://bsidessf2025.sched.com/event/64158b8c85d31006f446a5384fa5fc41
Transcript [en]

All right, good afternoon, everybody. Welcome to our special panel, State of (Absolute) AppSec, with Seth Law, Ken Johnson, Lakshmi Sudheer, and Ariel Shin. Thanks, everybody, for coming. If you want to do Q&A, please take a picture — we're using Slido — and there's also one for feedback. If you have any questions after the talk, the speakers will be around, so you can talk to them. And with no further ado, thank you very much.

And we're live with an in-person episode of Absolute AppSec. I'm Ken Johnson, @cktricky on social media, joined by my co-host Seth Law, @sethlaw on social media. Seth, say hi. Hey everyone, welcome back to another episode. We're super happy to be here, and my adrenaline is going because of the lights, so just be kind today. Be kind. We are super excited to have Lakshmi and Ariel with us today. They are both in AppSec and have opinions they would like to share — hence the reason we invited them to be on the panel with us. But we don't want to spend too much time talking about us, because we want to get into the state of application security, what's going on in

the industry. Obviously there might be some new AI thing I've heard about that we will discuss a little bit, but we don't want to overdo it. So the first question — and we're going to start down the line with Ariel, put her right on the spot and have her jump into it — is: what's working in product security today, and what's still broken despite our best efforts? Okay. Hi everyone, I'm Ariel. For me, what's working today is the move toward secure by default. We've found that you can reduce entire

classes of vulnerabilities by moving toward secure by default. There's more to a secure-by-default program than just having that default available: you still need partnerships, and you still need to invest in making the tooling work — fewer false positives, actually reducing the different edge cases. But as a whole, I think we're trending in the right direction. In terms of what's still broken, a big one for me is prioritizing risks well, for a couple of reasons. One, there are still a lot of unknown unknowns — we may have some known unknowns that we can put into our risk register, but there are a lot of unknowns out there. And second, a lot of our most

imminent incidents don't come from what we've identified as our top risks in the risk register. Our top five might have nothing to do with the next incident or breach we're part of. The last piece is that we have a lot of recurring or persistent risks that may be rated as a medium or a low in your risk register, where we don't have the resources or the coordination to actually get long-term, meaningful remediation. Okay. And I want to stick with this for a little bit. So, secure by default — let's start there. As a third party, I go into

a lot of companies, and I do see secure defaults. We've had discussions on the podcast in the past with people who talk about the one true framework or the one true way of doing things, and Lakshmi, I specifically want to talk about where you're at with secure defaults, because that's kind of where it started. But I still don't see a huge change from a startup perspective in the secure-by-default world. Yes, people might be using a secure framework or something else, but as devil's advocate: the adoption of secure defaults just hasn't happened as fast as I would have expected. So what can

we do there to actually keep pushing that along? And also, how's it going in your world, Lakshmi, with secure defaults? Then we'll dig into the risk prioritization, because that's a good question. Sure, great question. Let me start with a quote from Jason Chan, who actually coined the term "paved roads." He says that when we create paved roads, it's our responsibility as security engineers to make sure they don't lead off a cliff, and I think that second part is probably something we as an industry should work a little more on. Why does it work, or why does it not work? I think we created

paved roads and secure defaults to eliminate certain systemic issues and also raise the bar on that foundational standard in security. That doesn't mean everything's solved, and it doesn't mean it just stays there — if we don't evolve it, it's not still going to be fine. I think today in the industry, as well as for us at Netflix, one of the consistent challenges we're facing is that we have newer domains we're getting into. The world is moving toward AI — which obviously we have never heard a word about in the last few days at this conference — but we are progressing at such speed as organizations and from

a technology perspective, and these paved roads cannot catch up. So what happens is you have a few trails here and there with rudimentary fences that we assume are guardrails, and I think that is probably the root of why secure defaults aren't delivering as much value as they could. And coming back to what Ariel was talking about — risks don't translate to incidents. Do we actually learn from our incidents? I know we're going to get into that a little later, but I think those are some of the challenges with secure by default. Can I be a devil's advocate and also say

that I still think we've raised the bar a little bit in all of our organizations by thinking about secure by default today? Yeah. I mean, as a developer, I want to give that to the organizations I'm consulting for. But a lot of times my hands are tied as a third party — I can only recommend so much. That does feed into the risk prioritization question as well: how do you identify — and maybe we'll kick this one to Ken and make him actually say something — how do you actually

prioritize which class of vulnerabilities you're going after for secure by default? That's part of that prioritization. And is that the strategy that makes the most sense? Yeah. So, Ken — oh, you're asking me? I'm asking you a question. Well, I think it was successful when I was at GitHub. It's been a couple of years since I've been in a ProdSec-style role, but I think eliminating certain classes of vulnerabilities absolutely worked. We did a lot with cross-site scripting and Content Security Policy — a lot of secure by default. So if you were writing HTML pipeline stuff, you

know, we did a lot to secure that. But my question actually is more — because we're talking about secure by default in our organizations as ProdSec members — do you all think we could be doing more as a general community? Like Next.js: that became kind of a thing people latched on to. That's a perfect example of somewhere we could go and start contributing to an open-source framework in a meaningful way, to actually attack the secure-by-default problem in an open-source framework. That's just one example, and I'm curious if you all have feelings in a

broader sense about the community: how can we actually contribute in a meaningful way to secure defaults and just improve things in general? So, I'm going to take the devil's-advocate approach again: oftentimes I feel like a classic AppSec team member can't meaningfully contribute to secure defaults in terms of building them. You actually want to partner with your platform engineering team to really pave that way for you, and you kind of act as the glue between the platform team and the engineers, identifying what the priorities are. But if your platform team doesn't want to prioritize the secure default you're pitching, it's not going to move forward. So

you have to work with your platform engineering team and build that relationship in order to get something. I was going to say — that's perfect relationship building right there. It's a great opportunity. I'm curious, how much time have you spent inside your organization — I think I know the answer — on relationship building, and how did you go about that? Okay, so just for funsies, I did something where I went and analyzed all of the BSides SF talks from 2015 to 2025, and what I noticed as a trend was that we've been talking about empathy with developers and partnership for a

long time. And when I say we, it's not Netflix — it's the industry itself. At Netflix, I think one of the things I have done, or we as an organization have done, is try to be more embedded there, without it being a give-and-take relationship — actual partnership building. There have been situations, or quarters, where you have just worked with a team to understand what they're doing — genuinely, out of curiosity — to understand how database security works. How do databases even work? What are the challenges those teams are facing? How do you go about truly empathizing with them and getting their world, so that

it would be a little easier for you to collaborate with them and influence without authority? I think the success of a true partnership is where I walk into the room, engage with them, and then I walk out and they still talk about security — Lakshmi doesn't have to be in the room to have that security conversation. So sometimes it may look like none of the OKRs are getting done. That's also a problem with how we define OKRs — a whole topic for another talk, I guess. So I think it comes back to: how do you treat these qualitative metrics? How do you think about

relationship building as an organization? Can you support that kind of investment from your security engineers, where they're actually building a partnership that leads to long-term platform investments, so secure by default can succeed? Because it's a two-way street: without understanding the platform, you cannot get secure defaults in there. For them to understand security, it has to go both ways — you also have to invest in understanding what they're working on and how the product works. That makes total sense. If you're embedded in an organization, right? I always go back to my lived experience over the last decade, and most

of the time when I walk into these organizations, security is a compliance requirement — it's not built into the product from day one. So that's where I'm trying to understand the best way to put that foot forward when it's not a daily relationship. And if anyone has any great ideas, let me know what that actually is — when you are trying to build those relationships very quickly in order to get them to trust you. I've seen some success in walking in and, instead of saying, "Hey, guess what? Your baby's ugly," coming in and saying,

"Hey, what is your biggest concern?" We talk about asking questions and being interested in what the developers are actually doing, and that has led to better results, especially year over year, where we're only engaged for a week or two every year — better results over time, and we actually see reductions in vulnerabilities because we're approaching it that way. But if anybody has any better ideas for that — maybe some vibe security or vibe coding — I would be open to it. I do want to say, I feel there has been a big shift toward engineers

making the right security choice without even knowing it's the right security choice. Say they're building a project and thinking about what language to use: there are a lot of reasons to use Rust, and they may not be choosing it because it's a memory-safe language, but they're going to choose Rust and get all of those security benefits from it. So it's a lot easier to make the security choice without all the trade-offs you used to have to think about. The other thing is, I feel like all of us have required security training for engineers, and when they hop around different companies, they kind of already know the different tools, and

they're excited to use those security tools, or they're aware of general security knowledge, because they've shifted companies so many times and had to do that mandatory training every single time. Some of those concepts start to stick, and then they find themselves thinking about security a little bit more, or accidentally making the right security choice. Yeah. Any more thoughts on that? I think it's a good segue into our next topic, our next question. This one? No, the other one. Go back. Okay. That one. All right. We had a big discussion last night about vibe coding — for those of you that came. Yeah. Woohoo. There

we go. All right, so let's talk about vibe coding. And I'm actually going to start this one off with Ken, and then we'll get everyone's take. How do AppSec teams deal with AI-generated code? Specifically — for those of you that don't know, Ken is doing an AI-first company — how do AppSec teams deal with AI-generated code that gets merged with zero human eyes on it? I would like to start by defining vibe coding, okay, and then I'm going to hand it over to you all. My opinion — I say it's my opinion; it's really Wikipedia's

opinion, or the internet's — which states that it's AI-driven development, in the sense of "let me prompt it and guide it to build some of the code I need," sort of like a template generator, with the understanding — and I know this is not part of the definition; this is my own opinion — that there is review by someone with knowledge, whether that's a senior engineer or whoever is the approving authority on those code changes. I will say, I don't feel like it is that. And I'm sure there are going to be people who

don't agree with this, but I think it's a net benefit, a net positive. But going back to the definition of what vibe coding is — I'll give you the alternative argument: that vibe coding is purely people who don't know how to code, who wouldn't be competent in a review of the code being generated by AI, just kind of building their apps and shoveling code up into production, or trying to anyway. So, what are your thoughts — let me start with Ariel — on vibe coding, the definition first, and then the question, which we can restate. Okay, so vibe coding for me is really when you

— like low-code/no-code solutions, once again — are relying on GenAI to help produce your code. My biggest concern about this is that at times there can be a lack of critical thinking and understanding of the trade-offs. I think this goes back to secure by default, where you don't have to really understand why you're doing something: you just run it, and that means you can also introduce a lot more vulnerabilities. So there are a lot of concerns, but that's true of anyone submitting any unreviewed code, right? I don't think it's something new; it's just the new shiny thing that has come up

and has gotten a lot of attention, but the problems are still the same, and we just go back to basics: building our foundations, adding those guardrails. And I want to highlight that — you talk about secure defaults and the basics that are in there. I almost feel like vibe coding and the problems we see with it are a maturity issue: organizational maturity and also model maturity. If the model is trained on good code, as opposed to Stack Overflow, it's probably going to generate better results from a security perspective — better defaults. So if we can build a low-code/no-code

vibe-coding solution that is trained properly, then yeah, of course I'm going to have my engineers use that. That's what I foresee as the future. But anyway — would you have your engineers do this? Slack me. I don't know. I mean, I'm very cynical, so yes, I would have them do that, but I wouldn't trust it completely either. So that would just be, like, vibe double-check coding, I guess. But the biggest thing that's changed from what it was before to what it's going to be now is the scale, right? The scale

at which it happens — the scale and accessibility of LLMs means you can just grab code and copy-paste it anywhere. So the scale is one of the things I'm worried about. While the problem is foundational, the scale is something we as an industry need to figure out: how can we actually have the right guardrails there? Yeah, the sheer volume of code being generated. I mean, it's AI code slop at some point, right? But it's coming, so we've got to have a strategy to deal with it. Sorry, I'm taking over from Ken here, but what I'm hearing

is secure guardrails, secure by default, and also doing away with this idea that there are no human eyes on it. Yeah. Can I just add — I want to say this too. We had a moment where it was like, is this me being optimistic about vibe coding being an okay thing long-term? And I think what it comes down to for me is that we're already talking about ways to improve security. Of course, out of necessity, there are going to be ways to improve the functional requirements and to make sure there

isn't downtime. And my whole point is that there's going to be a forcing function for that, because if my website can't stay up — if I am the engineer who's submitting all the security and functional bugs because I'm using pure vibe coding with no knowledge of what I'm doing — I'm not going to have a job, and I'm not going to have a business. So that's your forcing function to correct these things. And again, I'm optimistic about the fact that we're already talking about secure guardrails around these systems. However, I'm very much a realist; I understand that in the meantime there are going to be a lot of

problems introduced — there's no doubt about that. What if we get more vibe security programmers? Yes, exactly — vibe SecDevOps. We should have it in the DevOps program too, right? In the job description, sometimes. Yeah, vibe coder. So along those lines, though, I do feel like we are going to see breaches related to this. I mean, we had the ex-Twitter guy who built his whole SaaS and then it got compromised — even if it was just a troll, whatever it was. It's going to lead to breaches. But we have this with every new

technology as it pops up, right? Think about cloud when it first started, and nobody secured their S3 buckets, and what happened there, or the Mastercard breaches and other things that actually introduced vulnerabilities. But it always goes back to those secure defaults. If you listen to the podcast, you know I love the Crocs and socks. Yeah — there we go. The basics of security are what we fail at. And if we don't introduce those same concepts into AI, into vibe coding, or into anything MCP — we're not necessarily going to talk about MCP servers right now — it

all goes back to those basics. We don't want to talk about them because they're not very shiny, and they're not super interesting from an attack perspective, but that's where the flaws happen, that's where we lose data, and that's how organizations get attacked. So along those lines, we're going to move on to breaches — unless we have a couple of questions yet? Are there any questions that have come in? We do have more, but I do want to address a couple of questions from the audience. So we can keep a little interaction going, even if it is just with a big light in

the sky. It is blinding. Yeah, I can't see the folks next to it. Okay. So, while we're looking at these questions, we'll kick it over to Ariel. Are modern breaches exposing new blind spots in our security programs, or just showing we're not doing the basics? So, we were chatting a little bit about this earlier. The previous iteration of this question was "name a specific breach," and our response was: there are so many, they've all kind of become the same. It's just a lot of

different noise, and a lot of it happens to be related to supply-chain attacks. And it really does go back to basics again. A couple of the incidents we were thinking about came down to credential stuffing, or stolen credentials — it's the front door. And it's really hard to get customers to turn on MFA; it may be easy to turn on for your employees. So it goes back to basics: ensuring that we educate customers, that we turn on logging and monitoring, that we turn on MFA where possible. A lot of it is back to basics, but I want to hear more from

Lakshmi as well. I was looking at the question — we're trying to sort these breaches. Okay, so going back a bit: what I heard was that it's the foundations. Almost all of the breaches are actually not novel vulnerabilities, right? It's mostly exposed keys, exposed secrets, or known CVEs being exploited. And yesterday there was this talk about the AI apocalypse, which was really about how everything has been the same for ages and ages — since before some of us were born, I guess. So coming back to AI and the

threat landscape changing right now: how do we actually fix this? Why are our foundations so hard? I don't have an answer, by the way — I'm looking at all of you. Why do we find it so hard to do the foundational stuff today? And we've seen shifts. Yes, the issues have been foundational, but we've seen shifts: initially it was more around cross-site scripting and credential stuffing — more on the front end, the application entry and exit points — but now it's shifted to CI/CD pipelines. What happens next with AI in the picture? Because the bar for entry is much lower. A lot of the obvious things are taken care of by even the

current LLMs, right? So really novel attacks are probably going to come up, and how are we prepared as a security industry to face those? That's one of the things that keeps me up at night. I know you all are smart and we'll figure something out, but it definitely keeps me up at night — the upcoming breaches. Don't worry, tomorrow there will be about 4,000 vendors willing to sell you their solution for this problem. My concern, though: do you think those novel attacks distract you from focusing — because that's the new shiny thing, and maybe your CISO comes to you, they heard about

it, and they say, prioritize your resources toward securing GenAI. But is that really where you should be concerned, especially as an AppSec team that is focused a lot on foundations and your relationships with developers? Yeah, that is a great question. It's just another technology, you know. But it goes back to the initial point you made on risk prioritization: unless you are in the room making a decision on what is important for a company, it's hard to change that focus. As a consultant, I can't tell you the number of chatbots we've looked at in the last year. And

I'm like, this is a chatbot for your marketing site — we really don't need to look at this. You don't need to spend this much money securing a chatbot that is just RAG. At most there's a little bit of reputational risk there, but everybody's concerned about it because it's the new shiny. Meanwhile, a breach has occurred and they're pulling everything via your API because your credentials got leaked on GitHub. That is the pattern we're dealing with, and the risk prioritization, to your point, is off. And I don't know how to change that. As

an external party, I can point out all day long where the critical risks are, but that's as an external party. Yep. I think it comes back to the question you asked about what we can do as an industry. The OWASP Top 10 and a lot of guidance has moved toward "these are the risks, these are the things" — but what is based on exploitability and real-world attacks? Are we actually informing our strategies with what's being exploited out there, or are we going with what's supposed to be best security practice? Of course, we all want to go play with the new shiny thing and explore — I mean, the

Damn Vulnerable MCP thing we were talking about as well. But really, how are we allowing exploitability to be the vector that informs our risk? We have EPSS scores now, which is definitely an improvement over what we had before. So that's where, as an industry, we could do better: defining those standards and shifting our mindset around exploitability — how has that changed our security roadmap, our security strategy? Being real, being honest with ourselves, walking away from some of the meetings with our

developers and saying, this doesn't need to be reviewed, this is not the highest risk. Have we done that as a security organization? It's a bit of a spicy take, but I think that's what builds trust — going back to the question about what builds trust. Let's go and say, "we don't add value here," walk away, and see how that changes our perception. Well, to that end, about those conversations, there's a good question here: many engineering organizations prioritize shipping code over discussing what to ship — so even being in that conversation to begin with. Should security teams be staffed to implement the platform

changes themselves? I'm going to add on to this: are there any other alternatives we should be thinking about to get into those conversations? This question kind of points at a solution of more staffing — more people whose job that specifically is. Do you think that's the answer, or is there a different approach? Curious. I'll start with you, Ariel. I think it really depends on your organization, because I've seen it happen two ways: an AppSec team will partner really closely with the already-existing platform engineering team, or the AppSec team will become an AppSec platform team and your

AppSec partnerships team. I think both can work; it really depends on how you adapt it to your org. But you do need a platform team to partner with, or to become one — that is a must in modern-day security teams. Yeah. I will say this: any time I've gone into an org to manage an AppSec program, the very first thing I did was spend time just getting to know everybody. Literally just having meetings with the key heads of teams and different departments to understand, what is your job like? Maybe bond with them a little, get to know them, and create some

endearment between our teams. It's: get to know the people, and get to know what your assets are, and you go from there. Yeah. I'm curious, how do you make sure you're in those conversations, make those connections? I think sometimes it is reactive, and I'm okay with that, to be honest — I don't think everything needs to be proactive. Sometimes it's more about the long-term partnerships you've built and the people you've spoken to. And especially with AI right now, tasks can be done and documents with your thoughts

can be written more quickly, I guess. So investing in those partnerships beforehand, when there's no return, has helped, as has showing value. I go back to my point: I've been to some threat-modeling sessions where, when I walk out, I ask, did I really find something critical or high that actually changed something for them? And if I didn't, did I really have to be there? So that's one of the things I have gotten more intentional about, which helps them invite me — because I'm not a troublemaker at that point, I'm an adviser. How can I shift their mindset away from: security just comes

and makes us do 15 tasks — because we do make them do it, often without understanding the estimation of how long it's going to take. How can I shift that to being a trusted adviser who can say "nothing needs to be done" and walk away — an internal consultant at that point, just there to help? Basically just there to help, yeah. Well, okay — all three of us have basically dealt with what I would call shining examples of security orgs. Seth, on the other hand, has consulted with just about every org in different shapes and sizes. So I'm

curious, do you see a lot of friction or a lot of or or just a lack of engagement between security teams and their engineers on the whole or what's your experience there in terms of Yeah, I I mean it it goes back to that discussion that we had on kind of secure defaults and what's actually be being deployed in the maturity of an organization. Um it's it's fairly obvious when security and development does not get along. Um because the vulnerabilities that we find as a third party and we come back the next year, they're still there. Yep. They just don't get they don't get addressed. Um and that's one of the hurdles that we end up working with

security teams on and it usually comes from either a lack of integration, a lack of discussions, a lack of relationships between those teams. Exactly what you're saying. Um, and the developers, their first interaction with security is typically negative, right? It's the, hey, here's a list of 25 hundred vulnerabilities that came out of dependabot that we need to go fix or dependency check or whatever it is or sneak or like I I which vendor do you want me to call out? Right? Like all of them. All of them, right? Um but instead of actually providing some value all they've all the security team has done has added stress and complexity to a developer's life. Um and you know if

they had taken a different approach, they would have actually curated that list. They would have prioritized, looked at exploitability. That's typically where we will come in and help repair that relationship, or at least, you know, give them something that is more palatable to the developers. Um, so, but I would still say, right, like, probably 75% of the organizations I walk into struggle with that. Um, and I don't know, is that something that you were still struggling with, even in these large organizations, from a cultural perspective? Yeah, I think I spoke a lot about how we can help. There's probably another element of how can we surface

security as a part of an application's health, just like resiliency and scalability, right? So I think there's also a part of, like, accountability that we need to build into our engineering partners, and how can we build that, right? Like, I think we speak about feedback a lot of times. We speak about how, in every relationship, there's the part where, you know, you build trust, and trust also happens by sharing genuine, honest, clear feedback, right? So I think having those conversations, uh, to make it a stronger partnership versus more of a customer-service kind of mindset, I think, is one of the things that is really important. So, we still face

that, but I mean, these are some ways that we are trying to, um, make it more productive. Yeah. Oh, and I feel like there's just so much fatigue that engineers face, where we constantly report vulnerabilities to them and we always tell them the sky is going to fall, this incident is going to cause X amount of damage and we're not going to be able to recover from this. Um, when I was a consultant, one time, uh, I had done a pentest one year and did a pentest the second year for the exact same team. Uh, and they forgot to mute themselves as they complained about how I had just reported the same exact results, and that they actually wanted to

see more critical things, because they want to focus on what they think is the number one most important. They're like, "Ah, that medium vulnerability issue, like, we've heard about it for years now and it didn't really cause anything bad for us yet." And that's why I feel like, when we have just a large volume of findings, that human in the loop is still really necessary. And that's why you still see AppSec teams move towards the partnership model, or try to reinvigorate their partnership models, because you need the humans in the loop that have those relationships, that can create that meaningful change. You can get a critical vulnerability fixed really easily, but that medium vulnerability that's persisted for years, no one's

going to look at, especially if it impacts multiple teams. You need to partner with your platform team in order to create, like, a central auth framework that they can use, instead of just telling them one by one, please fix this vulnerability, please fix this vulnerability, and trying to figure out a better way. With all these AI-laden tools, it's like, oh, we have XYZ features; that doesn't really matter. You need that human in the loop to really convince your engineers that this is something that we can work on together, and we'll coordinate those efforts to create meaningful change. I do want to ask a question, um, because it made me laugh when I wrote it, or read

it. Uh, discuss why security training in companies is a waste of time. Thank you, whoever that was. I disagree. I think trainings are just bad. Like, I've done the trainings where we just talk about general findings and talk about general protections, but, uh, the realism is missing, that tangibility is missing. I feel like bug bounty findings are widely underutilized, and that, uh, once again it's like the sky is falling, but if you provide them, like, here's a vulnerability we paid $1,000 for, and this is how the attacker was able to use this low or medium vulnerability to chain it

together, that's actually really, really impactful to engineers. And if you surface that early, up front, and you frequently change your slide deck to have those recent vulnerabilities, I feel like that matters a lot more. And then you get them into using, like, developer tools or Burp Suite, so that they actively engage with the vulnerabilities. It becomes a lot more real. Um, we did that at GitHub, actually. It's funny you say that; I totally forgot. Uh, we put together training presentations where that's what we did. We'd go through the critical, high-risk bug bounty reports and actually walk through a technical deep dive of what the root cause of it was, and then the

remediation steps, and then, not only that, but who all was involved, with praise for the incident response. And that was way more engaging and way more interesting. And the first one, we had, like, you know, these okay-ish numbers, not great. And then as time went on, every time we did it, the numbers grew and grew and grew. So, I mean, I would say I agree, because my experience was really positive doing it that way. Well, and I mean, as somebody that gives training, right, like, you know, I'm going to say no, it's still good, right? Um, but we want that engagement, right? Um, the best trainings that we end up

doing are the ones where we've done a test of the application, we have findings, or we've seen the bug bounty findings, and we're integrating that in. Exactly what you're saying, right? It has to be relevant to their day-to-day. If you're just going and looking at Juice Shop or, you know, one of the other vulnerable apps without some sort of context, and they are Rust developers, guess what? They're probably not getting that much out of it. Um, yes, they can go and exploit XSS after they get done, but it doesn't necessarily change their day-to-day. And it's all about that context, right? Hey, if I can use this the next day, I'm going to go use this.

Or I can go find, God forbid, a SQL injection in my application right after I do this training, because I now know what those payloads look like. It's going to be more relevant to those developers. What are your thoughts? Yeah, I was listening to all of you, and plus one to context-based, real training, because security vulnerabilities being real, that is a real problem, right? So I've seen more engagement when we've spoken about breaches or vulnerabilities that have happened within Netflix, for us. As you all were speaking, I think one of the things I always think about when I think about developer education is, why are we trying to educate developers?

What are we trying to educate them about? Are you trying to build a security-first culture? Is it more of a cultural shift where they care about security? Is it a gap that your tools can't fix and you need education to fill that gap? Right? So I think that's just one of the shifts that I think about: if we go in with an objective which is very clear to us, just like any meeting, like, I mean, this panel, for instance, what should be the audience takeaways, right? So I think that intentionality, definitely adding on to what you all said, breeds more interest in security and also actually gives us

reasonably important outcomes. Um, one of the things that I've been thinking about is, how can we reach them where they are? Do they have to actually come out to a presentation and, like, you know, listen to this? I don't know if this is possible, and it's still something we are toying with: can we have those security vulnerabilities, or whatever we find through our threat modeling, be there close to where they code, where they write a line of code? I mean, it's a pipe dream at this point, but can that just show up, something around, hey, if you do this, here are some of the consequences? Not to scare them, but just a little bit around

like, we found a similar vulnerability with a similar structure of code. I think that would be more useful, would stick with people a little more, and would help us, right, as a security industry, raise the bar. Yeah, I think with AI it's definitely possible. Exactly. Yeah, I was going to say, another training trend that I've started to see as well is teaching engineers to use AI securely. So if you're going to use Copilot, how do you question it so that you're using it securely? And I'd love to see more trainings there, because oftentimes when we do threat models, we ask the engineers to come up with a

couple of threats, and I have a feeling nowadays that they're just using ChatGPT to generate this, which is great. So, like, how do they take it a step further? Because now they have all of this assistance, and so how do we make sure that they're using ChatGPT to do a great threat model, and that they actually understand the artifacts that they produce? And then our role is just to advise them in a much smaller capacity, because it's really hard to threat model from a blank page of paper. If they can get the output and they just need to figure out what's right and what's not right, that's something that we can easily train and help with. You

both make such an interesting point, and I think if you combine the two, a concrete example of how to implement sort of what you're talking about is, like, you know, Cursor, Windsurf, they have these concepts of memories where you can store bits of information, so that when they prompt it to do vibe coding, uh, it references that information, right, to change the way it behaves and tune it specifically for that application. But in the sense of what you're saying, I could see there being a world in which security is putting, like, um, threat-modeling-style prompts into those memories, so that when they run through vibe coding, that's sort of baked into

their workflow. So I don't know, just the possibilities out there feel endless at this point, but that's just one example of, you know, I think we can achieve this. We can achieve this; come back in one year and we're going to see where that's actually at. I did want to point out, are we at the five-minute mark? Yep. Yep. Okay. All right. Yeah. So we're at the five-minute mark. So we've got one final question for the panelists, and then if you would like to talk to us further, we will be around, um, we'll be up in City View, or we'll, you know, make

some contacts there. So the final question is a prediction for the next year in AppSec, or in broader security, one year from now. What AppSec trend do you think we will all be sick of hearing about, and what trend do you wish would take off? Um, and I will start. Who wants to go first on that? Yep. Lakshmi. There we go. Yeah, you got volunteered. It's easy to say what I don't want to see. I don't want to see "powered by AI" on every single thing that is actually not powered by AI. Um, so that's probably something off the list. But my prediction is, I think for next year we're going to see a little more

investment in secure by default again, uh, platform engineering, but more for AI. It's not a surprise, this is not a spicy take, but I think that's an obvious next step that we as an industry might take. So, next year at BSides, maybe we'll see a lot more about, like, defending, uh, vibe coding, um, and have some cool glasses for it, probably like white shades and stuff. But, uh, I think that's probably what I would predict for next year. Okay. Yeah, that makes sense. Yeah. So, I'm terrible at predicting things. So I did this aspirationally, what I'd like to see; I have very low confidence in my predictions. Um, but something I'd like to see less of is the type of security

champions model where you just put a lot of burden on engineers, where you say, you are now the new security proxy, you are definitely trained to do all of this, and now everyone's going to go to you for questions. Um, I think that just puts more burden on the engineer, and as AppSec practitioners, we always strive to reduce the security burden, and those models don't do that. So I'd love to see models where we ask engineers to do less, and the small things that they focus on provide a lot of value. Um, and the trend I'd like to see is just a lot more frameworks or discussions around how we prioritize and score risks. Um,

I'd love to see different companies come up with ways in which they use their bug bounty data in order to decide what their risk register should, uh, prioritize first. Ken? I don't know. The CVE ecosystem's a bit of a tough one right now. Where's Jerry? Is Jerry here? There we go. Yeah. Well, you know, I'll leave that to Jerry to discuss. But I mean, yeah, there's the AI stuff, but I do think we still have our fundamental challenges, just like, because you talked a lot about standards and scoring of risk, right? I mean, so did you. I mean, it's a problem, obviously. Um, I also think,

like, with better CVE-style data, um, I think that we'll see maybe, perhaps, better, uh, reachability analysis. I know that's kind of a buzzword and I know some people cringe when they hear that, but it is useful to know, like, hey, is this library I'm including actually calling something? So I think enriching that data, making it better. I know, uh, again, I'll leave it to Jerry to talk about some of the improvements and enhancements that could be put out there, but I think that's one. The other thing, too, I think everybody's going to be inundated with a lot of tools. I think this next

year you're going to see an explosion of tool sets and companies sort of coming out to solve some specific challenge. And I think that chart we all look at, that's huge with all these different vendors, is just going to get even bigger. So that's mine. What's your prediction? So, um, I mean, we're all going to be sick of hearing about AI tools, obviously, right? Um, but specific to AppSec, um, I think the SCA hype, all of the SCA vulnerabilities, and everybody building SCA, software composition analysis, into their tools. Um, while it's important from a vulnerability perspective and exploitability, right, like, you know, um, the priority on that is

one that I think we're just going to get sick of hearing about, right? Even more. Um, and the trend that I wish would take off is actually the sharing of prompts for AIs and LLMs. Um, because it feels like everybody thinks that it's this, um, magic sauce, right? Like, oh, my prompt is the best, and so therefore we have to keep that secret. And yet, like, you know, any college student with 10 minutes can go in and write a prompt that does just as well or better. And I feel like we're missing this opportunity to actually build those together, to really make it effective. Um, and that's,

I mean, we've been sharing them back and forth; there's been a lot of that in our Slack channels and other places, GitHub instances. Um, so if you're looking to do that, you know, please jump in and have a conversation. And I think with that, we are out of time, so we will take other questions, you know. Or do we have time for questions now? Is that a no? We are getting kicked out. Thank you so much for coming, for doing this. We really appreciate it. Thank you very much. Appreciate that. Thank you for the great conversation. Yeah.