
Hello, everyone. I'm Barrett, a staff product security and AI engineer at Adobe. A quick show of hands: how many of you have used an AI tool, ChatGPT, Copilot, anything like that, at work without your security team blessing it? Right, that many of you. That's exactly what I want to talk about. This talk isn't about which model or which tool is best. It's about how well-meaning security teams keep failing to adopt AI securely, and what we can actually do about it.
A quick disclaimer: this talk reflects my personal views and public research, not Adobe's. Please don't interpret anything here as an endorsement or official statement by Adobe, and if I reference any tool by name, it's to illustrate a real risk, not to promote or criticize that tool.

Let me paint a picture; you've probably lived with this. Tell me if any of these characters feels familiar. You have the analyst who discovered a new AI tool that cuts triage time from 40 minutes to 4 minutes, so they use it every day, copying live incident data into it. They're not trying to create a problem; they're trying to do their job well. And they're not alone. You have the most senior engineer in your organization, who has blocked every AI pilot for the last 18 months. To them, every new AI tool is a liability, and the default answer is no. Because they're senior, they have the credibility with leadership to make that stick. You have leadership on the other side, who promised the board an AI roadmap for the next quarter; the pressure for visible wins is very real, and it rolls downhill fast. And you have the legal team, who discovered during a routine audit that ChatGPT or some other new AI tool is already in use. Nobody told them, and the meeting that follows is going to be unpleasant. Every one of these personas is acting rationally; nobody in this picture is wrong. But together they have created shadow IT, stalled initiatives, and real compliance
exposure. The missing piece is cultural alignment, and that's what we're going to try to address. I want to walk through six AI failure patterns. Before I do, one stat frames everything: ISACA's 2025 European research found that 83% of IT organizations are already using AI at work, while only 31% have a formal AI policy. That's a 52-point gap of unsanctioned AI activity with no guardrails inside enterprises.

Start with shadow AI. Your best people are the most likely to find workarounds with whatever new AI tools they can get, and as an organization you have zero visibility into what data is leaving. Most dangerous of all, because it's invisible, it's happening at scale. Then there's the productivity mirage. Someone saves 10 hours a week using AI, but those 10 hours vanish because nobody told them where to reinvest the time. And in engineering, AI doesn't eliminate bottlenecks; it moves them upstream. Developers ship faster, the bottleneck moves to the PR queue, the queue backs up, individual gains don't show up as team output, and net throughput stays the same. Sometimes you hit the senior engineer veto: the skeptics we talked about. Their concerns might be legitimate, but that shouldn't become a blanket ban on everything in the org; it should be converted into safety requirements. Skepticism is an asset. Obstruction isn't.
You also have middle managers caught between "show AI wins this quarter" from above and "we don't have the right tools" from below, so they set expectations wrong, and that breaks the individual contributors working under them. Then there's the compliance scramble: legal gets looped in after an incident, not before deployment. It's avoidable, but it's the common situation in enterprises; legal is brought in after the fact. And finally, the trust reversion: a lot of organizations start with high confidence, make one wrong call, and go from enthusiastic adoption to near zero within months. Trust, once broken, is very hard to rebuild.
Let me talk about this from a developer perspective, because the numbers at the top are real. Netskope found that data sent to generative AI apps grew 30% in a single year, and that's going to keep growing. Cyberhaven's telemetry report says 27% of corporate data entered into AI tools is sensitive, and 98% of it comes in through personal accounts. That's scary: people use personal AI tools at work, and most of this data entry happens through copy-paste, not file uploads. That's one of the biggest DLP strategy gaps. Traditional DLP relies on file classification and tagging, and copy-paste bypasses all of it. If you're in ChatGPT, the tendency is to paste text rather than upload a file, because an upload might get caught by your DLP or endpoint monitoring. The two screenshots show tools like Cursor and Claude Code, the most popular coding assistants today. If somebody installs one of these under a personal account, there are hidden configuration settings developers have to be aware of: make sure data sharing is off and privacy mode is on, so you don't pass sensitive enterprise data back to the AI model providers. With an enterprise offering, an admin can configure all of this centrally, but you still have to verify it's actually configured. If you're in a big enterprise with a few admins monitoring these platforms, make sure they really configure these tools for safety.
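As a sketch of what that admin-side verification might look like in practice (the setting names and required values below are my own illustrative assumptions, not the actual settings schema of Cursor, Claude Code, or any other tool), you could lint assistant configurations against an org policy:

```python
# Hypothetical privacy-relevant settings an org might require of a
# coding assistant. Key names are illustrative, not any real schema.
REQUIRED_SETTINGS = {
    "share_training_data": False,  # never send code back for model training
    "privacy_mode": True,          # keep prompts/context enterprise-only
    "account_type": "enterprise",  # no personal accounts at work
}

def audit_assistant_config(config: dict) -> list[str]:
    """Return human-readable violations found in a config dict."""
    violations = []
    for key, required in REQUIRED_SETTINGS.items():
        actual = config.get(key)  # missing keys count as violations too
        if actual != required:
            violations.append(f"{key}: expected {required!r}, found {actual!r}")
    return violations

if __name__ == "__main__":
    risky = {"share_training_data": True, "account_type": "personal"}
    for v in audit_assistant_config(risky):
        print(v)
```

The point is that a missing setting is treated the same as a wrong one: the safe state has to be explicit, not assumed.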
Now let's talk about the CHANGE framework itself. As I mentioned, research across thousands of organizations shows that the single biggest blocker to agent readiness isn't technology at all; it's culture. This framework addresses that. A quick credit: the CHANGE acronym and structure were developed by Nofur Gasper at Superintelligent; I've adapted each pillar specifically to security teams. The six pillars map directly to the six failure patterns we just saw. Communication kills shadow AI: when people understand the why and feel safe, they stop going around the existing process. Human oversight: when AI is in the loop, you have to answer who is accountable for what it does, who approves and who denies. Attitude is how you get your skeptics, your senior leaders and senior team members, building guardrails instead of blocking. Network explains why the CIO or CEO email doesn't work, and what actually does. Governance is fast-path approval: legal shouldn't stop innovation; it should move at the speed of innovation, not compliance timelines. And Enablement means training people for the actual job ahead, not just writing prompts; a few years back everyone talked about prompt engineering, but things keep shifting and we need new ways to enable developers.

Let's start with communication. The most common failure in AI rollouts isn't technical: leadership never addresses the job security question, and leaving that unaddressed, people go around the process. The answer, I
think, is a one-page AI manifesto. There are examples in the wild: the CEOs of Shopify and Duolingo publicly stated that AI proficiency is required for the job. They didn't just nudge; they were upfront and very frank about how they're going forward. Every leadership team should have that kind of directness, so employees don't quietly undermine adoption out of self-preservation. Be honest about uncertainty, too. We often overpromise, claiming an 80% reduction we can't back up. Set the bar realistically: if you promise 80% and deliver a real 30% improvement, you still land as a failure, because the bar was set impossibly high. Define your boundaries: when people use coding assistants or ChatGPT, every developer and engineer in your org should know what data they can and cannot enter, and what needs human sign-off versus what doesn't. Put it on paper where people can read it, rather than leaving them guessing whether they can click approve; it can be part of that simple one-page manifesto. And, this might be a bit controversial, commit to job safety in writing if you can, and tell people how to reinvest their saved time. With today's coding assistants, a complex 10-hour piece of logic can become a five-minute job; send it to Opus and it works through most of the problem. When people save those 10 hours, tell them how to reinvest in deeper investigations or additional research, and document that too.

Now, human oversight. A lot of deployments stall not on technology but on one question: who is responsible when the AI gets it wrong? People fear that when AI makes a mistake, nobody knows who's accountable. If you can't answer that clearly before you deploy, you shouldn't
really deploy. If you can answer it clearly, most serious objections disappear. For agentic workflows this question becomes more urgent: OWASP calls prompt injection the top LLM risk, and the MITRE ATLAS framework has documented cases where prompt injection propagates through RAG pipelines. So consider a three-tier autonomy model for your org. You start in a sandbox state: the AI tool is used only for enrichment and classification, and it cannot take real actions. Next is the supervised state: the sandbox has earned some trust, so you let the tool perform certain automated actions, nothing complex or sensitive, and you're more comfortable but not too comfortable handing things off. You run that pilot for a 30-day period, or until the supervised state is proven. Then you reach the last tier, full autonomy. This is the point where you can genuinely call yourself 100% agent ready: you give the agent a task and it carries it through from drafting to sending, and you just run a random audit once in a while to ensure things are going the right way. Apart from that, you've handed everything off to the agent.
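A minimal sketch of how those tiers could be enforced in code (the tier names, action categories, and the mapping below are illustrative assumptions for a SOC-style workflow, not part of any standard):

```python
from enum import Enum

class Tier(Enum):
    SANDBOX = 1      # read-only: enrichment and classification only
    SUPERVISED = 2   # low-risk automated actions, with an audit trail
    AUTONOMOUS = 3   # full handoff, spot-checked by periodic audits

# Hypothetical mapping of agent actions to the minimum tier required.
MIN_TIER = {
    "enrich_alert": Tier.SANDBOX,
    "classify_incident": Tier.SANDBOX,
    "close_duplicate_ticket": Tier.SUPERVISED,
    "quarantine_host": Tier.AUTONOMOUS,  # sensitive: only with full trust
}

def is_allowed(action: str, current_tier: Tier) -> bool:
    """Allow an agent action only if the deployment has earned its tier."""
    required = MIN_TIER.get(action)
    if required is None:
        return False  # unknown actions are denied by default
    return current_tier.value >= required.value

# A sandboxed agent may enrich, but may not quarantine.
print(is_allowed("enrich_alert", Tier.SANDBOX))     # True
print(is_allowed("quarantine_host", Tier.SANDBOX))  # False
```

The deny-by-default branch is the important part: promotion between tiers is an explicit decision, never something the agent reaches by trying a new action name.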
The next pillar is attitude, and a few phrases capture it. The wrong approach is telling your organization "get on board or get out," or "the data proves AI works, so why aren't we doing this?" The engineers who push back aren't wrong; they often understand the risks better than anyone, and that care is an asset, not a problem. The conversation that actually works with the senior engineers who keep debating AI comes down to four words: "help me make this safe." We have this new AI tool we're planning to adopt; can you tell me how we make it safe? That framing is far more engaging, and it gets them asking the right questions: what data might this touch, what's the rollback strategy. You convert their objections into safety requirements, and your loudest critic becomes your most credible safety advocate. Work with them, not around them: even when somebody says no, bring them on board so they take responsibility and help design the safety requirements for the platform.
Next is network. There was research on a well-funded AI initiative sponsored by the executives, complete with a glowing CEO email, and the whole playbook died within a few months. Meanwhile, a team with no executive sponsorship at all hit 70% adoption in just six or seven weeks. The difference was one credible practitioner showing peers what worked and what didn't. People don't change behavior because of a company-wide email; they change when somebody they trust shows them something genuinely useful and practical. Champions are the people your team actually listens to, not the org chart. Build champions around these capabilities; they're the ones who say "I tried this, here's where it worked." Finding those people is critical, and make sure they get recognized in team meetings for what they contribute. The last part is the builders who make it safe to run AI in production. They're the rail builders: they build the integration patterns and data handling rules. We build
a lot of internal security guardrails to distribute to developers, so part of our job is making sure those get widely adopted, and we need that community to be strong. The builders might be your former skeptics, but they know what the best guardrails for the developer community look like.

Now governance, which every organization struggles with. It can be damaging in two opposite ways. In one state there is no governance at all: everything goes to production without legal or privacy even reviewing it. The other damaging extreme is a six-month approval process for any new tool. Both hurt the org. Consider a 48-hour sandbox approval, or whatever short window works for you: when something goes to legal or privacy, it should move fast. Governance shouldn't stop your innovation, because right now speed is the point; your governance should wrap around fast innovation. Fast approval with conditions, not blanket bans, makes the process much stronger: people can still trust the governance process while the org continues to innovate.
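To make "fast approval with conditions" concrete, here's a small sketch (the 48-hour window comes from the talk; the record fields and condition wording are illustrative assumptions, not any real approval system):

```python
from datetime import datetime, timedelta

def grant_sandbox_approval(tool: str, requested_at: datetime) -> dict:
    """Grant a time-boxed sandbox approval with explicit conditions,
    instead of an open-ended yes or a blanket no."""
    return {
        "tool": tool,
        "expires_at": requested_at + timedelta(hours=48),  # short window
        "conditions": [
            "no customer or live incident data",
            "enterprise account only, privacy mode on",
            "usage logged for legal/privacy review",
        ],
    }

approval = grant_sandbox_approval("new-triage-assistant",
                                  datetime(2025, 1, 6, 9, 0))
print(approval["expires_at"])  # 2025-01-08 09:00:00
```

The design choice is that the approval expires on its own: the fast path stays fast because nobody has to remember to revoke it, and a longer-lived approval has to be re-requested once the conditions are proven out.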
The last pillar is enablement. Most AI training today stops at prompting. A year ago prompting felt like a craft, and developers obsessed over how to prompt better; now it's table stakes. The real shift is that your team members need to become agent managers. That's the real skill set, almost nobody is formally training for it, and it's starting to boom. Critically, your team needs protected time to build that muscle; if people are buried in their existing workload while trying to squeeze AI projects in on the side, it won't help downstream. Things are changing constantly as everyone moves into this agentic world, so think in three steps: supervise, escalate, validate. Supervise means recognizing when AI output is wrong, hallucinating, or drifting. That's the agent manager's job: when the agent runs, you should know what it does and what it's not supposed to do, look at the log traces, and understand how to tune the agent better.
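The supervise/escalate/validate loop can be sketched as a simple gate on agent output (the confidence threshold, field names, and sensitive-action list here are invented for illustration; real signals would come from your own telemetry):

```python
# Hypothetical escalation gate: decide whether an agent's verdict may
# proceed automatically or must trigger a human sign-off.
SENSITIVE_ACTIONS = {"disable_account", "block_ip_range"}
CONFIDENCE_FLOOR = 0.85  # illustrative threshold, not a standard value

def needs_human_signoff(verdict: dict) -> bool:
    """Escalate on low confidence, sensitive actions, or unverifiable output."""
    if verdict.get("confidence", 0.0) < CONFIDENCE_FLOOR:
        return True          # model is unsure: supervise, don't automate
    if verdict.get("action") in SENSITIVE_ACTIONS:
        return True          # high blast radius: always escalate
    if not verdict.get("cited_sources"):
        return True          # nothing to validate against: possible drift
    return False

print(needs_human_signoff({"confidence": 0.95, "action": "tag_alert",
                           "cited_sources": ["ticket-123"]}))  # False
print(needs_human_signoff({"confidence": 0.95, "action": "disable_account",
                           "cited_sources": ["ticket-456"]}))  # True
```

Note the third check: output with no citations is escalated even at high confidence, which is exactly the hallucination-and-drift case an agent manager is supervising for.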
Your team must know what good actually looks like: the normal flow of the process, whether the agent is doing the right thing at the right point, exactly which signals mean you escalate, and when something needs a human sign-off, the workflow must actually trigger that sign-off. The last point: when AI saves your team time, use it for deeper investigations, threat hunting, and skill growth, not headcount justification. That reinvestment should be synonymous with being an agent manager; people have to adapt to a different skill level.

We're coming to the end, so here's what I want you to take away. First, culture beats technology: diagnose the human dynamics before buying any AI tool. Second, deploy an AI manifesto; it can be a single page. Address the job security fears and spell out where the saved time should go; if you can address those clearly, it makes a big difference. Third, use the three-tier autonomy model: don't start in production. You can't drop agents straight into production; promotion is earned, from sandbox to supervised to fully agentic. Fourth, turn your skeptics: if your org has people who disagree, make them your safety requirements guards. They become your safety instinct, the best people to reach out to every time. Fifth, build champions and networks; top-down mandates don't really work in a modern organization. You need a community-driven, builder-driven movement. Sixth, make your governance faster: if your legal or privacy process is time-consuming, build something practical for this innovative world, a fast-path approval with conditions, and layer on safety requirements from there. And like I said, prompt engineering as a concept is long gone; now we have to get ready for the world of agent managers. Public articles are really starting to talk about this. Get this right, and the technology will deliver more than it ever has.

That's about it. I'm a little over time, so I'll take questions offline. Thanks for your time, and thanks for attending.