
Vladimir Ožura | The Art of Infiltration: Leveraging Trusted Relationships

BSides Zagreb · 45:14 · 130 views · Published 2025-03 · Watch on YouTube ↗
About this talk
Presentation: This talk examines the recent activities of Hazel Sandstorm, an actor targeting government, telecommunications, and IT organizations in the Middle East. The group employs sophisticated tactics, such as impersonating legitimate IT providers, to deceive users into opening malicious files or connecting to actor-controlled infrastructure. These deceptive strategies enable them to infiltrate organizations and gain unauthorized access to sensitive information. Once initial access is achieved, Hazel Sandstorm leverages custom backdoors and scripts deployed on compromised devices. These tools allow the group to maintain persistence within the target’s network, making it difficult for security teams to detect and eradicate their presence. Additionally, the threat actors utilize techniques to harvest credentials, enabling them to escalate privileges and move laterally across the network. We’ll highlight how threat intelligence efforts played a crucial role in uncovering the source of initial access, shedding light on Hazel Sandstorm’s methodologies. Through careful analysis and collaboration, researchers were able to piece together the attack chain, following the progression from initial compromise to persistence, credential access, and command and control. Understanding these activities provides valuable insights into the evolving landscape of cyber threats and underscores the importance of robust cybersecurity measures. By studying the tactics, techniques, and procedures employed by groups like Hazel Sandstorm, organizations can better prepare to defend against similar attacks and enhance their overall security posture. In summary, this talk will not only delve into the sophisticated methods used by Hazel Sandstorm but also emphasize the critical role of threat intelligence in identifying and mitigating advanced persistent threats. 
This knowledge equips defenders with the necessary tools and strategies to protect their networks from increasingly complex cyber adversaries. Speaker: Vladimir has over a decade of experience in cyber security and is currently a Principal Security Researcher with DART, where he has worked for the past three years. He leads global incident response efforts and conducts data analysis to uncover attack narratives. As a lead investigator, he delivers both findings and recommendations aimed at improving the security posture of various environments. Vladimir holds industry leading certifications such as GCFE, GCDA, and GSOM. In his free time, he enjoys hiking, travelling and spending time with his family. Recorded at BSidesZagreb (https://www.bsideszagreb.com/). #cybersecurity #bsides
Transcript [en]

Everyone, thank you so much for joining and being here in person. It's nice to see a lot of people here, a lot of you that I already know, so that's very good. Glad to be here. I have a lot of slides to go through, so we'll dive straight in. I'll be talking about an incident I worked last year, where we were investigating a nation-state threat actor in an environment. It's not going to be your typical presentation where I say, okay, this is the beginning, this is how they got in, this is the middle, this is the end, giving you the story from start to finish. It will follow the way we actually investigated it and how we found things throughout the engagement.

A bit about me, if you don't know me: my name is Vladimir Ožura. I have 18 years of experience in IT, the last 10 in security. I'm currently employed as a Principal Security Researcher in the Detection and Response Team (DART) at Microsoft, doing investigation leads, basically leading investigations within Microsoft DART, plus threat hunting, and I do hold a couple of certifications as well.

As it usually goes, when there's an incident you basically just dive in: you're pulled into a call with the customer, and the customer starts telling you, oh, this is what we found, this is what we have. It was no different here, so let's dive into it. When we started the engagement, the customer told us they had received a notification from their SOC about malicious traffic coming from a SQL server. They already had a third party engaged, another incident response firm, for two weeks, and that firm had already done an investigation and found a lot of good things.

One of the things they found was that highly privileged accounts were used for RDP, basically lateral movement over RDP. There was ngrok tunneling software used from a SQL server, hence the notification the SOC got. The third thing they found was web shells on internet-exposed web servers, and this was believed to be the initial access method, which, as you'll see throughout the presentation, it obviously wasn't. The fourth thing: two DLLs were found on two domain controllers, registered to

capture password changes. This is how the threat actor was doing credential theft. And fifth, they were attributing this attack to APT34. So those are the five things we were given. The good thing is the customer had an EDR solution deployed with six months of retention, which is very good; it's not very often that we see customers with EDR solutions. When we enter an engagement it's usually without EDR, a very bad security posture, and all of that. The environment size was not that big. The first thing we usually do when we start an engagement is deploy tools. We have some proprietary tools, as you can see on the left-hand side: data collection tools and point-in-time deep scan tools, developed by us.

Basically we have three tools in play. Fenic is the first one, which is widely deployed to all Windows machines; we have the possibility of deploying it to Linux and macOS as well. We deploy it widely because we want wide-scale threat hunting availability. This tool collects very specific registry keys, files, or, for example, event logs; it will not collect everything, so it's very surgical in that sense. If we find a device of interest after we start investigating with that data, then we

execute Fox, which is another tool that will collect the full registry hives, the full event logs, MFT, USN journal, and all of that. The last one is Arctic, which is basically an Active Directory tool that collects all the objects within Active Directory: users, computers, groups, Group Policy Objects, and so on. If a customer doesn't have an EDR solution, we would deploy one, Microsoft Defender for Endpoint in this case, or Defender for Identity, so that from that point on we have continuous monitoring of any activity happening within the environment.

Next: since the customer told us this was likely APT34, and we at Microsoft don't track it under that name, we use our own naming convention. The taxonomy is aligned with a weather theme, so we have Blizzards and Tempests and Sleets and Sandstorms and so on. The one we were looking for falls into the Sandstorm family: APT34 is known as Hazel Sandstorm within Microsoft. They are based out of Iran; it's the Ministry of Intelligence and Security. They basically target anybody in the government sector or with ties to government, telecommunications, or IT,

targeting organizations within the Middle East. The way they get in, at least from what we had at the time, was basically social engineering; that was it. So we found out a little about what APT34 is, gained some understanding of it, and then started diving into the data as it poured in from Fenic and all the other tools we had deployed. We'll go straight into the findings. The first thing we found wasn't a difficult one: web shells. The threat actor created this errors.aspx file, which you

can see in the bottom-left corner (I'm not sure the pointer is working). That file was created and then used by the threat actor, but that web shell only had the capability of writing files to disk. So the actor used it to drop one other file, this ghost.inc, and to modify the signoff.aspx file, which was already present on the system. The signoff.aspx file is at the bottom; it basically says, if you have a header, X-CheckMe, equal to "ghost", then it will include that

ghost.inc, which is the web shell, and that web shell had more capability in terms of command execution, file upload, and file download. So, perfect; not a very big deal, a very easy find, plus the customer had already told us about it. This was found on both WEB01 and WEB02. Continuing on: after the threat actor modified signoff.aspx, they used it to drop additional files onto the system. There were a lot of PowerShell scripts; the one that really stands out is the ngrok one, ng.ps1, which was basically a PowerShell script to download ngrok. The

other ones that stand out: yes, port scanners and all that, but there was also a scheduled task being created; we didn't see anything come out of that one. And there was a kernel elevation-of-privilege vulnerability exploit; we did not see any signs of that happening in the environment either. ngrok, for those who don't know, is a tool that creates a tunnel to your localhost so you don't have to expose it to the internet. It's got an agent that connects out to the ngrok edge IP addresses and then basically allows you to

expose the server to the internet that way. The question we had after this was: how did that first web shell, the errors.aspx file, end up on the system? We just saw the file creation; that was it, nothing else, nothing before it, no logins, nothing along those lines. That was still a mystery. So we said, okay, fine, let's leave it at that, let's move on and see what else we have, and we started looking into that ngrok tunnel. The ngrok tunnel was found only on SQL01. It was created through WEB01 and WEB02, because they were executing those PowerShell commands that

were downloading ngrok and then targeting various systems throughout the environment. The only system this actually worked on was SQL01, because it had access to the internet; all the other boxes they tried didn't have any access. After that, we saw there were RDP connections to the SQL box coming through the ngrok tunnel. That's very easy to identify; in the top-right corner you can see the detection mechanism for it, how you can figure that out. But basically, for the lateral movement piece, that started once they got access to SQL01.
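As a rough illustration of the detection just mentioned (RDP sessions whose source address belongs to a tunnel provider's edge), here is a minimal Python sketch. The edge ranges below are placeholders, not ngrok's actual published ranges, and the event dictionary shape is an assumption, not a real EDR schema:

```python
from ipaddress import ip_address, ip_network

# Placeholder networks only; ngrok publishes its current edge ranges,
# and a real hunt would load that list instead of hardcoding it.
TUNNEL_EDGE_RANGES = [ip_network("3.134.39.0/24"), ip_network("18.158.249.0/24")]

def flag_rdp_from_tunnel(logon_events):
    """Return RDP logons (Windows LogonType 10) whose source IP falls
    inside a known tunnel-provider edge range."""
    flagged = []
    for ev in logon_events:
        if ev.get("LogonType") != 10:
            continue
        try:
            src = ip_address(ev["SourceIp"])
        except (KeyError, ValueError):
            continue  # skip malformed or sourceless events
        if any(src in net for net in TUNNEL_EDGE_RANGES):
            flagged.append(ev)
    return flagged
```

The same containment test works for any tunneling service once you substitute its published ranges.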

They used another privileged account to access not only web servers and another SQL box but also two other domain controllers. Those are the ones we were really interested in, because you want to know what's going on on your highest-privileged assets. We looked at the domain controllers and found the two DLL files that were registered. They were created in C:\Windows\System32, and soon after those files were created, we found that a notification package had been registered, and another file was created. We also look into non-standard directories, the places where you tend to see weird things in terms of

threat actors putting stuff there. In this "Microsoft update service\updates" folder under C:\ProgramData there was a file called LPD, and we asked the customer: can you collect that file, let's see what's inside it. The bottom-left corner is what we found. It looked like Base64-encoded data, and I thought, okay, that's easy, let's decode it and see what's there. You get garbage. Okay, fine. In the meantime we had the msupdate.dll and passms.dll files handed over to our reverse engineers. They had a look and came back with the analysis: it turns out it's a password filter, basically a

credential stealer. How that works, in this case: if you've got a domain with users and computers and all of that, when a user hits Ctrl+Alt+Del to change their password on their machine, the password is actually changed on the domain controller. That's what was being intercepted. The password change request hits the Local Security Authority on the domain controller, and the domain controller then has to call these password filters, which basically answer: is the password compliant with the complexity requirements and so on? That's what was registered. The password filter was

registered by the threat actor; it was intercepting those passwords and writing them into the LPD file. There was a format where they put the timestamp of when the password was changed, the username, and the password, separated by pipes, encoded in Base64. There was a key, there was a custom alphabet that was used, and it was then encoded with some sort of Vigenère cipher. After that, the algorithm was cracked; the reverse engineer was able to crack it and found eight entries in there. All the accounts were non-privileged, basically just user accounts changing passwords on their machines.
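To make that layered encoding concrete, here is a hedged Python sketch of how such a log format could be reversed: translate a shuffled Base64 alphabet back to the standard one, then undo a Vigenère-style shift. The real sample's alphabet, key, and exact cipher were not disclosed, so every specific below is illustrative:

```python
import base64
import string

STD_ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"

def custom_b64_decode(blob, custom_alphabet):
    """Decode Base64 that uses a shuffled 64-character alphabet by
    mapping it back to the standard alphabet first; '=' padding is
    outside both alphabets and passes through untouched."""
    table = str.maketrans(custom_alphabet, STD_ALPHABET)
    return base64.b64decode(blob.translate(table))

def vigenere_decrypt(text, key, alphabet):
    """Reverse a Vigenere-style shift over the given alphabet;
    characters outside the alphabet (e.g. '|' field separators)
    are left as-is."""
    out = []
    for i, ch in enumerate(text):
        if ch in alphabet:
            shift = alphabet.index(key[i % len(key)])
            out.append(alphabet[(alphabet.index(ch) - shift) % len(alphabet)])
        else:
            out.append(ch)
    return "".join(out)
```

With the recovered key and alphabet, each decoded entry would split on "|" into timestamp, username, and password.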

So, moving on: this passms.dll file has the capability of calling the msupdate.dll file, which is a C-based DLL. Once invoked, here's what it does: msupdate.dll had hardcoded UNC paths to both WEB01 and WEB02, and hardcoded credentials in it, so it was written specifically for this customer. This is how they were shipping the LPD file from the domain controller to the web servers, so they could access it from the outside. At the bottom you can see the threat actor using the ngrok tunnel

to connect to SQL01, then connecting to DC01 and DC02 and dropping the two files. Once users started changing their passwords, the LPD file was created; once created, it was shipped off via msupdate.dll, as icon02.jpeg, to the web servers, and from outside the threat actor was able to access it and get the passwords. So how do you hunt for these kinds of notification packages? I'm not going to walk through the whole query, but the logic is there. Basically, none of these DLL files were signed, so that's a very simple one: if you're collecting DLL

file signatures, you will be able to find this very easily. There's a KQL query, because we use KQL in our hunts, with the logic behind that detection. Moving on: at this stage, the threat intelligence team informed us about a domain used by Hazel Sandstorm. I had to redact the domain because it's not publicly known, so unfortunately I cannot share it. But okay, perfect, what do we do? First we looked through our data, the Fenic data we had collected, and we found that there were DNS events, specifically invalid-domain-name DNS events, which contained that specific domain.
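The notification-package hunt described a moment ago can also be sketched outside KQL. This Python version works over mock inventory data; the allowlist of expected packages is illustrative, taken from what is commonly present on a default build, and is deliberately not exhaustive:

```python
# Illustrative allowlist: packages commonly present in the LSA
# "Notification Packages" value on a default build; not exhaustive.
EXPECTED_PACKAGES = {"scecli", "rassfm"}

def suspicious_notification_packages(registered, dll_signed):
    """Flag LSA notification packages that are unexpected or whose
    backing DLL in System32 is not Authenticode-signed.

    registered -- names read from HKLM\\SYSTEM\\CurrentControlSet\\
                  Control\\Lsa, value "Notification Packages"
    dll_signed -- mapping of package name -> True if the DLL is signed
    """
    hits = []
    for name in registered:
        if name.lower() not in EXPECTED_PACKAGES or not dll_signed.get(name, False):
            hits.append(name)
    return hits
```

In the incident described, both planted DLLs would trip this on two counts: unexpected name and missing signature.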

One thing to note: there's this binary data, and once you decode it you get to the clear-text portion, so you can actually go and see what the queried domain is. This is what we found, and within it was what looked like a workstation name inside the organization; I've put it down as WKS, again redacted. So that was it; cool. We know that workstation did something within the environment, some sort of DNS request very likely, because this was found on the domain controllers.
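Here's a small sketch of how a victim hostname like the one described (a "CC" prefix plus the computer name, beaconed under the actor's domain) could be pulled back out of DNS telemetry. The pattern is inferred from the single redacted example shown in the talk and may not generalize to other samples:

```python
import re

def victim_host_from_query(query_name, actor_domain):
    """Pull an embedded computer name out of a beacon DNS query of the
    shape CC<hostname>.<actor domain>. The 'CC' prefix is an assumption
    taken from the one sample discussed in the talk."""
    suffix = "." + actor_domain.lower()
    q = query_name.lower().rstrip(".")
    if not q.endswith(suffix):
        return None
    # Take the label directly under the actor domain.
    label = q[: -len(suffix)].split(".")[-1]
    m = re.fullmatch(r"cc([a-z0-9\-]+)", label)
    return m.group(1) if m else None
```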

The domain controllers themselves were not compromised in this case. So that at least told us: okay, let's go look at this WKS machine. But we said, let's not just focus on that one; let's look at all the other machines that have an EDR solution installed. We started searching in the EDR, and since it had six months of retention, why not go full-blown six months? We tried it: no results found, in about 10 seconds. Experience shows that if you search six months of data that probably isn't all indexed, it will take a while;

it's not going to come back with no results within 10 seconds. So we switched the logic and started searching in two-week time slots, to give it a bit more room, slowly going back in time, and finally we found it. What we found was a Visual Basic script running specifically on that WKS machine, reaching out to that domain. The script also had, as you can see here, this specific bit where it says CC, then the computer name, then the domain, which is exactly what was found in the invalid-domain-name DNS events.
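The two-week slicing approach can be sketched generically. Here `query_fn` stands in for whatever query interface your EDR exposes (an assumption, not a real API), and the 180-day total mirrors the retention in this engagement:

```python
from datetime import datetime, timedelta

def backwards_windows(end, total_days=180, window_days=14):
    """Yield (start, stop) pairs covering the last total_days, newest
    first, in window_days slices: the two-week slots used in place of
    one six-month query."""
    earliest = end - timedelta(days=total_days)
    cursor = end
    while cursor > earliest:
        start = max(cursor - timedelta(days=window_days), earliest)
        yield start, cursor
        cursor = start

def search_in_slices(query_fn, end, total_days=180, window_days=14):
    """Run query_fn(start, stop) slice by slice, most recent first,
    and return the first non-empty result set."""
    for start, stop in backwards_windows(end, total_days, window_days):
        hits = query_fn(start, stop)
        if hits:
            return hits
    return []
```

The benefit is twofold: each query stays inside the backend's comfortable range, and a failing slice fails loudly instead of silently returning nothing across the whole window.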

The script also had Active Directory discovery capability, so it was pulling the users and computers out of AD, among a lot of other things. Okay, that's good; we found the source of that information. But what executed that VBS script? That was the question: we wanted to know how it got executed. We started looking into the EDR solution; luckily we had the parent-child relationships, and we found it was this opcacta process, which is a subcomponent of the HP Operations Agent, which is basically HP OpenView, or what used to be; I

think it's Micro Focus now. It's monitoring software for the server, and specifically this opcacta agent has the capability of running scripts, hence why we see the VBS scripts: the temp script, and then the ABC3.vbs that was reaching out to this domain. Excellent, we got that. We then wondered: what else did this opcacta execute? Was there anything else we'd missed? We wanted to know about it. And we found that the EDR solution's file integrity monitoring component had picked up a file, mslogon.dll, being created by cscript. This again is not something you

usually see on a domain controller. Soon after that, we see a network provider with the same name being registered by the operations agent; again, not something you typically see. We had a look into all the folders we already knew the threat actor was using; C:\Users\Public was heavily used. We found this ab123c.d file, and we asked the customer: can you go collect that file, let's see what's inside it. We found clear-text usernames and passwords for privileged user accounts. So, okay, that obviously has something to do with it; the

network provider and that file have something to do with one another. Again we sent the file off to the reverse engineer: have a look, analyze it, see what we get back. What we got back was that it's another credential stealer, but this one utilizes the NPLogonNotify and NPPasswordChangeNotify APIs. It's very similar to what we saw with the other one, but in this case it only works on the local machine: you have to be interactively logging on to the machine, whether through RDP or physically at the box. So every time a domain admin

hit Ctrl+Alt+Del and logged on to the box, or RDP'd to the box, we basically had the passwords spill into that text file. There's a little explanation of how this works: when you want to enter credentials, the winlogon process is that username-and-password box that you see. Once you enter the credentials, it goes into mpnotify, and mpnotify notifies all the registered network providers on the system. In this case it was also notifying the malicious network provider, which was writing the clear-text credentials out to the output file. Perfect, we found that. So how do you

hunt for this? The same logic: the DLL was not signed, so it's fairly simple to figure out whether there are any suspicious DLLs on the file system. I will not go through the logic here; it's for reference. I think the slides are going to be published, so you'll be able to copy and paste it. Cool. So what did this all look like? Basically you had the threat actor controlling this HP Operations Manager, because that's the console for all the agents.
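A sketch of the matching hunt for rogue network providers: diff the ProviderOrder registry value against a baseline captured from a known-good build. The provider names below are illustrative, and on a live system you would read the value from the registry rather than pass it in:

```python
def unexpected_network_providers(provider_order, baseline):
    """Diff the comma-separated ProviderOrder value (from
    HKLM\\SYSTEM\\CurrentControlSet\\Control\\NetworkProvider\\Order)
    against a baseline captured from a known-good build; anything
    extra is worth pulling the backing DLL for analysis."""
    current = {p.strip().lower() for p in provider_order.split(",") if p.strip()}
    known = {b.strip().lower() for b in baseline}
    return sorted(current - known)
```

Pairing this diff with the unsigned-DLL check from earlier gives two independent tripwires for the same persistence technique.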

The threat actor was using that console to execute VBS scripts, first on the WKS machine, doing DNS requests and Active Directory discovery. Then we saw that on DC01 it created and registered the network provider, this mslogon.dll, which writes the credentials to the file. So we thought: okay, this was obviously used throughout the environment by the threat actor; maybe this is how the web shells got created on WEB01 and WEB02. That's why, unfortunately, this part is in a different color: it's "likely successful". Why do we say likely? Because those boxes didn't have an EDR solution, so the activity was not

covered by EDR; we didn't have the parent-child relationships to say with certainty, yes, this is what was actually done on the box. So yes, it's an assumption, but it's very likely, because the technique was used elsewhere and the box did have the HP Operations Agent on it. Finally, we'd found how errors.aspx got onto it. What we asked the customer afterwards was: okay, who has control of the HP Operations Agents, who has control of that manager console? And they said, oh, it's our IT service provider. That's a big deal, because IT service providers provide services to other customers

too. If their HP Operations Manager console is compromised, it's very likely that other customers are also compromised. What we've seen here is the threat actor leveraging the Trusted Relationship technique within the Initial Access tactic of MITRE ATT&CK. Usually what you see is, for example, threat actors using valid credentials for your vendors and coming into your environment that way, but in this case they were using the HP Operations Manager console to control the agents and execute code on them. This was a first for me, to be honest; I haven't seen this technique used very often. Now that we

know how all of these things happened, what does it look like on a timeline? Just recapping everything: on day 1, the VBS script was executed on WKS. After eight days, we see DC01 registering that network provider; a day after that, the credential file starts being created, and the threat actor uses those clear-text credentials to gather further credentials and enable what came afterwards, because they now had highly privileged account credentials for the environment. On day 24, both web servers get a web shell; the first one, errors.aspx, was created, again very likely by the

operations agent. On day 28, on WEB02 only, they modified the signoff.aspx file and created ghost.inc, the web shell with more capabilities, and dropped multiple binaries through it; so they had persistence there. Then on day 40 they started executing WMI commands to get the ngrok tunnel onto the SQL box, which they got the same day, and began lateral movement, RDP connections and all that, to the other boxes. On DC01 and DC02, the msupdate.dll and passms.dll files were registered as notification packages. And then on day 100 the customer gets notified by their SOC:

hey, there's some malicious traffic going on from this SQL box. So from day 40, when the ngrok tunnel was first established, to day 100, it took them 60 days, two months, to actually notice. Probably not the best SOC out there, but anyway, they got notified, which is good. After that they obviously took the server down, so the threat actor had no option but to go back to WEB02 and fall back onto the web shells again. And then we were finally engaged on day 123. So all of this took 100 days:

the actor was in there for 100 days before the customer was first notified by their SOC that something was going on. What key findings and recommendations did we give the customer? This is not the full list, of course, just the key ones. First, the EDR solution wasn't fully deployed. It wasn't just missing on WEB01 and WEB02; it was missing on other servers that weren't even part of the attack chain. So make sure you have EDR on all your machines; make sure you know what you're protecting, know your assets, and all of that.

Second, unrestricted internet access. As mentioned, the PowerShell script that downloaded and executed ngrok was run against multiple machines and only worked on SQL01, but one machine is enough. They basically had to go and review their filtering strategy: block by default, then allow only business-justified traffic. Third, we found out the HP Operations Agents were unnecessary on most boxes; they weren't needed there anymore. It turned out only about 20 boxes actually needed the operations agent; all the others didn't. Now, the IT service provider used that agent to

collect information on CPU, memory, disk and so on, to bill the customer and foresee whether additional disk space or memory was needed. But that was only for 20 devices; the agent didn't need to be anywhere else. So get rid of software you don't need, to reduce your attack surface. Fourth, unsecured privileged access. We saw the threat actor using highly privileged accounts on a whole bunch of systems, including web servers exposed to the internet, which were not treated as what we call tier-zero machines, your highest-value

assets. What we recommended to the customer is what's now called the enterprise access model; it used to be called the tiering model. For the on-prem side of things, if you don't have a hybrid connection, you're basically implementing the tiering model: make sure all your highest-value assets, domain controllers, anything that has to do with identity and authentication, are treated as tier zero, and you can only log into those systems from privileged access workstations. It's all locked down; you cannot use your domain admin credentials on any system outside tier zero, and you

cannot, of course, log into a tier-zero asset from a workstation used by somebody in the finance department or wherever. That's the fourth one. The fifth: as we've seen, it took the SOC 60 days to notice this, so obviously they were not monitoring their alerts properly and had no proper threat hunting. That's something you want to have in the company as well. In terms of lessons learned for incident responders, what did we get out of this? Some of it we already do; some we noticed during the engagement.

First of all, trust but verify. This is something we apply on every single engagement: thank you, Mr. Customer, for giving us all the indicators and the guidance on what you've already investigated, but we're going to verify it. If we had gone with the assumption, as the customer stated, that the initial access method was the web shells, we could have said, yep, okay, we found the web shells, that's it, fine, and stopped there. Trust but verify; make sure you're pushing your boundaries. The other piece: threat intel is crucial. In this case threat intel played an important part because of that domain

they gave us, which let us go back and look at the data to see if there was anything referencing it. So work with your threat intel team; make sure they know their stuff. They can very much steer the investigation in the right direction and make you more effective. Know your tools: if you do incident response and you're not familiar with a specific tool, you have to learn on the spot, but then use your past experience. What we did

here was exactly that: searching six months' worth of data and getting results back within 10 seconds doesn't work that way, and that's why we questioned it. Try to use your experience from the past. And the last one: think outside the box. The red teams out there, the threat actors, everybody is thinking outside the box, pushing themselves harder, trying harder, being creative. We as incident responders should be doing the same. Don't set boundaries for yourself; have the same approach, be creative. This is ultimately what

was the game changer here, because when you're dealing with a nation state, they can be very creative, and you need to be able to handle that; you need to be creative in that sense. For this customer, that was not all: there was another nation-state actor in there that we found, and they had been in there for three years. The customer honestly didn't care much about that one; they cared about the incident I've been talking about. I won't cover the other one here because of time; it's a talk for another time. That's it, thank you so much.

[Applause]

Q: Thank you, Vladimir. Do we have any questions? Okay, yeah. Do you know how the HP Operations Agent was initially compromised, and whether any other customers were affected by the same issue?

A: Unfortunately, no, because it's another customer that we did not engage with, so we could not go to them and say, hey, there's something fishy going on. What we did with the customer we were engaged with was tell them: you have to, well, you should, go and speak to your vendor and tell them they have an issue. We as

Microsoft IR could not go to that specific customer because we were not engaged with them.

Q: (You'll get your own mic. Good joke.) Thanks. One question: you mentioned they were inside the company for quite a long time. What was their game? Were they initial access brokers waiting to sell the access, or exfiltrating data, did they start something somewhere? What were they doing for those 123 days or whatever?

A: In terms of the threat actor being there: since it's a nation state coming from Iran, targeting Middle Eastern countries, espionage would be the goal, so basically just

collecting data uh they would not be deploying Rand somewhere it's like not their thing and did we find exfiltration no the reason being is there was no data uh by the customer that would actually give you they couldn't provide us with the data that we go look and prove that there was exfiltration so did it did it likely occur yes do we have evidence no but do you have a strong feeling that this was very targeted or oh yeah yeah targeted 100% I mean I I get that national state actors would you know penetrate an IT provider because that gives them a l you know a wide correct but then they choose and pick which one they actually

compromise so obviously this client was a good Target for them for some reason yes it was it was a good Target for them unfortunately I cannot even mention the sector the target is in um but yes it was definitely a good Target for them to to go after and like I said they would not deploy ransomware it's basically Espionage and kind of collecting information collecting Intel that was their main goal okay and maybe if you have time so the uh cleanup was successful after that or for the client the clean yeah very good question because the customer U so what we usually do do in this uh in such engagements is we would basically reinstall all the main controllers we

start from scratch in terms of the main controllers right um you want to have a clean server clean system to start off with and that was basically it in this case um the customer was very so they had a dependency on Services because the domain control was hardcoded into those services and they couldn't shut it down they could not replace it in time so they asked us can you give us kind of just how do we clean this stuff up we'll do the you know the full kind of reinstall later on not the best not the ideal but again it's the customer decision right it's it's a risk-based decision for them thanks sure
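The dependency problem just mentioned, where a hardcoded domain controller name blocks a clean rebuild, implies a preparatory step: enumerate which services actually reference the DC before replacing it. A toy sketch of that idea; the find_dc_dependencies helper and the service-record shape are invented for illustration (in practice the configuration strings would come from sc qc output, the registry, or application config files on each host).

```python
def find_dc_dependencies(services, dc_hostname):
    """Return names of services whose configuration references the
    given domain-controller hostname (e.g. a hardcoded LDAP URL).

    `services` is a list of {"name": ..., "config": ...} dicts,
    a stand-in for whatever inventory source the environment has.
    """
    needle = dc_hostname.lower()
    # Case-insensitive substring match is deliberately loose: a few
    # false positives to review beat a missed hard dependency.
    return [s["name"] for s in services if needle in s["config"].lower()]
```

Running something like this before decommissioning a DC turns "we could not replace it in time" into a known, finite list of services to reconfigure first.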

I think it took around eight to ten days, something like that; I don't recall exactly, but eight to ten days. Usually our engagements are between one and two weeks; that is the usual. We move very quickly because of the tooling. Like I said, the FIC tool that we have is something we deploy widely into the environment, so we collect information and can then do wide-scale hunting across the environment. So yeah, fairly quick.

Hi. So this was a Sandstorm actor, and there have been some geopolitical things going on. How would you react if this were a Blizzard threat actor in a future engagement? I don't know if you understand what I am referring to, but there have been some initiatives not to investigate Blizzard threat actors anymore. Is that affecting how DART will investigate in the future? Thank you.

Thank you for the question. I don't think we have any blocking points for investigating Blizzard activity. I have investigated Blizzard activity before, Midnight Blizzard among them; I don't recall the other threat group I worked on. How does it differ? I would say the Blizzards maybe blend into the environment more than these guys do, at least in my experience from the engagements I have done with them. With the Blizzards it is more difficult, especially Midnight Blizzard; that is a difficult one to deal with. Does that answer the question, or any follow-ups?

I was asking because of the new administration in the United States.

Because of that, yeah. Is that going to change things? I don't think it will change. It won't change; I don't think it will change. I hope it doesn't.

Hi, just a question, since you are working with DART. We have recently seen that when APTs get caught, they usually slip up in just one step, and that gets them caught. For example, Volexity had an article last year about the Wi-Fi nearest-neighbor attacks, where the actor was in three different environments for several years and then, in the final stages, slipped up by running reg save on the SAM hive. And I noticed that in this incident the SOC was not actually alerted until the Ngrok tunnel came up. So how often do you see, as an incident responder, that advanced groups like this slip up at only one step, where if they changed that one thing to something like a custom C2 they would essentially evade any detection?

Okay. If we are talking about really advanced groups, it is very rare that they slip up. This was, I think, just a coincidence for the customer, because they did not have a very good SOC: not good detections, a weak EDR deployment, and they were not looking at the alerts properly. I do recall there were just too many alerts to look at; there was no tuning or anything like that. If a threat actor stumbles upon an environment like that, you can basically use anything; you do not have to go and masquerade and hide among whatever the customer has in their environment. But as you were saying about the Blizzards, those guys are very difficult to find because they blend into the environment a lot. I did an engagement, not a nation state but an advanced threat actor, where they blended in with the company's AV solution that they had at the time, and it took us a few days to find the spiciness. So I would say it just depends on the customer's security maturity, their SOC, and the capabilities that they have.

Thank you. Yep, all the way in the back; we have time for only one question.

I watched the map of techniques and the timeline. Do you think there were two different threat actors operating at the same time? Because in the first 14 days the threat actor was using a couple of techniques, then there was a stop, a break of 10 or 20 days, and then it started again. That made me think of two different threat actors, and you somehow confirmed that this actor was from Iran. So do you consider that maybe there were two different teams operating at the same time, or a hand-off of activities, where the first part was done by one and then handed off to another?

Yep, I get the question. In this case this was all done by one threat actor; we are sure of that because of the TTPs and the knowledge that we have around that threat actor, specifically Hazel Sandstorm. Yes, there was another threat actor in there acting at the same time, so they were overlapping, but the TTPs were different for that one. In this case it was not handed off from one threat actor to another. You would typically see that in financially motivated groups, where you have initial access brokers who sell the access off, another threat actor picks it up, comes into the environment, does whatever they are supposed to do, and then a third threat actor does the final stages. That is not the case with this one. Even though there were gaps in time, these guys are very patient. The reason they switched from the HP Operations Agent to web shells to Ngrok, and then, when the Ngrok tunnel got smacked by the customer, back to the web shells, is that they had multiple methods of entering the environment. Maybe the vendor noticed that their Operations Manager was also compromised, so they lost access to that; they still had access to the web shell and the Ngrok tunnel, then lost access to the Ngrok tunnel and went back to the web shells. But it is all the same threat actor.

Thank you, Vladimir.

[Applause]
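The slip-up raised in the Q&A, dumping the SAM hive with reg save, is exactly the kind of noisy one-off step a tuned SOC can catch. Below is a hypothetical detection sketch over process command lines; the regex and the is_sam_dump helper are illustrative only, since a real rule would live in the EDR or SIEM, not in standalone Python.

```python
import re

# Credential-access pattern discussed in the Q&A: saving the SAM
# (or SYSTEM/SECURITY) registry hive to disk with `reg save`.
SAM_DUMP = re.compile(
    r"\breg(?:\.exe)?\s+save\s+hklm\\(sam|system|security)\b",
    re.IGNORECASE,
)

def is_sam_dump(cmdline):
    """True if a process command line looks like a hive dump via reg save."""
    return bool(SAM_DUMP.search(cmdline))
```

Even a crude rule like this fires on the documented slip-up while ignoring benign reg queries, which is the point the answer makes about alert tuning: the detection only helps if someone is actually looking at it.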