
Hello and welcome to my presentation on orchestrating red team operations safely, for both the offensive and the defensive teams. My name is Yanish Anilis and I am the Cyber Risk Services Director at Deloitte Cyprus. During the last 11 years I have been with Deloitte, I have been involved in many projects across multiple domains. You can find me often on Twitter, which is my primary source for learning the latest tips and tricks around information security. I try to maintain a blog, and I really enjoy presenting at and attending as many conferences as I can, giving back to the community as much as I can. So I would like to thank BSides Athens for this opportunity to
present. Before we begin, let me start by saying that this presentation is not about best practices; it is not the bible on how you should operate. It is aimed at giving you some ideas on how to operate safely during red team operations, and how to protect the client's data while we are doing the assessment. Because of the time, it is not feasible to cover every single technique or every single aspect of a red teaming operation, so some things are skipped, but I am giving out links at the end of the presentation. It is also very important to note that the techniques I am going to present were first documented by other people, so please be so kind and give the
appropriate credit where credit is due. And with that, let's dive into the presentation. When we are designing the infrastructure prior to starting the operation, we have to choose the model we want to operate under. In essence there are three different models to choose from: the fully cloud model, where everything related to the infrastructure of our operation is online on cloud servers; the self-hosted model, which is the opposite of the cloud model, where everything is hosted on site; and lastly the hybrid model, where we have some critical servers in-house and some cloud redirectors for handling the traffic. We usually operate with the hybrid model, which is the one I am going to present today. Because I mentioned redirectors, and it might not be a familiar term to everybody: when we say redirectors we basically mean servers located in the cloud whose only task is to receive traffic from our client and send it over to our infrastructure. They take the traffic and send it back and forth, so in reality they hide the actual IP space of our operation. Our client will not be able to block our command and control (C2) server; instead they will only be able to block the redirectors, but then again we can spin up new redirectors and continue, thus allowing the operation to continue
without stopping, at least not immediately. Redirectors can also be chained, so we can use multiple redirectors in a row and make the setup as complex as we want, but in this example I assume you are using one redirector for every aspect of the operation: one for phishing, one for payload delivery, and so on. This is how the infrastructure is going to look. In the blue space, let's say this is DigitalOcean, we have six different servers and three different domain names pointing to the servers, depending on the actual stage of the operation. For the phishing side of things we use one domain, with one server being the mail server that is going to be responsible for sending the phishing emails, and then we have another domain hosting the phishing payload. Then we have DNS and HTTP redirectors with two different domain names, so if the client blocks one of the domain names, we have another domain name to utilize and remain active in our operations, and all the traffic ends up on our in-house command and control server. But we are not limited to using just one cloud provider; we can use two, three, or more, depending on our needs. In this example I have four servers on DigitalOcean and another two servers on a different cloud provider. The reason for splitting up the infrastructure like this
is that, for example, the client sees unexplained traffic going to DigitalOcean. They see a spike in traffic going to DigitalOcean and might decide to block the entire IP space of DigitalOcean while they investigate, but from our side we still have a secondary channel to continue the operation via a different cloud provider. As you can understand, it would be very hard for clients to block everything, therefore we remain active until we find another way to operate, spin up new redirectors, and continue the operation. We can harden our infrastructure even more using iptables, for example. We know that only our command and control server is supposed to be connecting to the redirector via SSH, so since we know these parameters, we can have a rule specifying that on port 22 we will accept traffic only from a specific IP, which is mentioned in the rule. We also know that our payloads work over specific ports; those ports are 53, 80, 443, 8080, or whatever else you configure your ports to be. If we also know the IP space of our client, we can specify that too. So in this example we accept TCP traffic on these ports only from this IP range, and if any IP which is not part of this range requests any of these ports, its traffic will be dropped. We also have another rule to accept UDP traffic on port 53, for DNS, again specifying the IP range of our client. Now, I know that in this example we might miss many targets, just because we have people travelling, or on a data connection, or on their mobile devices, for example, but this is a risk we are willing to take rather than exposing things we do not want to expose, because sometimes our infrastructure is client specific. We might use the logos of our client, for example, so we do not want everybody to know that there is a red team assessment or a phishing assessment happening for this client, at least at this stage. I also believe it is our responsibility to monitor our infrastructure the way that we
expect our clients to monitor theirs. So what we do is gather a lot of logs from all of our endpoints and all of our cloud redirectors, in order to make sure that the traffic hitting the redirectors, and our infrastructure in general, is traffic that we want to accept. If traffic was accepted that we do not need or do not want, then we can take corrective measures and change our configuration, to ensure that only authorized people get to see the resources we want to share from our red team infrastructure. And instead of actually being the blue team, we use some tools to help us. Instead of just looking at the logs and doing threat hunting while we run a red team operation, we want to be alerted if something out of the ordinary happens. So we use bash scripts to check the logs, and if they see anything strange, they send it over to Slack. That way we know that if we receive a Slack alert, there is most probably something wrong happening. As you can see from this example, we have the root user logging in on the command and control server. If we know that we are the only ones supposed to be connecting as root, and it is not us, then we take corrective measures, because we know we have been compromised. As I said before, what I showed you is a very simplified example of monitoring. If you really want a full-blown SIEM, I would ask you to check out RedELK by Outflank; they have done a fantastic job developing an actual SIEM to be used during red team operations. Sometimes this is too much, depending on the operation type, but it is something you should definitely check out. Moving on to the traffic that will be reaching our redirectors, and more specifically HTTP and HTTPS traffic, we can make use of something like Apache mod_rewrite, which does URL manipulations on the fly.
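The bash-to-Slack log check described a moment ago might be sketched roughly like this. Everything here is illustrative: the sshd log format is the standard Debian/Ubuntu one, the demo log path and the `SLACK_WEBHOOK` variable (a Slack incoming-webhook URL) are placeholders, and this is not the exact script from the talk.

```shell
#!/usr/bin/env bash
# Sketch: flag root SSH logins in an auth log and optionally post them to Slack.
# Set SLACK_WEBHOOK (a Slack incoming-webhook URL) to enable posting.

check_log() {
    grep -E 'sshd\[[0-9]+\]: Accepted (password|publickey) for root ' "$1" |
    while IFS= read -r line; do
        echo "ALERT: $line"
        if [ -n "${SLACK_WEBHOOK:-}" ]; then
            curl -s -X POST -H 'Content-type: application/json' \
                 --data "{\"text\": \"C2 alert: ${line}\"}" "$SLACK_WEBHOOK" >/dev/null
        fi
    done
}

# Demo with a fabricated log (in production, point this at /var/log/auth.log):
cat > /tmp/demo_auth.log <<'EOF'
Jun  1 10:00:00 c2 sshd[123]: Accepted publickey for root from 203.0.113.5 port 50000 ssh2
Jun  1 10:01:00 c2 sshd[124]: Accepted password for operator from 10.0.0.2 port 50001 ssh2
EOF
check_log /tmp/demo_auth.log   # prints one ALERT line, for the root login only
```

In practice you would run something like this from cron on every redirector and the C2 server, one check per log source.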
And what we can do is something like the example I have here, where we have a condition saying that if the user agent is one of the user agents in the list, then redirect the user to google.com, and if the user agent is not in the list, then send them back to our backend IP and the request URI they asked for. So why do we do this? If we are running an assessment and we know from our intelligence that our client is a Windows-based client, so they use Microsoft Windows, we know that their user agent should not be one of these. We also know that some clients, if we send them a phishing email, may try to open it from their mobile phones, and then again you have search engines which index domains, and we definitely do not want our phishing pages to be indexed. So if the user agent is one of these, the user goes to google.com and the URL in the address bar changes to google.com; if not, the content is fetched from our backend IP, but the URL in the address bar does not change, so our backend IP is not exposed. The final example of Apache mod_rewrite is really cool, because again, if we know the IP space of our client, we can specify it in the condition. So traffic coming from IPs other than the ones mentioned here, one range or the other, goes to Google; that covers search engines or anyone else scanning our domain. But users coming from the IP space of the client are sent to our backend IP, with the redirector again acting as a proxy, so the content comes from the backend IP but that IP is not exposed. Let's move on to other ways of redirecting traffic: we are going to talk about socat, a very nice and very versatile tool with which we can send traffic back and forth between our servers.
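As a concrete sketch of the kind of relay about to be described (the backend address and ports are placeholders, not values from the talk), a minimal socat forwarder on a cloud redirector looks like this:

```shell
# On the cloud redirector: listen on port 80 and relay every connection
# to the backend C2 server. 'fork' handles each connection in its own child.
# 10.0.0.5:8080 is a placeholder backend address.
socat TCP4-LISTEN:80,fork TCP4:10.0.0.5:8080
```

socat also supports a `range=` option on the listening side, which is one way to restrict who may connect.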
The first example is a very easy one, where we just accept connections on port 80 on our cloud redirector. We listen for any traffic coming in on port 80, and when we see that traffic, we send it over to our backend IP on whatever port we specify. So in the first example we listen on port 80 and send it to our backend IP on port 1890, and the communication works like that. Again, we can use a reverse tunnel for the communication, where our command and control server connects back to the redirector, and on the redirector, instead of sending the traffic to our IP, socat sends the traffic to itself, where it is picked up by the reverse SSH connection. We can harden socat even more by specifying an IP range; as with all the other examples I mentioned before, we can use the IP space of our client. So in this example we say that we want to accept traffic on port 80 only from a specific range and send it over to port 8090 of our command and control server; all other traffic, coming from different IP spaces, is dropped. Because socat is very versatile and has a lot of flags, I would urge you to visit the link on the slide; it is a very detailed blog post about socat that will show you a couple more tips and tricks. Let's go through the communications in a little more detail using this example, where we have the internal network, the red team network, consisting of the command and control server and another server hosted in-house. Of course these servers are not directly internet accessible; they are behind a gateway, and each of these servers is responsible for initiating a reverse SSH connection to the same cloud redirector. The reason is that one of these connections is going to be accepting traffic for payloads running over port 80, and the other for payloads running over port 443. As you can see, the first connection is made on port 2222 and the other one on port 2223 on the cloud redirector. So what happens is that our victim requests either an HTTP or an HTTPS payload. socat is running on the cloud redirector, sending the traffic to itself as we discussed before, so traffic coming in on port 80 is sent to localhost on port 2222, and traffic coming in on port 443 is sent to localhost on port 2223. The redirector accepts that traffic and sends it over to our gateway, and the gateway sends it to the server responsible for serving the payload. DNS traffic is a bit trickier, just because the reverse SSH connection is TCP-based and DNS is UDP-based.
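Putting the HTTP/HTTPS side of this into commands (the hostnames, usernames, IPs, and ports are placeholders following the example just described), the wiring might look roughly like this; the DNS case needs the extra UDP-to-TCP step described next.

```shell
# On each in-house payload server: open a reverse SSH tunnel to the redirector,
# exposing the local payload port on the redirector's loopback interface.
ssh -N -R 2222:localhost:80  operator@redirector.example.com   # HTTP payloads
ssh -N -R 2223:localhost:443 operator@redirector.example.com   # HTTPS payloads

# On the cloud redirector: socat hands incoming traffic to itself,
# where the reverse tunnels pick it up and carry it in-house.
socat TCP4-LISTEN:80,fork  TCP4:127.0.0.1:2222
socat TCP4-LISTEN:443,fork TCP4:127.0.0.1:2223
```

`ssh -R` binds the forwarded ports on the redirector's loopback by default, which is exactly what we want: only the local socat instances can reach them.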
Therefore we have to take some additional steps here. The traffic we receive on the redirector is translated to TCP, so UDP coming in is converted to TCP and sent over the reverse SSH tunnel, and on the in-house server we have another instance of socat taking that TCP traffic and converting it back to UDP, so it can be used as DNS. We can do something similar with iptables, although iptables is sometimes more complex to use, and it is easier to get locked out of your server if you make a mistake, and I have made many of those. So I prefer to use FireHOL, a tool with a really nice feature: it explains the rule before applying it. But again, here we are accepting UDP traffic on port 53 and sending it over to our command and control server on port 53. Something I do not see being discussed a lot is data at rest: data that has been collected during the operation and has to stay on one of our servers for some period of time, for whatever reason. Linux has a lot of tools, but I have found that Tomb works very nicely. It is a very easy tool to use, and you can basically create a new tomb to keep your data, throw the data in there, and keep it safely for as long as the operation dictates. So that is something to keep in mind as well while running an operation. I have included links to my blog, where I detail some of these techniques in more depth, and to the blogs of the people whose research I used to compose this presentation; make sure you visit their blogs, they put out amazing information often. In closing, I would like to say a big thank you to BSides Athens and to anybody who sat through this presentation, and I really hope to see you in person in 2021. Thank you.
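As a footnote on the data-at-rest point above, the basic Tomb workflow mentioned in the talk is roughly the following; the tomb name, size, and paths are examples, so check the Tomb documentation for the details.

```shell
# Create an encrypted "tomb" for data collected during the operation.
tomb dig -s 100 loot.tomb              # allocate a 100 MB tomb file
tomb forge loot.tomb.key               # forge a key (prompts for a password)
tomb lock loot.tomb -k loot.tomb.key   # format and lock the tomb with the key

# Use it: open, copy collected data into the mount point, close.
tomb open loot.tomb -k loot.tomb.key   # mounts the tomb (by default under /media)
# ... copy collected data into the mounted tomb ...
tomb close                             # unmount; 'tomb slam' force-closes everything
```

Keeping the key file separate from the tomb file (ideally on a different machine) is what makes this useful if a server is seized or compromised.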