
Quaid DeLacluyse - DevAttackOps: Full Stack Red Team

BSides Augusta · 55:23 · 298 views · Published 2022-10 · Watch on YouTube ↗
Speakers: Quaid DeLacluyse
Category: Technical
Team: Red
Style: Talk
About this talk
Quaid DeLacluyse presents Terry, an open-source tool for automating red-team infrastructure deployment across multiple cloud providers. The talk traces his journey from Red Baron through failures and successes to building a unified, scalable pipeline that abstracts away cloud complexity, handles dynamic templating, and integrates with team tools like Slack.
Original YouTube description:
The key to successful Red Team operations is the ability to quickly and reliably deploy operational infrastructure. This presentation will take the audience through my journey of finding a way to automate operational infrastructure builds in a scalable, repeatable, and flexible manner. Throughout this talk, I will share the journey of both successes and failures that I faced while building a unified pipeline in a sea of complexity.
Transcript [en]

Can you guys hear me okay? All right, cool. Thanks, everyone. Today I'm going to be talking about DevAttackOps: Full Stack Red Team. Before I even get into it, know that this talk comes with an open-source project, so if at any point you want to check out what we're actually talking about, please feel free to go to the GitHub link. So, who am I? I'm Quaid DeLacluyse, and I go by Ezra Buckingham; if you look at anything cyber-related and see Ezra Buckingham, that's me. I'm the only Quaid DeLacluyse on the internet, and I like to keep the world of red team and the world of Ezra somewhat separate. I have some certs, but HR only really cares about those, so I won't read you the list. I joined cyber back in January of 2021, so I'm still kind of green to the industry, but I've got a lot of background that gives me a leg up as a red teamer. I am a red team operator at Fifth Third Bank; we're based out of Cincinnati, Ohio, but we have a presence pretty much everywhere, so if you want to learn more, come up and talk to me.

Some general disclaimers before I get into the actual talk. All opinions are my own, not the opinions of my employer. Do good, not evil; the thing I always tell my teammates and all my friends is "play stupid games, win stupid prizes," so if you want to go that route, go for it, but I am not accountable for anything you do. The other disclaimer is more about the talk itself: what I'm going to describe is not a new problem. A lot of red teams and a lot of people face this problem. My solution was built around the needs of me and my team; your team might have different needs and thus need a different solution. So today I'm going to talk first about

the goals of my presentation and of the research I was doing; then my journey, from where I started with a project called the Red Baron, through the implementation that came with it, to how I tried to adapt that solution and reframe the problem; and then I'll introduce you to Terry, the solution I came up with.

First, the goals. My goal as a red teamer was to develop a red-team infrastructure pipeline that is cross-cloud capable, user friendly, extensible, and secure. When you start to think about all those adjectives together, they sound like catch-22s; you don't normally get to have all of them at once. But that was my goal. My goal as a presenter today is to take you on the journey of how I seized an opportunity to automate my workflow, learned how the cloud works, learned how to automate infrastructure creation and configuration, and then learned how to abstract the cloud away using open-source tools. Like Tim talked about this morning, I wanted to be lazy; I did not want to do things manually. So I want to take you through how I got to be really lazy and arrived at the solution I have.

If you're in the audience and you're not a red teamer, I understand; not a lot of people are. I want all of you to be able to answer some questions as you follow along with this presentation. If you're a red teamer, ask yourself: how can I remove the minutiae from setting up operational infrastructure and still have badass emulation capability? If you're a sysadmin: how can I automate configuring and building infrastructure? If you're a developer: how can I integrate conflicting solutions together? And if you're none of those and you're not

really sure what you're going to get out of this, I want you to walk away with an answer to the question: how can I tackle complex problems and abstract solutions away from their implementations?

Let me set the stage a little for what my dream state for red-team infrastructure looked like. There's a QR code on the slide (I know there are a lot of hackers in the room; don't scan the QR code) and a URL; both take you to the wiki page on the GitHub where you can find all of this information, because I know the slide is small. I sat down and thought to myself: here's what I want out of a red-team infrastructure pipeline. All of my malware communications would come from my victims and hit domains registered in various cloud providers; all of it would hit a reverse proxy; from that reverse proxy, all traffic would go through a VPN tunnel to something that might be in a private network; and everything would log off into an Elastic Stack that I own. That was the dream goal I was working toward with all this research, and it was the genesis of the solution I have today.

So let's first talk about where I started: the Red Baron. What is the Red Baron? It's a project originally from Coalfire Research, a set of Terraform modules for cloud resources, all with a red-team focus. It's now owned by a developer called byt3bl33d3r, and it was recently archived as a project; as we get through this, you might start to recognize why. I'll also say that this is

going to be very DevOps-heavy, so if you don't follow terms like Terraform, just think "infrastructure as code" and "configuration as code"; that's essentially what I'll be talking about. When you dive into the Red Baron, like I said, it's a bunch of cloud modules with a red-team focus. Looking through them, we see Azure, AWS, DigitalOcean, Google, Ansible, and some other fun things, and within those, all the different types of resources we might want to deploy as red teamers. So we have a really great start on where we want to go.

The way my team used this was with cookie-cutter red-team operational-infrastructure Terraform files; inside each file we would instantiate Red Baron modules depending on what we wanted in our operational infrastructure. Fun fact: a .tf file is Terraform, so whenever you see .tf, think Terraform. The workflow for me looked like this. First, make a new directory on a server we owned; in this case, a super-secret-operation directory to hold our Terraform files. Run Certbot for all the domains we wanted to use for the operation, with wildcard certs, so SSL is set up for HTTPS on every domain. Take one of the cookie-cutter templates, copy the variables.tf into the new folder, edit variables.tf, and edit the cookie-cutter operational template itself. Run terraform init, then terraform apply. Then SSH into each server and echo out public and private keys so the whole team had access to all the infrastructure we stood up. Then manually install and configure C2 and other software. And then rinse and repeat for n servers.
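As a rough sketch of how that manual loop begs to be automated, here is a minimal Python outline of the same steps. Everything in it (the operations directory, the Certbot flags, the domain names) is hypothetical and illustrative, not the actual tooling my team used:

```python
# Minimal sketch of automating the manual per-operation workflow above.
# The directory layout, Certbot flags, and domains are hypothetical.

def build_op_commands(op_name: str, domains: list[str]) -> list[list[str]]:
    """Return, in order, the shell commands the manual workflow would run."""
    cmds = [["mkdir", "-p", f"/opt/ops/{op_name}"]]
    for d in domains:
        # Wildcard cert per domain, via a DNS challenge
        cmds.append(["certbot", "certonly", "--manual",
                     "--preferred-challenges", "dns",
                     "-d", f"*.{d}", "-d", d])
    cmds.append(["terraform", "init"])
    cmds.append(["terraform", "apply", "-auto-approve"])
    return cmds

if __name__ == "__main__":
    for c in build_op_commands("super-secret-op", ["example.com"]):
        print(" ".join(c))
```

From here it becomes one `subprocess.run()` loop instead of n manual SSH sessions, which is exactly the itch the rest of the talk scratches.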

If any of you are sitting in the crowd thinking the same thing, this is where I go back to Tim's talk again: I am lazy. I don't want to do this; this sucks. So I sat there and asked myself the question: does this scale or adapt? (And yes, I have so many aliases that sometimes I like to think I'm talking to myself; it's very weird.) The answer is no, it doesn't. But it is a great starting point; it gave me everything I needed to understand where I wanted to take my new vision. So I looked at the Red Baron and thought: I have a really great starting point, let's try to adapt the solution.

As we start to really scrutinize the Red Baron, we see that each module consists of zero or more servers, domains, DNS records, CDNs, or even tools. But I was constantly asking myself: why repeat the resource or tool definition for every use case on every provider? What I mean is, for something like an HTTP redirector, we're not reinventing the wheel, yet it's defined separately for every cloud provider: one for Linode, one for DigitalOcean, one for AWS, one for Azure, one for Google. And if you wanted to scale out to another cloud provider, you would now have to repeat that tool definition again. So my new vision was: that's not going to fly. I want a base module for each fundamental resource for each cloud provider, plus module variables to control which Red Baron module to load. I didn't want to rebuild all the Terraform modules already in that project; I wanted logic inside the Terraform module to dynamically set the source module it would load. And a little deeper still: the way the actual servers were

configured was through a provisioner called remote-exec. Think of remote-exec as you manually SSHing into a server, deploying a bash script, and running it; that's essentially what it does. That meant I could migrate all of that remote-exec into Ansible playbooks and remove all of my code duplication: instead of the HTTP-redirector code duplicated across all the different cloud providers, it's managed in one single Ansible playbook.

Here's what that implementation looked like, and before I even talk about it: pay attention to line 19 on the slide. This is important. On the right-hand side is what the module itself looks like, the provider-agnostic module; it would be server.tf for any cloud provider. Inside it, I have a locals block with all my mappings for the different cloud providers. It says: if the user gives me AWS, load the AWS EC2-instance module; if someone gives me DigitalOcean, use that module; and you see I didn't get very far, which is very intentional, because you'll see why. Then there's a bit of logic performing the instantiation: if I'm loading the agnostic server module with cloud provider AWS and server type HTTP redirector, I can dynamically load the right Red Baron module based on what the user input. Going a little deeper still, cloud providers don't configure servers the same way even given the same operating system. If you deploy Debian in AWS, you can only SSH in as the user admin, but on DigitalOcean you SSH in as root. So now there are more discrepancies to fight, based on how each cloud provider actually hands you a server.

But as you see: this is the instantiation, this is the module, and again, pay attention to line 19, because when you actually try this, it doesn't work. Terraform will yell at you: variables are not allowed on that line. You cannot use variables in the source definition. And of course, I'm a red teamer, right? I looked at my teammates like, haha, got you, I'm a hacker, I can bypass this. That's when the devastation continued, and I realized I can't, because I went to the Terraform GitHub and saw that someone was asking

for this exact same thing, and one of the contributors came back and said: that does not work; you cannot do that. This is where I had to pivot extremely hard, because I did not have the tools I needed to build what I wanted given the current solution. This is where I had to reframe the problem. My problem stayed the same, but I had to rethink how I was looking at it: what do I need to achieve my vision, maybe in a different way?

Going back to the implementation I was trying to build, we again have our agnostic server module, and again, look at line 19: what I'm really trying to do is have dynamic Terraform files. That drove where I started to go, because I asked myself the question: can I dynamically generate code? Of course, the personalities in me then tried to answer it (yes, it's weird to talk to myself; I know, my team makes fun of me all the time), and the answer is yes. It's called templating. It was a totally new concept to me; I had not gone down into the world of templating before, but it was the magic bullet I needed to answer that question: can I dynamically create Terraform files? I was really comfortable with Python, so I started with a templating engine called Jinja. Jinja is a templating engine available as a Python library, and it lets me build one resource template per cloud provider and then dynamically inject parameters into it using Python and the Jinja engine. That gives me way more control in Python than in something like HCL, the configuration language of Terraform. Now I have far more flexibility and the power of Python behind me, rather than trying to hack solutions together inside Terraform. This is what I call the turning point.
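To make the idea concrete, here is a minimal stand-in for that templating step. The real tool uses Jinja2; to stay dependency-free, this sketch uses Python's stdlib `string.Template`, and the module paths and mapping are hypothetical:

```python
from string import Template

# One provider-agnostic template; the module path is injected at render time.
# Module paths and the mapping below are hypothetical.
SERVER_TEMPLATE = Template(
    'module "${name}" {\n'
    '  source = "./modules/${module_path}"\n'
    '}\n'
)

# (provider, resource type) -> module path, mirroring the "locals block" idea
MODULES = {
    ("aws", "http_redirector"): "aws/ec2-instance",
    ("digitalocean", "http_redirector"): "digitalocean/droplet",
}

def render_server(name: str, provider: str,
                  server_type: str = "http_redirector") -> str:
    """Generate a valid .tf snippet; Terraform never sees a variable in `source`."""
    return SERVER_TEMPLATE.substitute(
        name=name, module_path=MODULES[(provider, server_type)])
```

The key point: the dynamic choice happens before Terraform ever runs, so the restriction on variables in `source` simply no longer applies.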

I now knew where my new vision was going and where it would take me, and this is where I'll introduce you to Terry, the solution I came up with. My new vision had evolved from where it started. Now I wanted a base template for each fundamental resource for each cloud provider: a server, a DNS record, an SSH key, and maybe some other things I can't think of at the moment. Then I wanted a command-line interface to render each template, build the infrastructure with Terraform, map that state back to an Ansible inventory, and then run playbooks against each host. So instead of all those remote-exec provisioners repeating code across the different cloud providers, I have one playbook to go build the things, and I never have to touch it. The cherry on top: I wanted to use Docker to deploy containers onto the hosts that just work; I'll talk about that in a bit. If you're a little lost, I totally understand. From this new vision, Terry was born. Here's the logo for Terry, and here's the GitHub project, so if you want to check it out, feel free. There's a lot of development I'm still doing on it, but it's a good start toward where I wanted to go.

So what the hell does Terry do? Terry is your command-line interface. Based on the arguments you pass in, it dynamically renders Jinja templates that generate valid Terraform code, then does a Terraform build. If you decided you wanted to build something in Azure, something in DigitalOcean, and something in AWS, Terry will go build those things for you. After the build hits all the cloud APIs, you get a bunch of data back saying, here are the things I built; then you take that data from the Terraform state, map it back to Ansible, and run some plays. The baseline I wanted: set up some firewalls, provision user access, install some different tools (I'll talk about Nebula, that's a fun project), set up Elastic logging (which I will not talk about, because that is a whole day-long conversation), and then deploy some containers. That was the vision I had, but it's a lot, so let's break it into little chunks and talk about each piece, starting with Terry rendering some Jinja and doing a Terraform build.
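To illustrate what that kind of command-line surface could look like, here is a hypothetical sketch using argparse; the flag names are mine for illustration, not Terry's actual interface:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI surface for a render -> build -> configure pipeline.
    # Flag names are illustrative, not Terry's real interface.
    p = argparse.ArgumentParser(
        prog="terry-sketch",
        description="render Jinja -> terraform apply -> ansible plays")
    p.add_argument("--operation", required=True,
                   help="name of the operation/engagement")
    p.add_argument("--provider", action="append", default=None,
                   choices=["aws", "azure", "digitalocean"],
                   help="cloud provider(s) to deploy into (repeatable)")
    p.add_argument("--resource", action="append", default=None,
                   help="resource to build, e.g. server, domain, ssh-key (repeatable)")
    p.add_argument("--skip-ansible", action="store_true",
                   help="stop after terraform apply; don't run playbooks")
    return p
```

Repeatable flags are what make "something in Azure, something in DigitalOcean, something in AWS" a single invocation rather than three.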

The way I tackled this problem was to make every single thing a configuration file, so I never had to make code changes, while still having all the resource templates mapped out like we had in the Red Baron, so you can look and see exactly what's available to build. In the Terraform implementation, basically everything lives in its own folder per cloud provider: if we're building a specific resource, like a domain, a server, or an SSH key, it sits inside that cloud provider's folder. Then I have a provider configuration that says: if I'm using AWS, use this Terraform provider dependency (you need to tell Terraform which packages it will use to build which resources), which version of that provider to load, and which credentials I need to interact with the API. For AWS, you need specific keys: the AWS access key ID, the AWS secret access key, and the AWS default region. There are some other pieces here I'll talk about in a bit, but that was the general configuration. With those two pieces of data, I can wrap a command-line interface around all of it and say: if I'm building this type of resource using this cloud provider, I want to render this template and spit it out to a Terraform plan. Now I have everything I need to go build the resources I've asked Terry to build for me.

Once you've built the resources, you need some way to take that data back from Terraform, map it to an Ansible inventory, and run some plays. That again goes back to the provider configuration from earlier. Inside it, I now have a block for servers, because when you're actually building things in Terraform, each cloud provider has totally different nomenclature for the exact same pieces of data, and I wanted a programmatic way to go through a Terraform state for each provider and say: here's the data point I'm looking for. For a server in AWS, that data point inside the Terraform state JSON is aws_instance; the remote user I can sign in as is admin; and to get the public IPv4 address from AWS, it's public_ip. All of that changes across cloud providers, so I needed a way to tell Terry: if we're using this provider, go look for this data in the Terraform state and map it back to an Ansible inventory, so I can run some plays against it.
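That per-provider lookup can be sketched as a small table plus a state walker. The AWS attribute names below are the ones mentioned in the talk; the DigitalOcean names are my assumption, and the state JSON is heavily simplified compared to a real terraform.tfstate:

```python
import json

# Per-provider nomenclature for the same pieces of data.
# AWS names come from the talk; DigitalOcean names are an assumption.
PROVIDER_MAP = {
    "aws": {
        "resource_type": "aws_instance",
        "remote_user": "admin",
        "ip_attr": "public_ip",
    },
    "digitalocean": {
        "resource_type": "digitalocean_droplet",
        "remote_user": "root",
        "ip_attr": "ipv4_address",
    },
}

def state_to_inventory(state_json: str, provider: str) -> dict:
    """Walk a (simplified) Terraform state and build an Ansible-style inventory."""
    cfg = PROVIDER_MAP[provider]
    state = json.loads(state_json)
    hosts = {}
    for res in state.get("resources", []):
        if res.get("type") != cfg["resource_type"]:
            continue
        for inst in res.get("instances", []):
            ip = inst["attributes"][cfg["ip_attr"]]
            hosts[ip] = {"ansible_user": cfg["remote_user"]}
    return {"all": {"hosts": hosts}}
```

Adding a new cloud then means adding one dictionary entry, not new parsing code, which is the config-over-code point being made here.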

Great. Now I have a valid Ansible inventory and all the resources built, but I need to run some plays against them and actually configure the software on each system, based on what's defined in the CLI or maybe in my config file. I started with literal base configuration, and I can then target the different host groups I've defined via the CLI and install different software based on the needs of each specific resource. For literally just prepping a baseline server, I might want to: create user accounts as defined in some sort of config file; allow SSH key auth; disable password auth (because I care about opsec); install a host-based firewall, UFW; and deny all traffic except 22/TCP from the red team's public IP addresses. That last part is important because I didn't want to lean on any cloud provider's firewall implementation. In AWS we have the concept of security groups, but if I'm deploying cross-cloud, I now have to manage all these different firewall implementations across all the clouds; with a host-based firewall, I can do everything on the host and not have to worry about it. Once that's configured, I can install something like Nebula (I'll talk about that) and Filebeat, which is my Elastic Stack logging piece; I won't get into that here, but there are other ways you can research exactly how that implementation is done.

Then I started asking myself the question: what if I want to install a very custom type of software on the server, but I have conflicting dependencies, or maybe port collisions? Say I have an HTTPS server and I want to install Cobalt Strike. Now resources are being hogged by different software: Cobalt Strike might be looking for 443, and Apache is looking for 443. So I started asking myself: how can I manage installable software using

a single config file, and get some more flexibility from that? That's where I answered my question: yes, I can. It's called Docker and Docker Compose. This, again, was a totally new concept to me, so don't butcher me if the implementation isn't perfect; it works for us, and I know the understanding of Docker and Docker Compose is sound for the use case I'm looking at. If you're not familiar with Docker and Docker Compose, let me talk to it a little. Docker is software that allows you to build software packages into standardized units called containers, which gives you a build-once, run-anywhere mindset: containers run the exact same way no matter where they're deployed. So now I don't have to worry about whether Cobalt Strike will run in AWS, or on Debian 11 in AWS, or whether it will have issues there versus DigitalOcean, et cetera. On top of that, Docker Compose is a tool for defining and running multiple containers: you can turn really complex docker commands into a configuration file and then use one single docker command to build all the things, with all of your configuration inside a Compose file. Any time we can take a really complex command and turn it into a config file, we're happy, right?

I'm not going to dive into exactly how we do this; I'll dive into the implementation I use inside Terry, but I won't talk about how you build containers or a CI/CD pipeline for them. However, I'll give you the subtle plug: I do have a blog, and if you're ever interested in taking the dive of why a red team should containerize and why it's valuable, I have a series on exactly those things, including how to build a CI/CD pipeline for your container images. So now you just have container images that work, and when anyone new comes on board and you say "go play around with Cobalt Strike" (I'm sure you've told every new red teamer that), they don't have to fight with dependencies or with installing stuff; they just pull down a container image and it runs. Everyone's happy.

Cool, so back to Terry. What Docker and Docker Compose now allow us to do: the original docker command to spin up something like demo C2 (a C2 framework that we've used against macOS) is a really complex docker run command, and we can convert it into a config

file; now, no matter what we have in that config file, we use one single command to deploy all the things, and that's great. The way I implemented that with Docker Compose and Terry, we now have a way to deploy multiple containers onto our servers using a single config file. And not to mention, we get an extra added benefit here: we have the port mappings, because the config file states exactly which ports we expose. I can go look at that Docker Compose file and say: oh look, I have 443 open; I probably need to open that up to my team. That gives me a lot of power, because I can now programmatically set the host-based firewall based on the Docker Compose file, and that's an awesome benefit. What that looks like in practice is that Terry can have a master Docker Compose file with all possible containers we can deploy, and then in the CLI I can programmatically parse that Compose file and say: oh, I see Cobalt Strike; that's probably an option I want to expose to the user as an argument they can pass in. Now, this is where I'm going to ask someone in the audience to help me understand what is wrong with this picture, because there is something wrong.
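The port-mapping trick just described can be sketched in a few lines. The Compose content is shown as an already-parsed dict to stay stdlib-only (in practice you would `yaml.safe_load` the file), and the service names and ports are illustrative:

```python
# The master compose, already parsed into a dict (in practice: yaml.safe_load).
# Services and port mappings are illustrative.
COMPOSE = {
    "services": {
        "cobaltstrike": {"ports": ["443:443", "50050:50050"]},
        "sliver": {"ports": ["8443:443"]},
    }
}

def exposed_host_ports(compose: dict, enabled_services: set[str]) -> set[int]:
    """Host ports to open in the host-based firewall for the services we deploy."""
    ports = set()
    for name, svc in compose.get("services", {}).items():
        if name not in enabled_services:
            continue
        for mapping in svc.get("ports", []):
            host_port = mapping.split(":")[0]  # short syntax "HOST:CONTAINER"
            ports.add(int(host_port))
    return ports
```

The same parse that discovers which services exist can therefore also drive the UFW rules, so the firewall and the deployment can never drift apart.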

If this is my config file for all of Terry, for every single thing that gets deployed (and I know this is probably an edge case for some of you red teamers), consider that when you deploy Cobalt Strike, you need to pass runtime arguments into the team server to configure it to run a certain way. So what's wrong with this as a config file for Terry? Can anyone answer that? There are prizes on the line.

(An audience member answers.)

Thank you. Exactly that, and this IP address, right, the container... and what kind of prize do you want? Lock pick set? Thank you. You're exactly right: we have runtime arguments that are going to change based on where you are, and we don't necessarily want a hard-coded password, because we care about operational security. And this 8.8.8.8, an IP address used inside Cobalt Strike to say exactly where the team server lives, we do not want hard-coded either; if we're deploying in a container, the container IP address is going to be different from the host IP address, and that's really important. So the realization here is: we're hard-coding credentials and hard-coding runtime arguments, but we don't want to, because this is a master config file for all of my deployments, ever. And, I don't know if anyone can see where I'm going with this, but we've already covered a topic we can reuse here: templating. There you go. Inside Ansible, Jinja is already baked in, so we already have a love story: Ansible already uses Jinja. Now I can take the core functionality of Ansible, put Jinja templating inside my master Docker Compose, dynamically render those templates, and deploy them to servers. This is great, but I'm curious: does anyone know the problem with this? We have one single Docker Compose file with all of my possible deployable containers, but a user might only want to deploy a select number of them. If anyone has used Ansible or Jinja in the past, you know the issue. Can anyone answer that?

Okay, this is where the breakup story comes in. If we're not deploying Cobalt Strike to a server but we're trying to render this template, Ansible is still going to read in that template and say: Cobalt Strike password, I want to instantiate this. But what if that's undefined? What if I'm not deploying Cobalt Strike? I'll give you a spoiler: Ansible gets pissed. Ansible will just yell at you. It will not care that you don't want to deploy Cobalt Strike; it will just say you're an idiot, you don't know how to develop, get out. But again, deja vu: I'm a hacker, I can bypass this, I can figure out a different way, and I can rekindle that love between Jinja and Ansible. Instead of using valid Jinja syntax, why not use my own custom syntax? Then, based on which containers the user told me they want to deploy, parse through the Docker Compose, collect only those pieces, write them out to disk, use some regex to parse through the custom templating syntax I had built, and re-import it back into Ansible. That way, I know for a fact that only the containers I really care about are being rendered with valid Jinja. (If you still get yelled at at that point, I hate to say it, but again: play stupid games, win stupid prizes. Trust me, I've done it so many times; I'm at fault for this.) So with this custom templating syntax, where Jinja is curly-brace curly-brace and ours is square-bracket square-bracket, the master Docker Compose uses square brackets for all of our would-be Jinja syntax; then, if Cobalt Strike or a certain C2 is configured for that server, we modify the template syntax using regex and re-import it, and we can do all of that inside Ansible. This is super cool.
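Here is a minimal sketch of that escape-and-unescape trick: split the master file into per-service blocks, keep only the requested ones, and regex the square-bracket placeholders back into Jinja. The service and variable names are illustrative:

```python
import re

# Master compose with custom [[ ]] placeholders instead of Jinja's {{ }}.
# Service and variable names are illustrative.
MASTER = """\
cobaltstrike:
  command: ./teamserver [[ team_server_ip ]] [[ cs_password ]]
sliver:
  command: sliver-server daemon
"""

def select_and_unescape(master: str, wanted: set[str]) -> str:
    """Keep only the requested services, then turn [[ var ]] back into {{ var }}."""
    blocks = re.split(r"\n(?=\S)", master)  # split on top-level (unindented) keys
    kept = [b for b in blocks if b.split(":", 1)[0] in wanted]
    text = "\n".join(kept)
    # Restore real Jinja delimiters so Ansible will render these variables
    return re.sub(r"\[\[\s*(\w+)\s*\]\]", r"{{ \1 }}", text)
```

Because Ansible never sees `{{ }}` for a service that was filtered out, there are no undefined variables left to make it yell.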

This gives us a lot of awesome functionality and a lot of flexibility in how we want to build our red-team infrastructure pipeline. But again, as was pointed out earlier, what about those containers that need specific files or commands to run? And yes, I'm looking at you, Cobalt Strike. I think that was partly answered already, but can anyone else in the crowd tell me how we might do that?

(Audience member: a script file, located...)

Yes, you could do that. Not exactly the approach I had, but I'll give you a point for it anyway; you get to pick which prize you want. So yes, exactly that; I went about it a different way, but you can do it that way too. What I decided to add inside this master Docker Compose is an arbitrary block for pre-run commands and post-run commands, because some containers, depending on what you're dealing with, might need you to run commands before the container boots and after the container boots. I'll give you the example of Sliver C2: if you want people to be able to connect to the container, you need to first start Sliver and then generate the configs so that all the users can connect, because the keys generated for those configs depend on Sliver already being started. So within this custom Terry Docker Compose, we have pre-run-commands and post-run-commands blocks. These blocks contain valid Ansible tasks that run before and after container start, and they too contain the escaped Jinja syntax we talked about earlier, so we know that only the defined things will be rendered at time of import. Now, this is probably a bit more of an edge case, but how can I connect all of my infrastructure together when not all of it is directly connected to the internet? If you remember back to my initial dream state, I showed you that there's a server in a private network.
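The pre- and post-run blocks described above might look roughly like this. The key names, image, ports, and tasks are my guess at the shape being described, not Terry's actual schema:

```yaml
# Hypothetical service entry in the master compose file, with the
# pre/post blocks described above. Everything here is illustrative.
sliver:
  image: sliver-c2:latest          # illustrative image name
  ports:
    - "31337:31337"
  pre_run_commands:                # valid Ansible tasks, run before the container starts
    - name: Ensure a config directory exists on the host
      file:
        path: /opt/sliver/configs
        state: directory
  post_run_commands:               # run once the container is up
    - name: Generate an operator config for each teammate
      command: docker exec sliver ./sliver-server operator --name [[ operator_name ]]
```

Note the `[[ operator_name ]]` placeholder: the same escaped-Jinja convention applies here, so these tasks only get real templating if this service is actually selected.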

private Network and that server in the private Network I wanted to be able to communicate with my HTTP redirectors but I don't know how to route to it and I don't want to manage all of my port forward rules at the firewall I just want things to work and this is for some of you overachieving teams that might have a dedicated lab environment who are really kind of crazy about controlling their infrastructure and this was the bucket I found myself in this is where I wanted to control all the traffic that's going to my servers even things that are in a private Network this is where earlier I talked about installing a tool called nebula and this is where I'll

talk about Nebula so Nebula is Slack-developed the team over at Slack came up with this solution and it's a mesh VPN solution that uses the same protocol as WireGuard it's basically a software-defined network and there's a lot of different solutions out there that you might have heard of like Tailscale that'll do the exact same thing but Nebula fit the use case for me the best and what Nebula allows me to do is NAT punching so even firewalled resources inside your private Network can be directly reached from the internet if connected to the mesh so now I have my server sitting in a lab closet back in Cincinnati Ohio

that's able to connect back to all of my infrastructure that might be sitting in AWS across a VPN tunnel and this is great because this also makes sure if any of my traffic is going from DigitalOcean to AWS or maybe back to my private cluster none of the clouds see exactly what traffic is going across that Network they only see the things that come in from my implants and that's great so what this actually looks like is when we study the astronomy I mean Nebula you see the HTTP reverse proxy has a direct internet connection so it has a routable public IPv4 then it connects to a Nebula lighthouse think about that as your router

and that also has a direct internet connection and then I have my C2 server sitting in a private Network that C2 server will initiate the connection outbound to my nebula Lighthouse when it establishes that connection it's a persistent connection so now I can funnel traffic through from my https reverse proxy to a private IP address that routes everything through the mesh I don't have to worry about anything being connected it just works so it seems like I'm sitting almost on the same exact Lan as that C2 server even though we could be four different clouds away right and that gives us a lot of really cool functionality okay that was a lot of me talking about

exactly what the hell this is uh so let's talk more about kind of what the solution is what does it actually look like because I just talked about concepts but I didn't talk about the actual tool so going back to this architecture diagram this is the baseline this is the vision that I had for a tool I wanted something that could do this without having to think about the complexity and can anyone tell me anyone who might have played with different clouds played with different operating systems played with all the things how long do you think and I'll give you one of the prizes for it how long would it take you to manually

go install everything here with all the things listed on the diagram

way too long I love that answer and remember we're lazy we like to be lazy so what I was able to do is take this exact configuration this exact deployment and now I can convert it into a command like this so what we're doing here is we have Terry we're calling the operation super secret operation we're giving it the command of create we're building a server in AWS of type lighthouse so that's our Nebula router on the internet we're building a server in Proxmox which is my private Network that has my dedicated lab environment we're going to install the container Cobalt Strike to it we're going to install the container Sliver to it

we're also going to build a server in DigitalOcean of type redirector with the redirector type being HTTPS with the domain of c2.malware.com registered at GoDaddy pointing to it and then we'll build a server in GCP of type redirector but the redirector type being DNS and the DNS C2 being dns.evil.com registered at Namecheap right so that takes really this massive problem and it boils it down into this syntax that's palatable for any level person in a red team right so now I can enable my team to say I don't want to focus on having to configure all the software I want to go build malware or I want to go research TTPs I want to go do

CVE research now they have a CLI so they can go deploy all the things to enable them to go do that research without having to worry about OPSEC without having to worry about getting caught by other blue teams without having to worry about any of the things that you might have to worry about as a red teamer you just have a syntax that's easily understandable and you feel enabled to go do the things so I was going to do a live demo but I didn't really want to have to pray to the demo Gods so I'll do a quick demo with some gifs and kind of show you what this looks like in actuality it's not

the sexiest thing in the world but it's still cool to show off that it does work and it is a tool that you can go use and download off GitHub today and go use the thing so also a quick side note I have not implemented all the providers I've been working on bug fixes and making sure that the core functionality works so not all the providers used in that initial architecture diagram are the ones I use in the demo but the same concept applies so how it starts and I realize this is very small so I'll kind of walk you through it how it starts is the user inputs exactly what they're looking for Terry will parse through all the config

files and make sure that everything that you're requesting is something that we have configured so if you're requesting AWS make sure that you have the AWS template and you can actually render that make sure that you have all of the valid credentials needed for each cloud provider and then honestly if none of those things are found so if you can't find the required credentials in the CLI arguments or you can't find it in the environment variables and you can't find it in the config the last resort effort is to prompt the user via standard input say I don't have this thing give me the thing that you asked for because you haven't given it to me yet
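The lookup order just described (CLI arguments, then environment variables, then the config file, and finally prompting the user) can be sketched in a few lines of Python; this is my own illustration of the pattern, not Terry's actual code, and the names here are made up:

```python
import os

# Illustrative sketch of a layered credential lookup, as described above.
# Not Terry's real implementation -- function and field names are guesses.
def resolve_credential(name, cli_args, config, prompt=input):
    # 1. An explicit CLI argument always wins.
    if cli_args.get(name):
        return cli_args[name]
    # 2. Fall back to an environment variable of the same (uppercased) name.
    if os.environ.get(name.upper()):
        return os.environ[name.upper()]
    # 3. Then check the parsed config file.
    if config.get(name):
        return config[name]
    # 4. Last resort: ask the user on standard input.
    return prompt(f"{name} was not found anywhere, please enter it: ")
```

For example, `resolve_credential("aws_access_key_id", {}, {})` would fall all the way through to prompting if `AWS_ACCESS_KEY_ID` isn't exported in the shell.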

so after Terry validates the request it will then go through and build out my Jinja templates into valid Terraform and Terry will go build the Terraform resources so Terry does this all automatically and you actually have the option to auto-approve anything that's built with Terry but if you want you can just manually type in yes and hit enter and then what that does is exactly what you expect Terraform goes and interacts with all the clouds builds all your things builds the DNS records having everything be dynamically referenced based on what you were asking for and then after everything's built Terry will read through the Terraform state of everything that was built and

then start to kick off some Ansible plays against those hosts so everything gets configured and also we can generate all of our Nebula certificates so that we have everything interacting inside of that private mesh VPN securely and fully encrypted and we do that all through Terry and then we can go deploy all the configuration and all the certificates onto the servers so that we have that mesh VPN setup and everything just works so now I can route things between the private subnet and everything's happy and then my favorite part is my team is very lazy and we like to be lazy because we like to be malware developers right and after building these things we

don't want to have to copy and paste and say oh well here's all the things we built so what I did is I added a little cherry on top of a Slack integration so after everything's built we can have Terry send a message off to your team in Slack saying exactly what was built and give you all the information you need to know as a red teamer to then have all the data to go build your payloads right so exactly what DNS records were created what's the provider what's the Nebula IPv4 what's the public IPv4 what containers were installed basically all the data you might need to know as a red teamer to go do the things
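A post-deploy notification like the one described could be assembled from the build results roughly like this; the message shape and field names are my own illustration (not Terry's exact payload), though the `{"text": ...}` body is the format Slack's incoming webhooks accept:

```python
# Hypothetical sketch of the post-deploy Slack notification described
# above -- server field names are illustrative, not Terry's schema.
def build_slack_message(operation, servers):
    lines = [f"*{operation}* is built:"]
    for s in servers:
        lines.append(
            f"• {s['name']} ({s['provider']}) "
            f"public {s['public_ip']} / nebula {s.get('nebula_ip', 'n/a')} "
            f"containers: {', '.join(s.get('containers', [])) or 'none'}"
        )
    # Slack incoming webhooks take a JSON body shaped like {"text": "..."}.
    return {"text": "\n".join(lines)}

msg = build_slack_message("super-secret-operation", [
    {"name": "r1", "provider": "digitalocean",
     "public_ip": "203.0.113.7", "nebula_ip": "192.168.100.3",
     "containers": []},
])
print(msg["text"])
```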

so that's my solution of Terry and I want to give a call to action for you all in the crowd who might have an interest in doing some of this research because my goal as a developer is not done Terry's a good start it gives me a lot of the core functionality that I want as a red teamer to go build things without having to worry about the complexity of red team infrastructure but for any of you who are interested in taking the plunge with me I'm always looking for collaborators on the project so reach out to me if you're interested in helping Terry grow and come talk to me after this is a cool opportunity and

there's still a lot more to be done and I have a lot of really cool ideas for where it goes but I am still a red teamer and I still have a day job so I'm trying to kind of work and get more people interested in the project so come up to me and talk to me if you're interested and lastly I want to thank all the contributors who have helped me get here uh one of my old co-workers who worked with me at Fifth Third his name is Andrew Whitmer as I was struggling he wanted me to struggle and struggle and then finally get to a point where I had to reframe the problem and

once I reframed it he then handed me kind of the bones of what he thought was the end goal for a red team infrastructure pipeline so he gave me the bones and I just ran with it so I want to thank him for letting me run with it and kind of coaching me without just handing me a solution and not to mention the couple of GitHub contributors I do have on the project already it would not be possible without y'all and also I had to do it so don't hate me if I actually am not supposed to do this but I'm going to do it anyway one of the cool things about where I

work is I am allowed to go do this research I have a lot of great leadership who I told them I want to be lazy I want to go learn all the things with Cloud but I want to do it during work hours and Leadership was like hell yeah go for it and I want to say it's been awesome to work with them and we are always hiring we're always looking for new people new Talent so if any of you all are like interested in taking the plunge deeper into cyber and you want to do it from a financial perspective come talk to me we're always hiring and also you can even just go look at our careers page but I had to do

it that's pretty much it I want to say thank you a special thanks to all the people who have helped me with this solution and the feedback again it's not possible without you and thanks to all the BSides people for putting this on and letting me talk uh I'll be honest this is my first presentation I've ever given at a cyber conference so kind of cool that people were reaching out to me and supporting me and going through this journey but if you want to talk to me just on the internet you can contact me on Twitter at BuckinghamEzra you can contact me via email ezra.buckingham at gmail or you can go to my blog

ezrabuckingham.com and that's it thanks everyone oh I love questions

that's me uh

yeah got it

burn you want to set another one up can't it do that dynamically so the question was if I have infrastructure that gets burned after I've built it how can I modify my infrastructure to address that so yes kind of so inside of Terry I did think of what happens if there's scope creep so what if you say I want to go build a new redirector but you might want to keep some existing ones alive I don't have it here but there is an ability inside of the tool to instead of running that create command that I had earlier let me see I can pull it up real quick somewhere stupid transitions cool so you can see that create command

instead of create you do add and you have the operation being the existing operation that you've already built and then you can add redirectors and you can even point them to resources you've already created so cool enough like what happens is every time you create a server it's assigned a random server name and that server name can then be referenced by other resources so if you want to in that redirector config you want to say okay that DNS redirector I want that to point back to my team server I can name this team server something like servery-mc-serverface and then down here I can point the redirector to servery-mc-serverface and then that

inside of Ansible will automatically map those two things together and say okay if there's a Nebula IP I want it to route to that Nebula IP but if it's a public IP just route the public IP so most of the configuration is done so if you have things that get burned absolutely you can spin up new infrastructure but you don't have the ability to deploy select pieces of resources because Terraform might not like that and I figured managing that would be kind of a PITA but good question any other questions yeah
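The Nebula-IP-else-public-IP routing choice just described boils down to a one-line preference rule; this is a sketch of the idea with made-up field names, not the actual Ansible logic:

```python
# Sketch of the routing decision described above: when one resource
# references another by its random server name, prefer the Nebula mesh
# IP and fall back to the public IP. Field names are illustrative.
def upstream_address(servers, name):
    server = servers[name]
    return server.get("nebula_ip") or server["public_ip"]

inventory = {
    "servery-mc-serverface": {"public_ip": "203.0.113.9",
                              "nebula_ip": "192.168.100.4"},
    "bare-redirector":       {"public_ip": "198.51.100.2"},
}
print(upstream_address(inventory, "servery-mc-serverface"))  # 192.168.100.4
print(upstream_address(inventory, "bare-redirector"))        # 198.51.100.2
```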

your container yes that's a good question so the question was I had Docker and Terraform sitting side by side working independently why didn't I just use the Kubernetes functionality inside of Terraform and to answer your question I wanted to have kind of two separate worlds I didn't want to have Terraform baked in with the host configuration I wanted to have Terraform just go build the things and then have Ansible configure the things right so now I have a separation of resources because there might be some sort of need in the future to say I want to rip out Terraform and I want to build something else or I want to use something else to build all my resources

right now I can do that and I have everything kind of siloed into their own solution so let's say tomorrow I wake up like I had a boss he told me why the hell are you using Ansible go use Salt and now with this solution I can say okay I can rebuild everything in Salt but like why would I there's no real competitive advantage to do that but I could and I don't have to rip all the implementation out I just have every little siloed instance to go do the things but that's a good question
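The handoff across that separation is essentially "read what Terraform built, feed it to Ansible as inventory". As a toy illustration of that boundary (the structure here is heavily simplified from the real tfstate format, and none of this is Terry's code):

```python
import json

# Toy illustration of the Terraform -> Ansible handoff described above:
# parse a simplified Terraform-style state document, collect the host
# addresses that were built, and return them as an inventory list.
def hosts_from_state(state_json):
    state = json.loads(state_json)
    hosts = []
    for resource in state.get("resources", []):
        for instance in resource.get("instances", []):
            ip = instance.get("attributes", {}).get("public_ip")
            if ip:
                # Only compute resources carry a public_ip attribute here;
                # DNS records and the like simply get skipped.
                hosts.append(ip)
    return hosts

example = json.dumps({"resources": [
    {"type": "aws_instance",
     "instances": [{"attributes": {"public_ip": "203.0.113.10"}}]},
    {"type": "aws_route53_record", "instances": [{"attributes": {}}]},
]})
print(hosts_from_state(example))  # ['203.0.113.10']
```

Because the configuration layer only ever consumes this flat host list, swapping Terraform out for another builder (or Ansible out for Salt, as in the anecdote) only changes one side of the boundary.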

I'm not sure so I'll ask a question or I'll repeat it so one of the things that he asked was as a security professional and as a security researcher what are the things that I'm worried about as threats against our nation I don't know if I have a good answer I'm still like I said I started not too long ago so I'm still trying to figure out the threat landscape um I think that's something that I'll probably be better equipped to answer later but I don't want to give you an answer without having a better understanding of the threat landscape

yes the question was how does this apply to people or organizations that have VPCs or different resources across different regions in AWS to answer your question yes so the whole reason that I wanted to do all of this research was oftentimes my threat intel team they'd come to us and say oh well you guys are going to try to emulate FIN7 right and I'd say yep we are and then I'd go ask them what are the hosting providers or what are the different infrastructure or where do they deploy their infrastructure I wanted to have a pipeline to go mirror that of the FIN7s of the world or the other APT groups and I wanted to do it in a

configurable way and this is the solution I came up with to do that so I can emulate uh FIN7 infrastructure or I can go emulate UNC2529 infrastructure or do that all using a single config file or using a single CLI syntax instead of having to go figure out how all the different providers work and now I have an extensible framework to go do that you're welcome any other questions I almost don't want to call on you all right Dan

very Centric downside so the question was given the fact that Terry is geared towards red teamers do I have any plan for developing Terry-centric documentation for the things that are built is that correct okay you just made me yeah

yeah I'm still trying to work through that because there's a lot of different solutions here like we're talking about Ansible we're talking about Terraform we're talking about Docker we're talking about Nebula we're talking about Elastic right there's all these things that you can dive into I'm going to try to cover those things in my blog as far as documentation for resources that already get deployed because of the unique state of Terry and building things on top of Terraform I don't want to steal the thunder away from the Terraform state and basically all the self-documentation that comes along with that but I do have a custom I'll call it Terry-form state that I build across all of these

resources so I can say exactly what Nebula IP was already used what's the Nebula cert what are all the things that I need to know if I want to add to the deployment or if I want to add things to it but as far as general documentation I do have some pretty good documentation on the GitHub I'm still working on like how that actually manifests because there's so many different things I can document more to come and he's on my team so that's why I'm giving him crap
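For illustration, one record in that extra bookkeeping layer might look something like this; every field name here is a guess at the shape being described, not Terry's actual schema:

```python
# Hypothetical record in the custom "Terry-form" state described above --
# per-server bookkeeping layered on top of Terraform's own state.
# All field names and values are illustrative guesses.
server_record = {
    "name": "servery-mc-serverface",   # random name assigned at creation
    "provider": "aws",
    "public_ip": "203.0.113.10",
    "nebula_ip": "192.168.100.2",      # mesh address already handed out
    "nebula_cert": "certs/servery-mc-serverface.crt",
    "containers": ["sliver"],
}
print(server_record["nebula_ip"])  # 192.168.100.2
```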

um

yep wanted to transition more over to red teaming so I was going to ask you um what would you recommend for someone who doesn't have any red team experience who's wanted to yep so the question was I'm in the military I'm in more of a blue team focused role how do I break into red teaming since I'm interested in that I have a lot of feelings about how people break into the industry as a whole the answer that I like to give people is find a mentor find someone who really cares about you as a person make them I'm going to say it but probably get in trouble for saying it make someone give a [ __ ] about you

right and if you get someone if you find that person they will help coach you into the places that you want to go and if you if you have that person you'll never be unemployed the rest of your life because that sort of person will help make sure that you're going where you want to go and they're invested in you and that's how I'm going to answer that question I know that doesn't help you with the the exact like red team how you break into those roles but having a mentor is going to help you learn from why why you might not get into a red team role sooner rather than later and having that support system is more

important than getting into the role that you really want

that's great advice would you maybe share a story about how someone poured into you how you made someone care about you and invest in you sure uh so the question was how did I find someone who invested in me so I started off my career I did kind of freelance consulting in college I graduated back in 2020 so I have kind of a different background um did some consulting started getting really I'll call it well versed in different roles like data analytics business intelligence web development all that background and then I went into a leadership program because I didn't really know where I wanted to fit in but it was an IT focused leadership program and I went into that thinking I don't

know what I want to do I want to figure that out and use this as a means to do that through that program I kind of had a support system in place but I wanted to branch out even more because once I started feeling like cyber was the way to go I found that one person in cyber who was investing in early career pipeline people and I just started asking them questions right like barraging him with questions and he has been my mentor ever since because I just started asking him questions and then he and I developed a relationship kind of a personal relationship and now I go to

him with any sort of question so it started off just be curious you ask people enough questions and they'll start to be like oh well you know what this person's curious enough I might want to invest in them I might want to cultivate that passion that I see so really I have a lot of feelings on this topic so if anyone wants to talk to me about it after I'm happy to talk through it but I would say be curious go ask people questions go take initiative and that will set you up for success